For the last decade, Salesforce.com CEO Marc Benioff has promoted his pet idea that traditional application software was destined for obsolescence.
He was a few years early, but Benioff understood computer history better than his detractors.
Most of the hosted on-demand application vendors, or application service providers (ASPs) as we called them back then, crashed and burned. Not only did they burn through money at a frightful clip, but the technology they used was thin, relying on single-tenant, non-scalable computing architectures that left a trail of dissatisfied customers.
In the post-Internet bubble world, however, the proliferation of cheap hardware combined with an abundance of Internet infrastructure created ripe conditions for Benioff and others to figure out how to do it the right way. Oracle, Microsoft and other big software makers weren't in immediate peril, but they caught on to the new reality: more customers were going online to subscribe to programs like customer relationship management software.
With roots in computer clustering and grid computing, the technology that first sprouted during the ASP era of the late 1990s is now the computing topic du jour. There's understandable reason for the excitement, but advocates of cloud computing now have to battle the inevitable hype that attends any major technology shift.
So what is cloud computing? The definition game can lead you down a rabbit hole. After all, isn't the Web itself a form of cloud computing? As Greg Cruey noted, we're all accessing Web pages that reside in the cloud. But the buzz in 2009 about cloud computing isn't so much about a computing architecture as it is about a style of computing.
For the IT world, the promise is a faster, easier, and more affordable way to provision computing resources. Gartner describes cloud computing as a system in which massively scalable IT capabilities are delivered as a service. The important advantage for enterprise customers is that they can scale up and down, depending on how much computing capacity they need at any given moment.
The pay-as-you-go model embraced by Amazon and a host of others is one approach. Another is the platform-as-a-service model embodied by the likes of Force.com from Salesforce and AppEngine from Google. And, of course, there are the myriad end-user apps residing in the cloud that hundreds of millions of people use each day.
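To make the pay-as-you-go idea concrete, here is a minimal sketch of renting and releasing capacity on Amazon EC2 through the boto Python library. The machine image ID, instance type, and counts are placeholders, so treat it as an illustration of the billing model rather than a recipe.

```python
# A rough sketch of elastic, pay-as-you-go capacity on Amazon EC2 via boto.
# The AMI ID and instance type below are placeholders for illustration.
import boto

conn = boto.connect_ec2()  # picks up AWS credentials from the environment

# Scale up: rent a few virtual servers only when the workload demands them.
reservation = conn.run_instances('ami-12345678',      # placeholder image ID
                                 min_count=1,
                                 max_count=3,
                                 instance_type='m1.small')

# ... run the workload on the rented capacity ...

# Scale down: stop paying the moment the capacity is no longer needed.
conn.terminate_instances([i.id for i in reservation.instances])
```

The meter stops when the instances are terminated, which is the whole point: the cost tracks usage rather than ownership.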
"What is cloud computing? " said Tien Tzuo, the CEO of Zuora, a start up which specializes in subscriptions as a service. "Anything where you don't need to own your own physical infrastructure--simply write your code and deploy it on someone else's servers."
Tzuo points to a confluence of factors which have helped usher in the change. Bandwidth is finally everywhere, the security and privacy issues around storing data online don't raise as many hackles among individuals and companies (though they still linger), and the widespread adoption of technologies like open source means that inexpensive hosted software components are now ubiquitous.
You see what that means in the field every day. A company no longer needs to buy software--or a big data center, for that matter. Instead, it can launch applications by choosing among different types of Internet infrastructure, such as AppEngine or Salesforce.
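As a rough illustration of how little scaffolding that takes, here is the sort of minimal Python application Google App Engine accepted at the time; the file name and handler are invented for the example, so read it as a sketch of the deployment model rather than a verified tutorial.

```python
# hello.py -- a minimal Google App Engine handler (Python runtime, circa 2009).
# Uploaded with the App Engine SDK, it runs entirely on Google's servers:
# no machines to buy, rack, or patch.
from google.appengine.ext import webapp
from google.appengine.ext.webapp.util import run_wsgi_app

class MainPage(webapp.RequestHandler):
    def get(self):
        self.response.headers['Content-Type'] = 'text/plain'
        self.response.out.write('Hello from the cloud')

application = webapp.WSGIApplication([('/', MainPage)], debug=True)

def main():
    run_wsgi_app(application)

if __name__ == '__main__':
    main()
```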
That's a big deal in an economic downturn, when many start-ups are simply too strapped for cash to divert money toward buying and staffing their own computing infrastructure.
So how did we arrive here?
Like most technology transitions, this was a gradual evolution, with antecedents in attempts to move beyond EDI toward a world of Internet-scale distributed computing built on Web services. Much of the 1990s was dominated by esoteric debates over alphabet-soup technical standards meant to further this along. Then computer scientists Ian Foster, Steven Tuecke and Carl Kesselman authored a paper on how to extend the clustering concept. (Clustering was a popular IT technique that allowed a system to automatically decide which CPU should run a particular piece of code.) In practice, their road map for the grid was akin to a metered utility service: a company plugs into the electricity grid and pays only for what it uses.
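The utility analogy is easier to see in code. Below is a toy sketch, not anything from the grid literature: it farms independent jobs out to a pool of local worker processes and then computes a hypothetical metered charge for the capacity held during the run. The job, the worker count, and the price are all invented for illustration.

```python
# Toy illustration of the grid-as-metered-utility idea: dispatch jobs to
# whatever CPUs are available, then charge only for the capacity used.
# The per-second rate and the job itself are invented for the example.
import time
from multiprocessing import Pool

RATE_PER_CPU_SECOND = 0.0001  # hypothetical price in dollars

def job(n):
    """A stand-in compute task: sum the first n integers."""
    return sum(range(n))

if __name__ == '__main__':
    workers = 4
    start = time.time()
    with Pool(processes=workers) as pool:        # the pool plays the role of the grid
        results = pool.map(job, [10 ** 6] * 20)  # the scheduler picks a CPU for each job
    elapsed = time.time() - start

    charge = elapsed * workers * RATE_PER_CPU_SECOND  # billed for capacity held, not owned
    print('%d jobs in %.2fs; metered charge: $%.6f' % (len(results), elapsed, charge))
```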
But before the idea could move off the drawing board, lingering infrastructure issues still needed to be resolved. Paul Wallis, the CTO of Stroma Software, has a very good analysis summing this up:
One of the hurdles that had to be jumped with the move from clustering to grid was data residency. Because of the distributed nature of the Grid the computational nodes could be situated anywhere in the world. It was fine having all that CPU power available, but the data on which the CPU performed its operations could be thousands of miles away, causing a delay (latency) between data fetch and execution. CPUs need to be fed and watered with different volumes of data depending on the tasks they are processing. Running a data intensive process with disparate data sources can create a bottleneck in the I/O, causing the CPU to run inefficiently, and affecting economic viability.
Storage management, security provisioning and data movement became the nuts to be cracked in order for grid to succeed. A toolkit, called Globus, was created to solve these issues, but the infrastructure hardware available still has not progressed to a level where true grid computing can be wholly achieved.
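A rough back-of-the-envelope calculation shows why that bottleneck bites. The throughput figures below are invented purely to illustrate the effect Wallis describes, not measurements from any real grid.

```python
# Back-of-the-envelope: how busy a grid CPU stays when its input data
# lives thousands of miles away. Both rates are made up for illustration,
# and fetch and compute are assumed not to overlap.
compute_rate_mb_per_s = 200.0   # how fast the CPU can process data
fetch_rate_mb_per_s = 5.0       # how fast a distant node can deliver it

data_mb = 1000.0
compute_time = data_mb / compute_rate_mb_per_s   # seconds spent computing
fetch_time = data_mb / fetch_rate_mb_per_s       # seconds spent waiting on I/O

utilization = compute_time / (compute_time + fetch_time)
print('CPU busy %.0f%% of the time; idle %.0f%% waiting for data'
      % (utilization * 100, (1 - utilization) * 100))
```

With numbers like these the processor spends almost all of its time waiting, which is exactly the economic-viability problem the grid pioneers had to solve.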
By the early part of this decade, some of those infrastructure issues began to get resolved with the emergence of huge data center services. The cloud implementations adopted by Amazon and others carried forward the grid idea's pay-per-use concept. The model also proved a boon to small developers, who no longer needed to own their own physical infrastructure. They could simply write their code and deploy it on someone else's servers.
Of course, nothing in computing moves in a smooth progression from A to Z. And with the emergence of cloud computing have come calls to standardize both the APIs and the platform services that underlie them. Otherwise, some caution, you run the risk of cloud computing vendor lock-in. Microsoft's Dare Obasanjo put it bluntly in a post on the topic last fall:
The APIs provided by Amazon's cloud computing platform (EC2/S3/EBS/etc) are radically different from those provided by Google App Engine (Datastore API/Python runtime/Images API/etc). For zero lock-in to occur in this space, there need to be multiple providers of the same underlying APIs. Otherwise, migrating between cloud computing platforms will be more like switching your application from Ruby on Rails and MySQL to Django and PostgreSQL (i.e. a complete rewrite).
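The gap Obasanjo describes is concrete even for something as trivial as storing a string. The sketch below uses the boto and App Engine Python libraries of that era; the bucket, key, and model names are placeholders, and the App Engine half only runs inside Google's environment, which is rather the point.

```python
# Storing the same piece of data on two clouds, circa 2009. The two APIs
# share nothing, so moving between them means rewriting the storage layer.

# --- Amazon S3, via the boto library ---
import boto

s3 = boto.connect_s3()
bucket = s3.create_bucket('example-bucket')        # placeholder bucket name
key = bucket.new_key('greeting.txt')
key.set_contents_from_string('Hello, cloud')

# --- Google App Engine Datastore (runs only inside App Engine) ---
from google.appengine.ext import db

class Greeting(db.Model):
    content = db.StringProperty()

Greeting(content='Hello, cloud').put()
```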
If history is any guide, most of these petty disputes will get smoothed over in time, not so much because the vendors will feel compelled to do the right thing but because customers will force them to act in their enlightened self-interest. The more difficult question is how long it will take before businesses regularly tap cloud services to make money. That's when you'll know it's become part of the computing mainstream.