Elastic computing
Elastic Computing is an approach to harnessing the software and hardware computing resources available to information technology organizations. It aims to make massive, internet-scale applications feasible at a total cost of ownership lower than a traditional data center or a hardware-only cloud deployment can achieve. Elastic Computing also aims to improve the application development lifecycle by proposing a methodical, disciplined approach to the design and deployment of complex, heterogeneous systems. Applications can be developed more quickly and can more readily accommodate future changes in policy and architecture, freeing developers to focus on building applications rather than on specifying the underlying platform.
Relationship to Utility Computing
Elastic Computing builds on Utility Computing while addressing its shortfalls. In essence, it is a synthesis of concepts that emerged in enterprise computing over the preceding decades, adding the components needed to deliver a true on-demand application computing stack at pay-per-use prices. Key benefits include:
- Scalability of both infrastructure software and hardware resources, up and down, to manage the peaks and troughs typical of real-world application usage patterns (see the autoscaling sketch at the end of this section). In addition, automated deployment, scaling and monitoring of this infrastructure allow application developers and supporting IT personnel to focus on higher-value areas of the application life-cycle.
- Decoupling the application design specification from the deployment specification to (a) drive best practices in system architecture and design, (b) enable experimentation with system design and testing as a whole, (c) allow the application to leverage software improvements as they emerge, and (d) simplify support for versioning and application migration (a sketch of this separation also appears at the end of this section).
- On-demand utility pricing, which gives CPU, storage and bandwidth normalized units of measurement, with charges accumulating only from actual usage.
The net result is that Elastic Computing delivers a seamlessly scalable application infrastructure stack that allows organizations of all sizes to quickly and easily plug their applications into available Utility Computing environments on a pay-per-use basis.
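The scaling behavior described in the first benefit above can be illustrated with a small reconciliation loop. The sketch below is hypothetical: `get_load`, `add_server` and `remove_server` stand in for whatever monitoring and provisioning hooks a real elastic platform would expose, and the thresholds are arbitrary illustrative values.

```python
import time

TARGET_UTIL = 0.60   # desired average utilization per server (assumed)
MIN_SERVERS = 2      # floor: never scale below this
MAX_SERVERS = 50     # ceiling: cost cap

def autoscale(get_load, add_server, remove_server, servers):
    """One reconciliation pass: grow or shrink the pool toward TARGET_UTIL."""
    load = get_load()  # total demand, expressed in "server units"
    desired = max(MIN_SERVERS, min(MAX_SERVERS, round(load / TARGET_UTIL)))
    while len(servers) < desired:
        servers.append(add_server())       # scale up for a traffic peak
    while len(servers) > desired:
        remove_server(servers.pop())       # scale down in a trough
    return servers

if __name__ == "__main__":
    pool = ["srv-1", "srv-2"]
    # Toy stand-ins for real monitoring/provisioning hooks:
    pool = autoscale(get_load=lambda: 6.0,
                     add_server=lambda: f"srv-{time.time_ns()}",
                     remove_server=lambda s: None,
                     servers=pool)
    print(len(pool), "servers after one pass")  # 10 == round(6.0 / 0.60)
```

In practice such a loop would run periodically, so capacity tracks demand in both directions rather than being provisioned once for the worst-case peak.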
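The decoupling benefit can likewise be sketched in miniature. All type and field names here are invented for illustration; the point is only that the design specification and the deployment specification are separate artifacts, bound together at deploy time.

```python
from dataclasses import dataclass

@dataclass
class DesignSpec:               # owned by the application developer
    name: str
    tiers: list                 # e.g. ["web", "app", "db"]
    min_throughput_rps: int     # a performance requirement, not a machine count

@dataclass
class DeploymentSpec:           # owned by the platform/operations side
    provider: str               # e.g. an in-house grid or a utility provider
    instance_type: str
    instances_per_tier: int

def deploy(design: DesignSpec, deployment: DeploymentSpec) -> None:
    """Bind the two specs at deploy time; either can change independently."""
    for tier in design.tiers:
        print(f"{design.name}/{tier}: {deployment.instances_per_tier} x "
              f"{deployment.instance_type} on {deployment.provider}")

# The same design can be re-deployed on a different platform without change:
app = DesignSpec("storefront", ["web", "app", "db"], min_throughput_rps=500)
deploy(app, DeploymentSpec("utility-provider-A", "small-vm", 4))
deploy(app, DeploymentSpec("in-house-grid", "blade", 2))
```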
History
The history of Elastic Computing starts with the development of CPU and storage as the foundation of all computing technology and runs through the rise of Utility Computing, a trend heralded as a revolutionary change in how IT is planned and implemented. The driving force behind each new development has been the need for greater computing efficiency, achieved through optimized system utilization.
CPU & Storage
The hardware layer provides the foundation for all computing technology. Dramatic increases in both CPU power and the capacity of the magnetic drives that feed data to the CPUs have made computers faster, and computing hardware increasingly inexpensive.
In practice, however, this led to over-planned, over-provisioned data centers that consumed excessive power and cooling resources, creating demand for greater system utilization and efficiency. This paved the way for Virtualization.
Virtualization
Virtualization is broadly defined as the abstraction of computing resources. It involves taking a big server (or servers) or a big disk (or disk farm) and breaking these large blocks of resources into smaller, independent, logical devices that can be accessed, administered, manipulated and consumed as independent entities. It addresses the under-utilization of a large machine that runs one set of applications configured in a specific way, by letting fewer of that machineʼs resources remain idle. Virtualization has given the consumer choice in the way existing hardware is used, and has improved usage of computing resources.

However, servers are not always too big. Rather than being split apart, in some instances they are too small to handle a particularly large computing task, and multiple servers must be combined to process the large-scale computing needs of applications such as video transcoding or a Google search. This large-scale computing need led to the development of Grid Consolidation.
Grid Consolidation
Grid Consolidation applies the resources of many computers on a network simultaneously to a given computing task. Grids have been made to scale dynamically by allowing resources to be added to a running grid in a (sometimes) simple and transparent manner. Until recently, grids were usually an “all or none” proposition requiring a large upfront investment in hardware and a re-architecture of existing applications. Eventually, however, vendors began selling the pieces of these large, monolithic, expensive grid-based systems in smaller parts, both as access to multi-tenant applications and as chunks of large, consolidated computer systems.
Utility Computing
Utility Computing leverages virtualized hosting environments and grid computing, with varying degrees of on-demand pricing, to allow users to purchase only the computing power and storage they require. An important element of Utility Computing, the “on-demand” component, has become a popular computing model that bears a close resemblance to the way common, everyday products and services are used. Every time one “demands” that a light be turned on by flicking a switch, or picks up a telephone to hear a dial tone, one is using a “non-computing” on-demand service. True on-demand computing follows the same pricing model. However, many hosted software and hardware services marketed as “on-demand” do not follow utility-based pricing. Instead, they employ a per-user, per-month or per-day unit of measure and frequently require monthly or yearly purchase commitments. True utility computing follows the model that if no computing power is being used, the consumer is not charged; when a demand for computing is fulfilled, the consumer is charged only for the resources actually consumed.
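As a minimal sketch of this pricing model, the following example accrues charges only from metered consumption. The unit prices are made-up illustrative numbers, not any provider's actual rates.

```python
from dataclasses import dataclass

@dataclass
class Usage:
    cpu_hours: float           # normalized CPU time consumed
    storage_gb_months: float   # storage held, in GB-months
    bandwidth_gb: float        # data transferred, in GB

PRICE = {                      # hypothetical unit prices (USD)
    "cpu_hour": 0.10,
    "storage_gb_month": 0.15,
    "bandwidth_gb": 0.12,
}

def bill(usage: Usage) -> float:
    """Charges accumulate only from actual consumption: zero usage, zero cost."""
    return (usage.cpu_hours * PRICE["cpu_hour"]
            + usage.storage_gb_months * PRICE["storage_gb_month"]
            + usage.bandwidth_gb * PRICE["bandwidth_gb"])

print(bill(Usage(0, 0, 0)))      # 0.0  -- no usage, no charge
print(bill(Usage(100, 20, 50)))  # 100*0.10 + 20*0.15 + 50*0.12 = 19.0
```

Contrast this with a per-user, per-month plan, under which the second consumer above and an entirely idle one would pay the same fee.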
External links
- Utility Computing - Meet Amazon Elastic Compute Cloud