Improved hardware utilization, simplified use, and uninterrupted operations (even during upgrades): a smoothly run data center is more efficient. That is essential for data center operators and also benefits customers. Depending on the cloud solution, different technologies are relevant.

Just 10 years ago, data center architecture was still based on separate servers, each of which contained individual applications. This arrangement usually resulted in poorly utilized systems that were configured for peak loads and saw little action the rest of the time. One example is a company’s e-mail server. It typically sees heavy use in the morning, when employees arrive at work and go through their e-mails.

To ensure that employees face no wait times when working through their e-mails, the server has to be configured for this peak load.

The rest of the day, the server’s utilization is generally less than 25%. Now imagine that this is the case at a company with several business locations across Europe. The company could spare itself the cost of having a large e-mail server at each of its branches in Russia, Germany, and Ireland – if only it could improve utilization. The same problem, with even greater business impact, applies to ERP and accounting systems, which have to be configured for peak loads during seasonal business periods or when quarterly and annual financial statements are prepared. Most of the year, though, this capacity is left untapped.

Not only is this costly, it also makes hardware maintenance and replacement difficult, because the resulting operational disruptions have to be carefully planned. In addition, business-critical applications require backup systems, which are likewise designed to handle peak loads and remain unused the rest of the time.

Virtualization

The key technology that can help eliminate these problems is called virtualization. It is based on the principle of decoupling the hardware from the operating system and applications. Virtual machines (VMs) are installed on the physical servers and share the hardware environment. In this way, the hardware no longer has to be sized for the peak load of each individual application. Instead, the entire server and storage pool is available to all of the company’s applications.
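To make the shared-pool idea concrete, consider the following minimal sketch in Python. The PhysicalServer and VirtualMachine classes and all capacity figures are purely illustrative assumptions, not an actual hypervisor interface:

from dataclasses import dataclass, field
from typing import List

@dataclass
class VirtualMachine:
    """Sized for its application, independently of the physical hardware."""
    name: str
    cpu_cores: int
    ram_gb: int

@dataclass
class PhysicalServer:
    """A physical host whose CPU and memory form a shared resource pool."""
    cpu_cores: int
    ram_gb: int
    vms: List[VirtualMachine] = field(default_factory=list)

    def place(self, vm: VirtualMachine) -> None:
        used_cpu = sum(v.cpu_cores for v in self.vms)
        used_ram = sum(v.ram_gb for v in self.vms)
        if used_cpu + vm.cpu_cores > self.cpu_cores or used_ram + vm.ram_gb > self.ram_gb:
            raise RuntimeError(f"No free capacity in the pool for {vm.name}")
        self.vms.append(vm)

# Three e-mail servers for the branches share one physical host.
host = PhysicalServer(cpu_cores=32, ram_gb=256)
for vm in (VirtualMachine("mail-ru", 8, 64),
           VirtualMachine("mail-de", 8, 64),
           VirtualMachine("mail-ie", 8, 64)):
    host.place(vm)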

Take the aforementioned company with branches in Russia, Germany, and Ireland as an example. The three virtual e-mail servers could run on shared hardware, and since the employees, working in different time zones, would access their e-mails at different times, a single physical server could be sized to handle the peak load of each of the three e-mail servers in turn.

The average utilization of the underlying hardware thus increases compared to dedicated individual servers. While resources for individual systems are sized for the maximum expected load, with shared use of a hardware pool one can assume that not all virtual systems will reach their maximum loads at the same time.
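A back-of-the-envelope calculation makes the effect tangible. The figures below are assumed purely for illustration:

# Illustrative figures only: each e-mail server is sized for a peak load of
# 100 units but averages just 25 units over the day.
peak_per_server = 100
avg_per_server = 25
servers = 3

dedicated_capacity = servers * peak_per_server                           # 300 units provisioned
dedicated_utilization = servers * avg_per_server / dedicated_capacity    # 0.25

# In a shared pool, the peaks fall in different time zones, so the pool only
# has to cover one peak plus the background load of the other two servers.
pooled_capacity = peak_per_server + 2 * avg_per_server                   # 150 units provisioned
pooled_utilization = servers * avg_per_server / pooled_capacity          # 0.50

print(f"Dedicated: {dedicated_capacity} units at {dedicated_utilization:.0%} utilization")
print(f"Pooled:    {pooled_capacity} units at {pooled_utilization:.0%} utilization")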

This set-up has several advantages:

  • Immediate reduction of hardware costs because system utilization is substantially improved.
  • Simplified hardware maintenance because virtual machines can be moved to other servers within the server pool. This allows redundant hardware components and entire servers to be replaced without disrupting operations.
  • Better investment protection since hardware no longer always has to be state-of-the-art; instead, it can be retrofitted as needed.
  • Greater flexibility since new virtual server systems can be set up within minutes. In comparison, configuring and setting up a physical server takes several hours if not days.
  • Increased availability and reliability, because virtual machines can be restarted on another physical server in the pool if a hardware malfunction occurs, without significant interruptions.
  • Mobility of virtual machines: Individual virtual machines can be moved from one hardware pool to another.

Please note that SAP HANA servers are typically not virtualized, for technical reasons.

Adaptive Computing

While virtualization decouples the hardware from the operating system, adaptive computing decouples the operating system from the applications. Normally, virtual machines must be shut down to be rescaled or to have their operating systems updated; adaptive computing, by contrast, makes it possible to change the operating system and the virtual machine while applications keep running, with minimal downtime.

To make this happen, one must first prepare a new VM with the required specifications. This could mean, for example, increasing storage and processor performance, upgrading the operating system, or replacing an oversized VM with a smaller one. After this preparatory step, the application is simply “moved” from the old VM to the new one.

Another benefit of this process is increased reliability: if the new system does not function properly, one can always fall back to the previous VM, which has not yet been removed.
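The sequence can be summarized in a short sketch. The helper functions (provision_vm, move_application, is_healthy) are hypothetical placeholders for whatever tooling a data center actually uses; the point is merely the order of operations and the fallback path:

def adaptive_move(app, old_vm, new_spec, provision_vm, move_application, is_healthy):
    """Prepare a new VM to the required spec, move the application over, and
    keep the old VM as a fallback until the new system is verified."""
    new_vm = provision_vm(new_spec)                        # e.g. more CPU/RAM or a newer OS
    move_application(app, source=old_vm, target=new_vm)    # the brief "move" step
    if not is_healthy(app, new_vm):
        # Fall back: the previous VM has not been removed yet.
        move_application(app, source=new_vm, target=old_vm)
        return old_vm
    # Only now would the old VM be decommissioned (details left to the real tooling).
    return new_vm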

Multi-Tenancy

While virtualization and adaptive computing in a company’s own data center ensure that business applications run more efficiently and cost-effectively, cloud providers face additional requirements. Their main concern is the ability to run the applications of many different customers simultaneously on their virtualized data center architecture.

To achieve the greatest economies of scale, cloud providers run many customers on one application instance, typically on a virtualized server and storage infrastructure. Each customer is assigned a tenant, which is comparable to a client in on-premise applications. Multi-tenancy means that many customers can be served on one instance. Depending on the application’s size and the cloud software’s requirements, one system can accommodate more than 100 tenants.

In the SAP HANA Enterprise Cloud, systems typically have sizes that do not allow for multi-tenancy, so each customer is provided with its own system.

For smaller applications, such as most line-of-business cloud solutions, the system is configured in such a way that the tenants of the individual customers are kept completely separate in a logical sense. A given customer is thus not aware of the others and cannot access anyone else’s data or functions. Besides the customer tenants, each system also has individually configured administration tenants.
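The logical separation can be pictured with a minimal sketch in which every record and every access is scoped to a tenant. The SharedInstance class and the tenant and order data are purely illustrative assumptions, not SAP internals:

from collections import defaultdict

class SharedInstance:
    """One application instance serving many tenants: every record is keyed by
    tenant, and every access is scoped to the calling tenant."""

    def __init__(self):
        self._orders = defaultdict(list)   # tenant_id -> that tenant's records

    def add_order(self, tenant_id: str, order: dict) -> None:
        self._orders[tenant_id].append(order)

    def list_orders(self, tenant_id: str) -> list:
        # A tenant only ever sees its own slice of the shared data.
        return list(self._orders[tenant_id])

instance = SharedInstance()
instance.add_order("tenant_a", {"id": 1, "item": "laptop"})
instance.add_order("tenant_b", {"id": 1, "item": "printer"})

assert instance.list_orders("tenant_a") == [{"id": 1, "item": "laptop"}]
assert instance.list_orders("tenant_b") == [{"id": 1, "item": "printer"}]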

One can easily see that such multi-tenant environments, as operated by cloud providers, achieve significantly greater economies of scale and can be run more cost-effectively than a data center within a single company. Whether upgrades, patches, or hot fixes are involved, these activities are carried out for all customers in one process, significantly reducing the cost per customer. Cloud providers generally pass these savings on to their customers.

As a result, customers not only become more flexible, they also benefit from low subscription prices and save on hardware, operating, and maintenance costs associated with the application.