This abstract was originally published on TechTarget as “What’s behind cloud providers’ push for custom hardware?” by George Lawton on SearchCloudComputing.
Initially, public clouds were built on generic hardware to cut costs and operate at massive scale — but that’s changing.
A shift toward highly scalable AI and machine learning workloads, as well as IoT and analytics applications, is driving cloud vendors to consider new architectures. Legacy chip and hardware manufacturers are attempting to bring these capabilities to market. But the major cloud vendors have increasingly taken matters into their own hands because those manufacturers can’t keep up with their needs.
Cloud providers have capitalized on the demand for innovative software models and platforms that can support large data volumes. This has been the main driver behind the move toward custom cloud hardware and hardware-based features.
“There’s also a much higher demand for increased computing power at lower costs, which fuels hardware innovation from public cloud providers as much as new software services,” said Jeff Valentine, CTO at CloudCheckr, a cloud management platform.
As cloud consumption grows, public cloud providers can only operate efficiently in one of two ways: either they shoehorn commodity hardware into their data centers to try to accommodate their unique needs, or they design and develop hardware internally instead. Public cloud vendors are using custom hardware to improve availability, performance, security and cost, Valentine said. And a more secure and reliable infrastructure could ultimately attract and retain more customers.
In the early days of cloud, among the first issues providers ran into were density and cooling. Data center space was expensive, and cooling was a big concern. Providers mounted motherboards onto racks and ran specialty fans across them to cool everything appropriately.
“We’ve advanced a lot since then, but public cloud providers haven’t stopped trying to squeeze out every efficiency they can,” Valentine said.
The focus today is mostly on how to operationalize the infrastructure. If Microsoft, Amazon or any other cloud provider can make its infrastructure super-efficient, it can theoretically pass those savings on to customers through lower prices.
But cloud data centers operate very differently from the typical enterprise facility, which presents unique challenges for vendors. For example, commodity hardware allows firmware to be updated through software, but shared-use servers must be specifically configured to disallow that. Instead, these vendors must roll out updates only when they can safely be provisioned to the hardware BIOS. It’s a pain for public cloud staff, Valentine said.
As a result, AWS developed the Nitro security chip so that firmware can be updated by AWS, and only AWS. This saved AWS time and effort, but these types of behind-the-scenes efforts will largely go unnoticed by customers, at least directly.
“The reality is that most customers will only notice the cost,” said Valentine.
Read the full article on SearchCloudComputing.