
Revealed: The 7 Hidden Costs Every Public Cloud User Needs to Avoid

The pay-as-you-go model of the cloud is transforming the way organizations need to think about their IT resources. Traditionally, IT infrastructure was a capital expenditure, so you didn’t need to worry about infrastructure costs on a day-to-day basis.

But the cloud has shifted the goalposts: IT infrastructure has now become an operational expenditure. And this calls for a new and different approach to resource consumption, one where you need to be far more cost aware.

Yet old habits die hard. As a result, enterprise cloud users far too often provision more infrastructure than they actually need, leading to unnecessarily high monthly cloud bills.

But correctly sizing instances and provisioned storage is only one side of the story. Other charges often go unnoticed, and you need to be aware of the costs lurking deeper in your monthly cloud bills.

In this post, we look at seven of the most common hidden costs every public cloud user needs to avoid, focusing in detail on the two leading public cloud vendors, AWS and Azure.

1. Data Transfer Charges

Data transfer charges often come as a major shock to new cloud customers when they receive their first monthly bill. And keeping those costs down can present significant optimization challenges.

Transferring data into your public cloud infrastructure is generally free. But transferring it out is another matter.

Outbound transfer charges vary from region to region. So, when choosing where to host your data, you’ll need to strike the right balance between cost and proximity to your customers and users.

In addition, you should consider offloading the work of handling traffic to your vendor’s content delivery network, such as Amazon CloudFront or Azure Content Delivery Network. Not only can this reduce overall data transfer costs, but it can also speed up delivery of your data.

You should also beware of charges for data transfer between regions and zones. This is particularly important when considering the cost impact of a fault-tolerant architecture that replicates data to different locations.

And finally, bandwidth charges may vary according to what services you use. Closely examine your vendor’s charging structure and build the cheapest data transfer routes into your application architecture.

2. Unused Instances

You wouldn’t leave a car running idle on your driveway overnight—so why would you want to leave unused instances running in the cloud?

Quick and easy resource provisioning is one of the key advantages of on-demand pay-as-you-go cloud computing. You can order new infrastructure at the click of a button, slashing weeks or months off the time it takes to get new IT projects off the ground.

But with this convenience comes a cost. In the same way it’s easy to spin up virtual machines, it’s just as easy to forget about them. So it’s essential you keep track of all your instances. That way, you can identify and shut down unused machines and avoid racking up unnecessary costs.
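As a starting point, here’s a minimal sketch using boto3 (the AWS SDK for Python, assuming it’s already configured with credentials and a default region) that lists every running EC2 instance along with its launch time and Name tag, which makes long-forgotten machines much easier to spot:

import boto3

# List all running EC2 instances with their launch time and Name tag.
ec2 = boto3.client("ec2")

paginator = ec2.get_paginator("describe_instances")
for page in paginator.paginate(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
):
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            # Untagged instances are often the ones nobody remembers provisioning.
            name = next(
                (t["Value"] for t in instance.get("Tags", []) if t["Key"] == "Name"),
                "<untagged>",
            )
            print(instance["InstanceId"], instance["InstanceType"],
                  instance["LaunchTime"], name)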

With Microsoft Azure, it’s also important to be aware that, when you stop a virtual machine, your provisioned compute and network resources are still reserved. This can be a costly mistake if you no longer need your machine, as you’ll continue to pay for your reserved capacity.

To shut it down altogether, you should both stop and deallocate it, which frees up your resources and releases you from the associated charges.
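Here’s a rough sketch of how that check might look with the Azure SDK for Python, assuming recent versions of the azure-mgmt-compute and azure-identity packages; the subscription ID, resource group and VM name below are placeholders:

from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

# Placeholder values - substitute your own subscription, resource group and VM.
SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"
RESOURCE_GROUP = "my-resource-group"
VM_NAME = "my-vm"

compute = ComputeManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Check the VM's current power state. A VM that is merely "stopped" still has
# compute and network capacity reserved (and billed); only a "deallocated" VM
# releases that capacity.
view = compute.virtual_machines.instance_view(RESOURCE_GROUP, VM_NAME)
power_state = next(
    (s.code for s in view.statuses if s.code.startswith("PowerState/")), "unknown"
)
print("Current state:", power_state)

if power_state == "PowerState/stopped":
    # begin_deallocate stops the VM *and* releases its reserved capacity.
    compute.virtual_machines.begin_deallocate(RESOURCE_GROUP, VM_NAME).result()
    print(VM_NAME, "deallocated")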

3. Unhealthy Instances

Unhealthy instances that go unnoticed don’t just waste money, but also undermine the performance of your applications. Make sure to put measures in place to identify unresponsive instances and replace them with healthy ones.

One of the most common ways to deal with unhealthy instances is to use auto scaling—a horizontal scaling feature supported by both AWS and Azure.

With auto scaling, you configure your resources into a cluster of instances dedicated to your application. Auto scaling then automatically adjusts the number of running machines in your cluster based on the conditions you define, such as whenever CPU utilization goes above or below a certain threshold.

This means you can fine-tune your cloud resources to demand. But the added benefit is that many auto-scaling services are also able to identify and replace unhealthy instances, helping you to keep wasted infrastructure to a minimum.
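As an illustration, the sketch below shows roughly what this looks like on AWS using boto3; the group name, launch template, subnets and target group ARN are all placeholders, and Azure offers the equivalent through virtual machine scale sets:

import boto3

autoscaling = boto3.client("autoscaling")

# Keep between 2 and 10 instances running and rely on the load balancer's
# health checks, so unresponsive instances are terminated and replaced
# automatically.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    LaunchTemplate={"LaunchTemplateName": "web-template", "Version": "$Latest"},
    MinSize=2,
    MaxSize=10,
    DesiredCapacity=2,
    VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",
    TargetGroupARNs=[
        "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/web/abc123"
    ],
    HealthCheckType="ELB",
    HealthCheckGracePeriod=300,
)

# Scale in and out so that average CPU utilization stays around 60 percent.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",
    PolicyName="cpu-target-60",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 60.0,
    },
)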

4. Unattached Persistent Volumes

When you terminate an Amazon EC2 instance, your root volume is deleted by default, but any additional attached EBS volumes are preserved. So, unless you set a policy where attached EBS volumes are automatically deleted as part of the termination process, you could end up racking up charges on unattached volumes that serve no useful purpose.

This can be a particularly expensive oversight if you leave SSD-backed volumes sitting around, as they cost more than double their HDD counterparts. What’s more, in the case of Provisioned IOPS SSD volumes, you continue to pay for provisioned IOPS, which is charged separately from storage.
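If you do want an attached data volume to disappear along with its instance, you can set its DeleteOnTermination flag. Here’s a minimal boto3 sketch; the instance ID and device name are placeholders:

import boto3

ec2 = boto3.client("ec2")

# Flag an attached EBS data volume so it is deleted automatically when its
# instance is terminated.
ec2.modify_instance_attribute(
    InstanceId="i-0123456789abcdef0",
    BlockDeviceMappings=[
        {
            "DeviceName": "/dev/sdf",  # the device the volume is attached as
            "Ebs": {"DeleteOnTermination": True},
        }
    ],
)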

Just as with AWS, your attached storage in Microsoft Azure isn’t automatically deleted when you terminate your virtual machine.

You can identify and remove unattached volumes in the command line interfaces (CLIs) provided by both AWS and Azure. However, this can be a laborious undertaking if you have a large number of deployments across a multi-cloud environment. It’s far more efficient to implement some form of cloud inventory monitoring system to help you clean up your unattached volumes.
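For example, the boto3 equivalent of that CLI lookup only takes a few lines; the delete call is left commented out so nothing is removed before you’ve reviewed it:

import boto3

ec2 = boto3.client("ec2")

# List unattached ("available") EBS volumes in the current region so they can
# be reviewed and, if appropriate, deleted.
paginator = ec2.get_paginator("describe_volumes")
for page in paginator.paginate(
    Filters=[{"Name": "status", "Values": ["available"]}]
):
    for volume in page["Volumes"]:
        print(volume["VolumeId"], volume["VolumeType"],
              volume["Size"], "GiB", volume["CreateTime"])
        # Once you've confirmed the volume really is no longer needed:
        # ec2.delete_volume(VolumeId=volume["VolumeId"])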

5. Orphan Snapshots

Whenever you terminate an instance, not only do you need to deal with unattached volumes, but also any orphaned volume snapshots.

Both leading platforms provide point-in-time backups of persistent disks. However, on Microsoft Azure, the feature is only available with the vendor’s new storage service for virtual machines, Managed Disks, which works along similar lines to EBS.

AWS backs up EBS volume snapshots to S3. What’s more, they’re both compressed and incremental, reducing your storage footprint. Nevertheless, your initial snapshot covers the entire volume. And if you take regular subsequent snapshots then, depending on your retention period, they could end up requiring as much capacity as that initial snapshot.
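One way to spot orphans on AWS, sketched below with boto3, is to list the snapshots you own and flag any whose source volume no longer exists; again, the delete call is deliberately commented out:

import boto3

ec2 = boto3.client("ec2")

# Collect the IDs of all EBS volumes that still exist.
existing_volumes = set()
for page in ec2.get_paginator("describe_volumes").paginate():
    existing_volumes.update(v["VolumeId"] for v in page["Volumes"])

# Walk through your own snapshots and flag any whose source volume is gone.
for page in ec2.get_paginator("describe_snapshots").paginate(OwnerIds=["self"]):
    for snapshot in page["Snapshots"]:
        if snapshot.get("VolumeId") not in existing_volumes:
            print("Orphan snapshot:", snapshot["SnapshotId"],
                  snapshot["StartTime"], snapshot["VolumeSize"], "GiB")
            # After checking your retention requirements:
            # ec2.delete_snapshot(SnapshotId=snapshot["SnapshotId"])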

Azure, on the other hand, only supports full point-in-time snapshots. This could work out particularly expensive, both during the lifetime of your disks and after you’ve finished with them, as you can create as many snapshots as you want from the same Managed Disk. That’s all the more reason to delete snapshots left behind by terminated machines.

6. Unused Static IP Addresses

Static IP addresses are a limited resource. So public cloud vendors levy a charge on unused static addresses to encourage customers to use them efficiently.

If you’re an AWS user, you get one free Elastic IP (EIP) address with each running EC2 instance. But any additional addresses come at a small hourly cost. You’re also charged for any address that’s not associated with a running instance.

On Microsoft Azure, if you use the Azure Resource Manager (ARM) deployment model then you get your first five static public IP addresses in each region for free.

Likewise, if you deploy your workloads using Microsoft’s Classic (ASM) model then each of your first five reserved IP addresses is free—provided it’s associated with a running virtual machine. Otherwise it incurs an hourly charge.

However, in both deployment models, beyond your first five free addresses, each additional static or reserved IP address is also charged at a small hourly rate.

The bottom line is that, whatever cloud platform you use, if you no longer need your static IPs then you should release them.
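On AWS, for instance, a few lines of boto3 (assuming VPC-allocated Elastic IPs, which are the default) will find and release any address that isn’t associated with anything:

import boto3

ec2 = boto3.client("ec2")

# Release Elastic IP addresses that aren't associated with an instance or
# network interface. Assumes VPC-scoped addresses, which have an AllocationId.
for address in ec2.describe_addresses()["Addresses"]:
    if "AssociationId" not in address:
        print("Releasing unused address:", address["PublicIp"])
        ec2.release_address(AllocationId=address["AllocationId"])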

7. Underutilized Discounted Capacity

Both AWS and Azure provide discount plans on prebooked capacity, which offer significant potential savings over on-demand pricing in exchange for a payment commitment over a fixed term.

The problem with both plans is that, while they promise substantial reductions on your cloud bills, you need to use them properly to make them worthwhile. This is because they work on a credit system: you receive credit against the machine type you specify for the duration of the plan.

In the case of Microsoft’s discount program, the Compute Pre-Purchase Plan, the term is fixed at one year. But for Amazon’s Reserved Instances, you can choose between either one or three years.

To maximize your discount, you need to make full use of the credit throughout the entire term. That means making sure matching instances, which are entitled to the credit, consume as much of it as possible.

You can do this by keeping track of your discounted capacity and assigning suitable workloads to it. With AWS, you can also take advantage of various options to modify the machine specifications of your Reserved Instances, giving you more scope for resource matching.
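A crude way to keep an eye on this with boto3 is to compare your active Reserved Instances against running instances of the same type; the sketch below ignores platform, tenancy and Availability Zone matching, so treat it as a starting point rather than a billing-accurate report:

from collections import Counter

import boto3

ec2 = boto3.client("ec2")

# Count reserved capacity by instance type.
reserved = Counter()
for ri in ec2.describe_reserved_instances(
    Filters=[{"Name": "state", "Values": ["active"]}]
)["ReservedInstances"]:
    reserved[ri["InstanceType"]] += ri["InstanceCount"]

# Count running instances by instance type.
running = Counter()
for page in ec2.get_paginator("describe_instances").paginate(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
):
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            running[instance["InstanceType"]] += 1

# Highlight reserved capacity that no running machine is currently consuming.
for instance_type, count in reserved.items():
    unused = count - running.get(instance_type, 0)
    if unused > 0:
        print(f"{instance_type}: {unused} reserved instance(s) not matched by a running machine")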

The Compute Pre-Purchase Plan works slightly differently and breaks your credit down into 744 hours of usage each month for the instance type you specify. This credit can apply to any actively running machine that matches your specification. By contrast, on AWS, each Reserved Instance credit can only be used by one matching machine at any one time.

No Time to Waste

Success in the cloud relies on quick optimization decisions. And the longer you put them off, the more your costs will continue to pile up.

So start taking action straight away.

Tag your resources so you can keep track of them. Monitor your deployments. Get a deeper understanding of how your public cloud platform works. And use tools that give you visibility and a rationalized view into your complex cloud environment.

Make sure you nip those hidden cloud costs in the bud—otherwise you’re simply throwing money down the drain.

Sign up for a free 14-day CloudCheckr trial to see how we can start saving you money on your cloud investment immediately!
