WHY PRIVATE CLOUD STILL MATTERS TO AWS

The term “cloud” is a household name these days, thrown around so often you would think everyone is using it and all IT runs from the cloud.

However, the reality is far different. Countless statistics show just how small annual cloud spend is compared to traditional IT spending; in fact, cloud spend is a mere rounding error in the enormity of overall IT spending.

Interestingly, even as cloud spend grows at the expense of traditional data centre investment, private cloud spend is increasing too. This is where a company buys and owns the equipment, but hosts it in a cloud provider’s facilities.

But why is this happening?

More and more businesses want the flexibility of cloud computing but require ownership and control of their data. Many institutions, particularly financial ones, have stringent requirements that sensitive data be kept on site and never in a public cloud.

This means that terms like “hybrid cloud” are being used more frequently to describe a strategy of running non-sensitive workloads in the public cloud while keeping low-latency applications, or those with sensitive data concerns, locally in the private cloud.

So what does this mean for the major public cloud providers like AWS, Microsoft and Google? Well, naturally, they were never going to pass up an opportunity for more growth.

Microsoft offers Azure Stack, which lets highly regulated or more cautious organisations run various Azure cloud services from their own data centre. Applications can be built for the Azure cloud and deployed either on Microsoft’s cloud infrastructure or within the confines of the organisation’s own data centre, without rewriting any code.

Google has Anthos, which lets you manage Google Kubernetes Engine workloads running on third-party clouds such as AWS and Azure. That gives you the freedom to deploy, run and manage your applications on the cloud of your choice, without requiring administrators and developers to learn different environments and APIs.

Then there is AWS with Outposts, which was announced back in late 2018 and only made generally available in December 2019. Outposts is a fully managed service where AWS delivers pre-configured hardware and software to the customer’s on-premises data centre or co-location space. It’s designed to run applications in a cloud-native manner without having to operate out of AWS data centres.

Outposts offers core services such as EC2, EBS, VPC, RDS, ECS, EKS and EMR, with S3 coming later this year. You can now run complex AWS applications in your own data centre and manage them from the same console as your AWS cloud-based applications, which is a total game changer.
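As a rough sketch of what “same console, same tooling” looks like in practice, here is how launching a workload onto an Outpost might go with the standard AWS CLI. The ARNs and resource IDs below are placeholders for illustration, and running this requires an AWS account with an Outpost installed.

```shell
# List the Outposts installed at your site
aws outposts list-outposts

# Create a subnet on the Outpost, inside an existing VPC
# (placeholder VPC ID and Outpost ARN)
aws ec2 create-subnet \
    --vpc-id vpc-0123456789abcdef0 \
    --cidr-block 10.0.128.0/24 \
    --outpost-arn arn:aws:outposts:eu-west-1:123456789012:outpost/op-0123456789abcdef0 \
    --availability-zone eu-west-1a

# Launch an EC2 instance into that subnet -- it runs on the on-premises
# rack, but is managed like any other instance in the console or CLI
aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type m5.xlarge \
    --subnet-id subnet-0123456789abcdef0
```

The key point is that nothing here is Outposts-specific beyond associating the subnet with the Outpost ARN; the rest is the familiar EC2 workflow.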

The only downside is the cost. Outposts development units start at $225k for a three-year term and deliver four m5.12xlarge instances and 2.7TB of storage. Production units running two m5.24xlarge, two c5.24xlarge and two r5.24xlarge instances with 11+TB of storage start at $483k.
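To put those figures in perspective, a quick back-of-the-envelope calculation spreads the quoted all-upfront prices evenly over the 36-month term (ignoring any partial-upfront or monthly payment options AWS may offer):

```python
# Rough effective monthly cost of the quoted Outposts configurations,
# assuming the all-upfront price is spread evenly over the 3-year term.

TERM_MONTHS = 3 * 12

dev_upfront = 225_000   # development unit: four m5.12xlarge + 2.7TB storage
prod_upfront = 483_000  # production unit: 2x m5/c5/r5.24xlarge + 11+TB storage

dev_monthly = dev_upfront / TERM_MONTHS
prod_monthly = prod_upfront / TERM_MONTHS

print(f"Development unit: ${dev_monthly:,.2f}/month")   # $6,250.00/month
print(f"Production unit:  ${prod_monthly:,.2f}/month")  # $13,416.67/month
```

Even the entry-level development configuration works out to over $6k a month, which frames the closing point below.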

I’m now waiting for AWS or other private cloud providers to offer Outposts as a service (OaaS, anyone?) for those of us who can’t justify these prices for our own applications.