You don’t need to go to the movies to get pumped up about superheroes. Migrating from a traditional data center to the cloud completely changes the way you interact with your IT environment, turning you into a real-life superhero. Cloud computing gives you the ability to be more agile, more efficient and more scalable, like an IT professional with superhuman strength.

OK, so maybe it’s not the same as the kinds of heroes we see on the big screen. Obviously, Captain Efficient, Super-Scalable Man and the Incredible Agile aren’t likely to appear in any movie theaters in the near future. However, the transformation from traditional data center to the cloud does make for a compelling story. Picture it…

“Our hero, an overworked, underpaid IT manager responsible for an aging, ineffective and resource-intensive environment, accidentally swallows a radioactive USB stick, develops cloud-related superpowers, migrates everything to a secure, fast and cost-efficient network in the sky, and finally gets a promotion.”

Sounds like a great story, right? And it’s one I witness daily (minus the radioactive USB stick). Yet, like most great movies, the sequel is seldom as good as the original. That’s a story I witness on a regular basis, too.

As soon as businesses get their infrastructure in the cloud and start to realize the benefits in terms of management overhead, performance, scalability and cost, it seems like there’s a tendency to sit back, relax and enjoy it, which does not make for captivating viewing. Here’s what the movie looks like now…

“Our hero, now a senior cloud manager, gets to work at 9, sends a couple of emails, has a long lunch, surfs the web, goes home at 5. Repeat.”

We go from transformative change to static inaction. This approach is a fallback to the days and ways of the traditional data center, when it made sense to sit back and rest on your laurels for a while. That’s no longer the right approach. Once you’re in the cloud, you don’t need to wait three to five years before embarking on a redesign and replacement of your IT infrastructure; it can be done at any time to improve performance. I’m not advocating change for the sake of change, but the Amazon Web Services (AWS) cloud is constantly evolving, with improvements made daily to services old and new. Today’s IT superhero can keep up with those daily improvements with ease.

The improvements can be as simple as the addition of a new configuration option, or as exciting as a brand-new, ground-breaking service, or anything in between. Whatever it is, it can be tested and implemented right now and not months or years into the future.

Here are four examples of things our hero could be doing right now to further improve an AWS environment:


AWS is constantly updating the EC2 instance types available for your virtual machines. This is a win-win for EC2 users, as each new instance type released is not only more powerful but also cheaper than the previous generation.

In recent months, the t3 and m5 instance types have been released to supersede the t2 and m4 types respectively. To take advantage of these new, improved instance types, all that’s needed is to stop each EC2 instance, change its instance type, and start it again.

This is an easy way to improve performance and reduce your AWS spend.
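As a rough sketch of the upgrade path described above, a helper like this could map current-generation instance types to their newer equivalents. The mapping and function name are illustrative, not part of any AWS SDK:

```python
# Illustrative helper (not an AWS API): map an older EC2 instance type
# to its newer-generation equivalent, keeping the same size.
UPGRADE_MAP = {"t2": "t3", "m4": "m5"}

def suggest_upgrade(instance_type: str) -> str:
    """Return the newer-generation equivalent of an instance type, if any."""
    family, _, size = instance_type.partition(".")
    new_family = UPGRADE_MAP.get(family)
    return f"{new_family}.{size}" if new_family else instance_type

print(suggest_upgrade("t2.medium"))  # t3.medium
print(suggest_upgrade("m4.xlarge"))  # m5.xlarge
```

The actual switch would then be done per instance: stop it, change the instance type in the console (or via the EC2 API), and start it again.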


Elastic Load Balancers (ELBs) have been around for a long time. They originally launched as the Classic ELB, but in recent years have been joined by the Application and Network versions.

The Classic ELB was purpose-built for EC2-Classic instances, the old AWS standard before the introduction of EC2-VPC. However, I still regularly see Classic ELBs in use when they should be updated to the newer versions, which have been designed specifically to work with HTTP/HTTPS (Application) or TCP (Network) traffic.

Switching to the new versions does require a bit of work, but nothing too extreme, and you will be rewarded with an improved ELB and a slightly reduced hourly cost.
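The rule of thumb above can be captured in a tiny, illustrative helper (not an AWS API): layer-7 HTTP/HTTPS workloads belong on an Application Load Balancer, while layer-4 TCP/TLS workloads belong on a Network Load Balancer:

```python
# Illustrative decision helper for the Classic ELB replacement choice.
def recommended_elb(protocol: str) -> str:
    """Suggest which modern ELB type suits a given listener protocol."""
    protocol = protocol.upper()
    if protocol in ("HTTP", "HTTPS"):
        return "application"   # Application Load Balancer (layer 7)
    if protocol in ("TCP", "TLS"):
        return "network"       # Network Load Balancer (layer 4)
    raise ValueError(f"No recommendation for protocol {protocol}")

print(recommended_elb("HTTPS"))  # application
print(recommended_elb("TCP"))    # network
```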


Almost everyone who uses AWS uses Amazon S3, the service that provides an almost unlimited amount of storage space. However, S3 Standard isn’t the only storage class available; depending on your requirements, there are different classes for different use cases.

The latest class to be released is S3 One Zone-Infrequent Access, which is for data that does not require the availability and resilience of S3 Standard or S3 Standard-IA storage but still needs to be available rapidly. It is well suited to storing things like secondary backup copies, or S3 Cross-Region Replication data from another AWS Region.

Then there is Glacier, the low-cost storage class that provides high durability and resilience but slower retrieval speeds. It is perfect for archiving old data for safekeeping.

As your data ages, it can be moved automatically between these storage classes using Lifecycle Policies, with no effort on your part.
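A lifecycle policy of the kind just described might look like this, expressed in the shape boto3’s `put_bucket_lifecycle_configuration` accepts. The rule ID, prefix and day thresholds are illustrative assumptions, not recommendations:

```python
# Sketch of an S3 lifecycle configuration: Standard -> Standard-IA after
# 30 days, then Glacier after a year. Prefix and thresholds are examples.
lifecycle_config = {
    "Rules": [
        {
            "ID": "age-out-old-data",           # illustrative rule name
            "Filter": {"Prefix": "backups/"},   # illustrative prefix
            "Status": "Enabled",
            "Transitions": [
                # ONEZONE_IA could be used here instead for easily
                # re-creatable data such as secondary backup copies.
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 365, "StorageClass": "GLACIER"},
            ],
        }
    ]
}

# Applying it needs AWS credentials, so it is only sketched here:
# import boto3
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="my-example-bucket",  # hypothetical bucket name
#     LifecycleConfiguration=lifecycle_config)
```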


EFS allows you to create huge file systems that can be accessed in massively parallel fashion from multiple EC2 instances and other AWS resources. It is a great product. However, because throughput is based on the size of the file system, it has been known to suffer from performance issues with small amounts of data and high I/O. Until recently, when an EFS file system hit its throughput limit, workarounds such as creating large dummy files to inflate the file system’s size (and therefore its throughput) were the only solution.

Not anymore: AWS has recently implemented a new feature called Provisioned Throughput, which allows you to predefine the level of throughput you require, no matter the size of the file system.
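A back-of-the-envelope calculation shows why small file systems struggle in the default bursting mode, where (per AWS’s EFS performance documentation) baseline throughput scales at roughly 50 MiB/s per TiB stored:

```python
# Why small EFS file systems are slow in bursting mode: baseline
# throughput is proportional to size, at about 50 MiB/s per TiB.
def efs_baseline_throughput_mibps(size_gib: float) -> float:
    """Approximate bursting-mode baseline throughput in MiB/s."""
    return size_gib * 50 / 1024

print(efs_baseline_throughput_mibps(1024))  # 1 TiB -> 50.0 MiB/s
print(efs_baseline_throughput_mibps(10))    # 10 GiB -> ~0.49 MiB/s
```

Provisioned Throughput removes this dependency on size: you simply tell EFS the throughput figure you need, and pay for it directly rather than indirectly through padding the file system with dummy data.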

These are only four examples of the many options available for seasoned users of AWS to improve their environments. What better way for our hero to write a better script for the next installment of the cloud superhero franchise than by staying on top of these regular improvements! Unfortunately, we probably won’t get to see Super-Scalable Man (or Woman) in a blockbuster movie any time soon…