AWS’s crazy, genius plan to haul your data to the cloud

2016 was certainly a productive year for Amazon Web Services. Apart from dominating the cloud-computing sector, the company redoubled its efforts to get every last soul and byte to the cloud. The enterprises that have already migrated are typically companies whose data could be transferred easily over ordinary network connections. But there are still a number of large global conglomerates with legacy systems worth billions of dollars that just don’t see how they can move their immense amounts of data to the cloud, especially given the time and money already sunk into building and fine-tuning those systems. These companies are in Amazon’s cross-hairs.

As the No. 1 provider of cloud-computing services, AWS is coming up with out-of-the-box ideas to support its single-minded goal of leaving no one behind in the migration to the cloud. Because its new targets are giant corporations such as oil and gas companies, pharmaceutical companies, and logistics businesses with enormous amounts of data, Amazon decided the best way to move such large volumes to the cloud was to physically ship them. And that’s exactly what Snowball is all about.

Dashing through the Snow

AWS Snowball is a service that helps enterprises transfer large amounts of data to the cloud using specially designed physical storage appliances that bypass the Internet entirely. To be clear, the service is aimed at customers with more than 10TB of data to transfer. The appliances look like high-tech safe-deposit boxes about the size of a computer cabinet, and they have been drop-tested and designed specifically for shipping.

Breaking up your data into Snowballs and transferring it to the cloud this way not only saves companies time and money compared with transferring data over the Internet, but also provides a heightened sense of security, as each Snowball is protected with encryption keys managed by the AWS Key Management Service.

Though the initial capacity of the Snowball was 50TB, AWS was quick to raise that limit to 80TB. With encryption enforced throughout, this is a genuinely viable option for enterprises to move huge data sets that would be close to impossible to transfer over the Internet. A standalone, downloadable Snowball client is used to migrate data to and from the appliance, and jobs can be managed through the AWS Snowball Management Console.
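To make that concrete, here’s a minimal sketch of requesting an 80TB import job with the AWS SDK for Python (boto3). The bucket name, address ID, and ARNs below are placeholders, not real resources:

```python
# Minimal sketch: create a Snowball import job with boto3.
# All ARNs, the bucket, and the address ID are placeholders you
# would replace with resources from your own AWS account.
import boto3

snowball = boto3.client("snowball", region_name="us-east-1")

response = snowball.create_job(
    JobType="IMPORT",  # haul on-premises data into S3
    Resources={
        "S3Resources": [
            {"BucketArn": "arn:aws:s3:::example-destination-bucket"}
        ]
    },
    AddressId="ADID00000000-0000-0000-0000-000000000000",  # shipping address on file with AWS
    RoleARN="arn:aws:iam::123456789012:role/ExampleSnowballRole",
    KmsKeyARN="arn:aws:kms:us-east-1:123456789012:key/example-key-id",  # KMS-managed key
    SnowballCapacityPreference="T80",  # the 80TB appliance
    ShippingOption="SECOND_DAY",
)
print("Created Snowball job:", response["JobId"])
```

Once the appliance arrives on site, the Snowball client unlocks it using the job’s manifest and unlock code and then handles the actual copy onto the device.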

Snowball Edge

An upgraded Snowball called Snowball Edge was announced at last year’s AWS re:Invent conference. Snowball Edge is a 100TB data-transfer appliance with both storage and compute capabilities. Apart from minimizing setup and integration work, the appliance connects to existing legacy systems through standard storage interfaces, and multiple units can be clustered into a local storage tier that processes your data on site. This keeps your applications running even without access to the cloud. You can think of the Snowball Edge as a really advanced 100TB external drive that can store large data sets or support local workloads in remote or offline locations.

A cool feature of Snowball is that it uses a Kindle-style E Ink display as its shipping label, which updates automatically at each stage of the journey. Once you’re done transferring data and the appliance is ready to be returned, the label updates on its own, vastly reducing the chance of shipping errors and lost devices.

As far as processing power goes, you can deploy AWS Lambda code on Snowball Edge to analyze or query data locally. Data on the appliance can be stored and processed entirely on site, independent of any external compute resources or network connection.
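As a rough illustration, a Lambda function deployed to the Edge looks much like a regular S3-triggered Lambda. The handler below is a hypothetical sketch, assuming the event delivered on the appliance mirrors a standard S3 PUT notification; the actual trigger wiring is configured when you create the job:

```python
# Hypothetical sketch of a Lambda handler for Snowball Edge that
# processes each object written to the appliance's local bucket.
import json

def handler(event, context):
    # Each record describes an object written to the local S3 endpoint.
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        size = record["s3"]["object"].get("size", 0)
        # Do the local analysis here -- no round trip to the cloud required.
        print(f"Processing s3://{bucket}/{key} ({size} bytes) on the edge")
    return {"statusCode": 200, "body": json.dumps("processed")}
```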

First-class passage to the cloud

With the success of the hybrid cloud, AWS seems to be testing the waters with its Snowball appliances, and the waters seem to be just right. In its push to lower the barriers to the public cloud, AWS is taking aim at the huge segment of large enterprises facing either geographic or bandwidth challenges in migrating their data. For enterprises holding millions of gigabytes of satellite imagery, or underwater oceanographic imagery for that matter, the limitations of public cloud connections pose a serious challenge not just for data transfer but for real-time analysis. IoT devices, likewise, are notorious for generating massive amounts of data; it would be great to use AWS’s services to analyze these data sets, but high latency and low bandwidth make that close to impossible.

The other part of the story is that these high-data-volume customers are the ones without budget constraints. Giant amounts of data usually mean these companies are the biggest of their kind, and they can afford first-class tickets to get their precious data to the cloud. And those are exactly the tickets AWS is printing for them.

AWS Snowmobile: The data-transfer juggernaut

AWS Snowmobile looks like the craziest announcement from AWS this year, and it’s one of the few instances where a newer technology gives way to an older one. By older technology, we mean taking your data, loading it onto a truck, and driving it down the highway at 80 miles an hour. The sheer ingenuity of the idea is evident from the fact that AWS is the only one doing it; no other cloud company is even considering such measures to transfer data to the cloud. But from the point of view of organizations with deep pockets that are trapped with huge volumes of data and no feasible way of getting it to the cloud, there really couldn’t be a better option.

It’s strange that the top cloud vendor in the world is resorting to good old-fashioned diesel-powered semi-trailer trucks to transfer data to the cloud; it’s a great example of how old technology comes into play when we least expect it. No one would have guessed that the hybrid cloud would become the enterprise’s cloud option of choice, and no one in their wildest dreams would have imagined physically loading data into containers and shipping it. It just goes to show that all the analysis and technology in the world can’t reliably predict where new technology and inventions will take us.

This is a classic case of the proverb recorded in Francis Bacon’s Essays of 1625: “if the mountain won’t come to Mohammed, Mohammed will go to the mountain.” Here, if the enterprise can’t get its data to the cloud, AWS will bring the cloud to the enterprise on an 18-wheel truck hauling a 45-foot container. It’s not exactly clear how many big enterprises are interested in this 100-petabyte option (which works out to 100 million gigabytes of storage), but DigitalGlobe is one of Amazon’s first Snowmobile customers and is currently transferring 100 petabytes of satellite imagery to the cloud using the service.

Even with a 1 Gbps connection, uploading 50TB of data to the cloud takes more than four days at full line speed. At the same rate, 100 petabytes would take the better part of 25 years to transfer. All in all, this has been a masterstroke from AWS. It remains to be seen whether other cloud vendors like Microsoft and Google will resort to the same out-of-the-box, seemingly ridiculous but effective measures to get their clients’ data to the cloud.
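Here’s the back-of-the-envelope arithmetic behind those figures, assuming a perfectly saturated 1 Gbps link with no protocol overhead:

```python
# Back-of-the-envelope check of the transfer times quoted above,
# assuming a fully saturated 1 Gbps link (no protocol overhead).
GBPS = 1_000_000_000  # bits per second

def transfer_days(bytes_total, bits_per_second=GBPS):
    """Days needed to push bytes_total over the given link."""
    return bytes_total * 8 / bits_per_second / 86_400  # 86,400 s per day

TB = 10**12  # 1 terabyte in bytes
PB = 10**15  # 1 petabyte in bytes

print(f"50 TB:  {transfer_days(50 * TB):.1f} days")         # ~4.6 days
print(f"100 PB: {transfer_days(100 * PB) / 365:.1f} years")  # ~25.4 years
```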
