Storing and securing your sensitive data in AWS S3

Over the last few months, we’ve focused our attention on Amazon Web Services (AWS) and how to ensure that an implementation of AWS cloud services is secure. In particular, we’ve examined some tips for securing your Amazon Virtual Private Cloud and for securing the Amazon EC2 instances hosted in your AWS cloud environment. In this article we’re going to look at some practices you should follow to keep your data secure in Amazon Simple Storage Service (Amazon S3), a cloud storage service you can use to store and retrieve data for a broad range of uses. Amazon S3 is commonly used by businesses and organizations for a variety of data storage purposes, including file storage, website content, records archiving, and even disaster recovery. S3 includes a number of features for making sure your data is safe, both when it’s at rest in the cloud and while it’s in transit between the cloud and your company or customers. Unfortunately, not everyone uses these features correctly or is even aware of how they work.

Keeping your data safe when it’s sitting in the cloud is important. Never underestimate the curiosity of the human mind, especially when it belongs to someone with nefarious purposes. Simply by nosing around the Internet and scanning various cloud sites and services, researchers have shown time and again how careless companies can be with their data. For example, only two years ago it was reported that security experts had discovered an unsecured AWS S3 bucket that exposed 4 million Time Warner Cable subscriber records. Less than a year later it was reported that another unsecured AWS S3 bucket, this one managed by a Walmart jewelry partner, exposed the data of 1.3 million customers.

The list goes on and on. To help you prevent such a thing from happening to your own company, in this article we’ll examine some of the S3 security features and provide a few tips on using them.

Securing your data by using encryption in AWS S3


S3 provides server-side encryption for your data when you store it in the AWS cloud. Server-side encryption is completely transparent from the customer’s perspective: S3 encrypts each object you store with its own unique key using AES-256, and that key is in turn encrypted with a master key that Amazon stores securely and rotates regularly.
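
You can request this encryption explicitly on an individual upload, or make it the default for every new object in a bucket. Here is a minimal sketch using Python and boto3; the bucket name, object key, and file name are placeholders:

    import boto3

    s3 = boto3.client("s3")

    # Ask S3 to encrypt this object at rest with SSE-S3 (AES-256).
    s3.put_object(
        Bucket="example-bucket",      # placeholder bucket name
        Key="reports/2019/q1.csv",    # placeholder object key
        Body=open("q1.csv", "rb"),
        ServerSideEncryption="AES256",
    )

    # Or make SSE-S3 the default for every new object in the bucket.
    s3.put_bucket_encryption(
        Bucket="example-bucket",
        ServerSideEncryptionConfiguration={
            "Rules": [
                {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
            ]
        },
    )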

For additional control over encryption, you can also utilize the client-side encryption capabilities supported by S3. With client-side encryption, you the customer encrypt your data at your end before you store it in the AWS cloud, and when you retrieve your stored data, it is decrypted on the client side. To implement client-side encryption with S3 you can use either a client-side master key or an AWS KMS-managed customer master key. If you use a client-side master key, your unencrypted data and your key never leave your environment, so your data remains secure as long as the master key is protected. Special care should of course be taken if you use a client-side master key to ensure the key is safe and isn’t corrupted or lost, because losing the key means losing access to the data. For an example of how you can use the Java API to implement client-side encryption for S3, see this AWS documentation page.
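
The AWS SDK encryption clients (such as the Java client referenced above) take care of envelope encryption and key handling for you. As a simplified sketch of the underlying idea only, assuming the Python cryptography package and placeholder bucket, key, and file names, you could encrypt before uploading and decrypt after downloading:

    import boto3
    from cryptography.fernet import Fernet

    s3 = boto3.client("s3")

    # The client-side master key: generated, stored, and protected by you.
    # If this key is lost, the data cannot be recovered.
    master_key = Fernet.generate_key()
    fernet = Fernet(master_key)

    # Encrypt locally, then upload only the ciphertext to S3.
    plaintext = open("customer-records.csv", "rb").read()
    s3.put_object(
        Bucket="example-bucket",
        Key="records.enc",
        Body=fernet.encrypt(plaintext),
    )

    # Download the ciphertext and decrypt it on the client side.
    obj = s3.get_object(Bucket="example-bucket", Key="records.enc")
    recovered = fernet.decrypt(obj["Body"].read())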

For securing data in transit, S3 provides HTTPS endpoints, and the AWS Management Console and AWS SDKs use HTTPS by default. This means that SSL/TLS encryption protects both the data being transferred and the S3 service management requests issued through the AWS Management Console or the S3 APIs. For example, when you save, modify, or fetch an object to or from an S3 bucket, both the payload (the data object) and the metadata associated with the object are securely transferred using SSL/TLS.

Note that if you supply your own encryption key with a request (server-side encryption with customer-provided keys, or SSE-C) over HTTP instead of HTTPS, whether accidentally or intentionally, S3 will reject the request. Even so, any key sent over HTTP should be considered compromised, and you should discard and rotate it.
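
If you want to guarantee that nobody falls back to plain HTTP at all, you can attach a bucket policy that denies any request not made over SSL/TLS. A sketch, again using boto3 with a placeholder bucket name:

    import json
    import boto3

    s3 = boto3.client("s3")

    # Deny every request to the bucket that is not made over SSL/TLS.
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "DenyInsecureTransport",
                "Effect": "Deny",
                "Principal": "*",
                "Action": "s3:*",
                "Resource": [
                    "arn:aws:s3:::example-bucket",
                    "arn:aws:s3:::example-bucket/*",
                ],
                "Condition": {"Bool": {"aws:SecureTransport": "false"}},
            }
        ],
    }
    s3.put_bucket_policy(Bucket="example-bucket", Policy=json.dumps(policy))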

Going beyond IAM policies for access control


AWS Identity and Access Management (IAM) is a core feature of AWS that lets you control access to AWS service APIs and to AWS resources such as S3 buckets. IAM does this by controlling who is authenticated (signed in) and authorized (has permission) to use your AWS resources.

With Amazon S3, however, you can go one step further by utilizing bucket-level and object-level permissions to provide even more security and greater granularity of access control than you would be able to achieve by utilizing IAM policies only in your AWS environment. Bucket policies utilize the same JSON-based access policy language used by IAM policies, so it’s not hard to learn how to use them if you’re already familiar with the AWS security policy language. Bucket policies let you control access to buckets and objects through permissions that can either allow or deny access to a resource. By utilizing such policies you can exert strong control over the security of your S3 data to ensure data integrity, control information leakage, and prevent unauthorized access and deletion of your sensitive business data. For more information on bucket policies and some examples of their usage, see this AWS documentation page.
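
As an illustration, the following sketch (boto3 again, with a placeholder bucket name, account ID, and user name) attaches a bucket policy that grants one IAM user read-only access to a bucket and explicitly denies object deletion to everyone:

    import json
    import boto3

    s3 = boto3.client("s3")

    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                # Allow a single IAM user to list the bucket and read its objects.
                "Sid": "ReadOnlyForOneUser",
                "Effect": "Allow",
                "Principal": {"AWS": "arn:aws:iam::111122223333:user/analyst"},
                "Action": ["s3:GetObject", "s3:ListBucket"],
                "Resource": [
                    "arn:aws:s3:::example-bucket",
                    "arn:aws:s3:::example-bucket/*",
                ],
            },
            {
                # Explicitly deny object deletion to every principal.
                "Sid": "DenyAllDeletes",
                "Effect": "Deny",
                "Principal": "*",
                "Action": "s3:DeleteObject",
                "Resource": "arn:aws:s3:::example-bucket/*",
            },
        ],
    }
    s3.put_bucket_policy(Bucket="example-bucket", Policy=json.dumps(policy))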

As a tip to make it easier to work with bucket policies: you can use the same AWS Policy Generator to create these policies that you would typically use to build the IAM policies in your AWS environment.

Leveraging versioning and replication for additional security

S3 automatically stores each object redundantly across multiple availability zones within the region you select. While this is strictly an availability feature rather than a security one, it also helps ensure that your data is safe when stored in the cloud. S3 can also maintain every version of any object you’ve modified in or deleted from an S3 bucket. This versioning feature exists mostly so you can easily recover from accidental overwrites or deletions, but unlike replication it is disabled by default and must be explicitly enabled. Once versioning is enabled, you can retrieve or restore every version of every object you’ve stored in your S3 bucket.
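
Turning versioning on is a single API call, and once it’s on you can list and retrieve any stored version. A sketch with boto3 and a placeholder bucket name and prefix:

    import boto3

    s3 = boto3.client("s3")

    # Turn versioning on for the bucket (it is off by default).
    s3.put_bucket_versioning(
        Bucket="example-bucket",
        VersioningConfiguration={"Status": "Enabled"},
    )

    # Once enabled, every overwrite or delete creates a new version,
    # and earlier versions can be listed and retrieved later.
    versions = s3.list_object_versions(Bucket="example-bucket", Prefix="reports/")
    for v in versions.get("Versions", []):
        print(v["Key"], v["VersionId"], v["IsLatest"])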

One important thing to note concerning versioning is that once you’ve turned versioning on for a bucket, you cannot return the bucket to its previously unversioned state. You can, however, suspend versioning on the bucket for as long as you wish. And if you want to prevent objects from being overwritten or deleted for a fixed amount of time, you can use a feature called S3 Object Lock. This is often used for compliance in certain kinds of regulatory environments; see this AWS documentation page for how to configure and use S3 Object Lock.
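
Object Lock must be enabled when the bucket is created, after which you can apply a default retention rule to the bucket. The sketch below uses boto3 with a placeholder bucket name and an example 365-day compliance retention period; note that create_bucket also needs a LocationConstraint in regions other than us-east-1:

    import boto3

    s3 = boto3.client("s3")

    # Object Lock can only be turned on at bucket creation time.
    s3.create_bucket(
        Bucket="compliance-archive-example",
        ObjectLockEnabledForBucket=True,
    )

    # Apply a default retention rule: objects cannot be overwritten
    # or deleted for 365 days after they are written.
    s3.put_object_lock_configuration(
        Bucket="compliance-archive-example",
        ObjectLockConfiguration={
            "ObjectLockEnabled": "Enabled",
            "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 365}},
        },
    )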


Update from the author (Mitch Tulloch): I forgot to include one other notable way of securing S3 buckets. S3 Block Public Access, a feature launched in November 2018, makes it easier for AWS customers to protect their buckets and objects by blocking existing public access (whether it was granted by an ACL or a policy) and by ensuring that public access is not granted to newly created items. Details can be found at these links:

    https://aws.amazon.com/blogs/aws/amazon-s3-block-public-access-another-layer-of-protection-for-your-accounts-and-buckets/

    https://docs.aws.amazon.com/AmazonS3/latest/dev/access-control-block-public-access.html
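
For example, you can turn on all four Block Public Access settings for a bucket with a single boto3 call (the bucket name below is a placeholder):

    import boto3

    s3 = boto3.client("s3")

    # Turn on all four Block Public Access settings for the bucket.
    s3.put_public_access_block(
        Bucket="example-bucket",
        PublicAccessBlockConfiguration={
            "BlockPublicAcls": True,
            "IgnorePublicAcls": True,
            "BlockPublicPolicy": True,
            "RestrictPublicBuckets": True,
        },
    )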

    My thanks to Erik over at AWS for bringing this omission to my attention.

    –Mitch
