AWS S3 Security Best Practices

S3 Buckets and AWS Best Practices

This March, Amazon Web Services' (AWS) Simple Storage Service, more commonly known as S3, officially turned 15 years old. In that time it has become one of the leading object storage services, letting your organization store and retrieve data anywhere on the Internet using a simple web-based interface. It's a favorite for developers because it lets them perform a variety of functions: adding metadata to objects, moving and storing data across storage classes, and running big data analytics, to name just a few.

An incredibly powerful service, S3 is backed by AWS’s fast and scalable data storage infrastructure. Yet despite its skyrocketing popularity, the service remains a lightning rod for data breaches due to widespread security misconfigurations. 

In a recent high-profile example, Expedia's hotel reservation software provider exposed millions of hotel guest records in a data breach because it was storing sensitive guest data in an unsecured S3 bucket.

Believe it or not, this is a common mistake that countless organizations are currently making — most just don’t know it yet. Oftentimes, administrators assume that S3 is inherently secure simply because it runs on AWS. While AWS is known for providing top-of-the-line cloud security, there are a number of steps you need to take to prevent breaches from occurring. 

Tips To Secure Your S3 Buckets 

Block Public Access to S3

By default, all new buckets, objects, and access points are not set up for public access. Yet, users can modify policies and permissions to allow public access — meaning sensitive data could potentially be accessed by any user via a URL. Unless you explicitly require anyone on the internet to be able to read or write to your S3 bucket, you should ensure that all buckets are not public. There are a few actions you should take today to ensure your S3 service is secure.

To block public access, use the S3 Block Public Access settings to override other permissions and prevent accidental or intentional public exposure. These settings let administrators centralize controls at the account level for maximum protection, regardless of how the resources are created.
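
As a minimal sketch (the bucket name and account ID below are placeholders), both the bucket-level and account-level Block Public Access settings can be applied with boto3:

    import boto3

    BLOCK_ALL = {
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    }

    # Bucket-level: block public access for a single (placeholder) bucket.
    boto3.client("s3").put_public_access_block(
        Bucket="example-data-bucket",
        PublicAccessBlockConfiguration=BLOCK_ALL,
    )

    # Account-level: apply the same settings to every bucket in the account.
    boto3.client("s3control").put_public_access_block(
        AccountId="111122223333",
        PublicAccessBlockConfiguration=BLOCK_ALL,
    )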

Identify Bucket Policies that Allow Wildcard IDs

Next, you should identify S3 bucket policies that allow a wildcard identity, such as Principal “*”, or a wildcard action “*”, which effectively lets any user perform any action in S3.

According to AWS, the fixed values that a non-public policy grants access to may include one or more of the following:

  • A set of Classless Inter-Domain Routing (CIDR) ranges, using aws:SourceIp
  • An AWS User, Principal, or Service Principal
  • aws:SourceArn
  • aws:SourceVpc
  • aws:SourceVpce
  • aws:SourceOwner
  • aws:SourceAccount
  • s3:x-amz-server-side-encryption-aws-kms-key-id
  • aws:userid, outside the pattern “AROLEID:*”
  • s3:DataAccessPointArn

Amazon provides instructions on how to make policies non-public. Similarly, take note of AWS S3 bucket access control lists (ACLs) that provide read, write, or full access to “Everyone” or “Any authenticated AWS user.” To be considered non-public, a bucket policy must grant access only to fixed values, that is, values that do not contain a wildcard.
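
As a rough illustration of granting access only to a fixed value, the sketch below applies a bucket policy that denies any request originating outside a single VPC; the bucket name, account ID, and VPC ID are placeholders:

    import json
    import boto3

    s3 = boto3.client("s3")
    bucket = "example-data-bucket"

    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "DenyAccessOutsideFixedVpc",
                "Effect": "Deny",
                "Principal": "*",
                "Action": "s3:*",
                "Resource": [
                    f"arn:aws:s3:::{bucket}",
                    f"arn:aws:s3:::{bucket}/*",
                ],
                # Fixed value (no wildcard): requests must come from this VPC.
                "Condition": {
                    "StringNotEquals": {"aws:SourceVpc": "vpc-0123456789abcdef0"}
                },
            }
        ],
    }

    s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))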

Inspect Implementations with Tools

It’s a good idea to use AWS Trusted Advisor to inspect S3 implementations to make sure you cover all your bases. Trusted Advisor is an online tool that offers real-time guidance and support when provisioning resources on AWS. The service can increase security, optimize your infrastructure, and reduce operating costs, among other benefits. However, Trusted Advisor only goes so far, and as organizations mature, more advanced tools will likely be required.

AWS also offers real-time monitoring through the s3-bucket-public-read-prohibited and s3-bucket-public-write-prohibited managed AWS Config rules.
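
For example, the sketch below enables one of these managed rules with boto3; the rule name is arbitrary, and it assumes an AWS Config configuration recorder is already active in the account and region:

    import boto3

    config = boto3.client("config")

    # Flags any S3 bucket that allows public read access.
    config.put_config_rule(
        ConfigRule={
            "ConfigRuleName": "s3-bucket-public-read-prohibited",
            "Description": "Checks that S3 buckets do not allow public read access.",
            "Source": {
                "Owner": "AWS",
                "SourceIdentifier": "S3_BUCKET_PUBLIC_READ_PROHIBITED",
            },
            "Scope": {"ComplianceResourceTypes": ["AWS::S3::Bucket"]},
        }
    )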

Sonrai Dig can also help remediate your AWS S3 implementation and enforce your controls, as well as provide ongoing monitoring of your identity and data risks.

Enable Multi-factor Authentication (MFA) Delete

Another way to enhance security is to make a bucket’s versioning configuration MFA Delete-enabled. 

When a bucket is MFA Delete-enabled, a bucket owner must include the ‘x-amz-mfa’ request header in requests to permanently delete an object version or change the bucket’s versioning state.

In addition, requests that include ‘x-amz-mfa’ must use HTTPS. The header’s value is the concatenation of your authentication device’s serial number, a space, and the authentication code it displays. Omitting this information from the request header will cause the request to fail.
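
A minimal sketch of enabling MFA Delete on a bucket’s versioning configuration follows; the bucket name and MFA device ARN are placeholders, and this call generally must be made with the bucket owner’s root credentials:

    import boto3

    s3 = boto3.client("s3")

    # MFA value is "<device serial or ARN> <current 6-digit code>", separated by a space.
    s3.put_bucket_versioning(
        Bucket="example-data-bucket",
        MFA="arn:aws:iam::111122223333:mfa/root-account-mfa-device 123456",
        VersioningConfiguration={"Status": "Enabled", "MFADelete": "Enabled"},
    )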

Encrypt All Data

All data should be encrypted while in transit (i.e., traveling to and from S3) and while at rest (stored on disks in S3 data centers). Data in transit can be protected with Secure Sockets Layer/Transport Layer Security (SSL/TLS), while data at rest can be protected with server-side encryption (SSE-S3 or SSE-KMS) or client-side encryption.
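
For example, a minimal sketch that sets default server-side encryption on a bucket; the bucket name and KMS key ARN are placeholders:

    import boto3

    s3 = boto3.client("s3")

    # Default encryption: every new object is encrypted with the given KMS key.
    s3.put_bucket_encryption(
        Bucket="example-data-bucket",
        ServerSideEncryptionConfiguration={
            "Rules": [
                {
                    "ApplyServerSideEncryptionByDefault": {
                        "SSEAlgorithm": "aws:kms",
                        "KMSMasterKeyID": "arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab",
                    }
                }
            ]
        },
    )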

Use S3 Object Lock

S3 Object Lock is a feature that lets you store objects using a write-once, read-many (WORM) model. By using Object Lock, you can prevent an object from being overwritten or deleted for a fixed period of time or indefinitely.
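
A minimal sketch follows, assuming a new bucket (Object Lock can only be enabled at bucket creation) and a hypothetical 30-day default retention; the bucket name is a placeholder:

    import boto3

    s3 = boto3.client("s3")
    bucket = "example-worm-bucket"

    # Object Lock must be enabled when the bucket is created
    # (add CreateBucketConfiguration if you are outside us-east-1).
    s3.create_bucket(Bucket=bucket, ObjectLockEnabledForBucket=True)

    # Default retention: objects cannot be overwritten or deleted for 30 days.
    s3.put_object_lock_configuration(
        Bucket=bucket,
        ObjectLockConfiguration={
            "ObjectLockEnabled": "Enabled",
            "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 30}},
        },
    )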

Enable Versioning 

Versioning can help protect your data from unintended user actions and application failures. When bucket versioning is enabled, Amazon S3 keeps every version of an object; if it receives multiple write requests for the same object simultaneously, it stores all of those objects.
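
Once versioning is enabled (for example, with the put_bucket_versioning call shown earlier), earlier versions of an object remain retrievable; a quick sketch with a placeholder bucket and prefix:

    import boto3

    s3 = boto3.client("s3")

    # List every retained version of objects under a prefix.
    resp = s3.list_object_versions(Bucket="example-data-bucket", Prefix="reports/")
    for version in resp.get("Versions", []):
        print(version["Key"], version["VersionId"], version["IsLatest"])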

Use Multi-Region Application 

AWS now offers Multi-Region Application Architecture that enables users to create fault-tolerant applications with failover to backup regions. This service uses S3 Cross-Region replication and Amazon DynamoDB Global Tables to asynchronously replicate application data across a primary and secondary AWS region. 
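
Under the hood, the S3 piece of this is Cross-Region Replication. A minimal sketch of one replication rule follows; the bucket names and IAM role ARN are placeholders, and both buckets must already have versioning enabled:

    import boto3

    s3 = boto3.client("s3")

    # Replicate new objects from the source bucket to a bucket in another region.
    s3.put_bucket_replication(
        Bucket="example-primary-bucket",
        ReplicationConfiguration={
            "Role": "arn:aws:iam::111122223333:role/s3-replication-role",
            "Rules": [
                {
                    "ID": "replicate-everything",
                    "Status": "Enabled",
                    "Priority": 1,
                    "Filter": {"Prefix": ""},
                    "DeleteMarkerReplication": {"Status": "Disabled"},
                    "Destination": {"Bucket": "arn:aws:s3:::example-secondary-bucket"},
                }
            ],
        },
    )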

Enforce Least Privilege Access

You can also prevent unauthorized access to S3 by enforcing least privilege access and granting permissions to identities only when specific tasks need to be performed. 

AWS offers several tools for implementing least privilege access including IAM user policies and Permissions Boundaries for IAM Entities, Amazon S3 bucket policies, Amazon S3 access control lists (ACLs), and Service Control Policies.
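 
For illustration, here is a sketch of a least-privilege identity policy scoped to read-only access on a single prefix of a single bucket, attached to a role with boto3; the role, policy, bucket, and prefix names are placeholders:

    import json
    import boto3

    iam = boto3.client("iam")

    # Grant only read access, and only to one prefix of one bucket.
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:GetObject"],
                "Resource": "arn:aws:s3:::example-data-bucket/reports/*",
            },
            {
                "Effect": "Allow",
                "Action": ["s3:ListBucket"],
                "Resource": "arn:aws:s3:::example-data-bucket",
                "Condition": {"StringLike": {"s3:prefix": ["reports/*"]}},
            },
        ],
    }

    iam.put_role_policy(
        RoleName="example-report-reader",
        PolicyName="s3-least-privilege-read",
        PolicyDocument=json.dumps(policy),
    )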

In addition, it’s a good idea to keep a running tab on all the human and non-human identities that have access to your S3 data. This can be easily achieved using Sonrai Dig.

Where to Start?

Identify and audit all your AWS S3 buckets

Identification of your Cloud assets is a crucial aspect of governance and security. You need to have visibility of all your Amazon S3 resources to assess their security posture and take action on potential areas of weakness.

Amazon provides the Tag Editor to help identify security-sensitive or audit-sensitive resources. You can then use those tags when you need to search for these resources.
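
The same thing can be done programmatically; the sketch below tags a bucket and then finds every S3 bucket carrying that tag (the bucket name and tag key/value are arbitrary examples):

    import boto3

    s3 = boto3.client("s3")
    tagging = boto3.client("resourcegroupstaggingapi")

    # Tag a bucket as security-sensitive (note: this replaces any existing tag set).
    s3.put_bucket_tagging(
        Bucket="example-data-bucket",
        Tagging={"TagSet": [{"Key": "DataClassification", "Value": "Sensitive"}]},
    )

    # Later, search for every S3 bucket carrying that tag.
    resp = tagging.get_resources(
        ResourceTypeFilters=["s3"],
        TagFilters=[{"Key": "DataClassification", "Values": ["Sensitive"]}],
    )
    for resource in resp["ResourceTagMappingList"]:
        print(resource["ResourceARN"])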

You can consider using the AWS S3 inventory to audit and report on the replication and encryption status of your objects for business, compliance, and regulatory needs. Or you can create resource groups for your Amazon S3 resources. For more information, see What Is AWS Resource Groups?
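
As a sketch, the following sets up a daily S3 Inventory report that includes the encryption and replication status of each object; the bucket names and configuration ID are placeholders:

    import boto3

    s3 = boto3.client("s3")

    # Deliver a daily CSV inventory of all object versions to a reporting bucket.
    s3.put_bucket_inventory_configuration(
        Bucket="example-data-bucket",
        Id="daily-security-inventory",
        InventoryConfiguration={
            "Id": "daily-security-inventory",
            "IsEnabled": True,
            "IncludedObjectVersions": "All",
            "Schedule": {"Frequency": "Daily"},
            "Destination": {
                "S3BucketDestination": {
                    "Bucket": "arn:aws:s3:::example-inventory-reports",
                    "Format": "CSV",
                }
            },
            "OptionalFields": ["EncryptionStatus", "ReplicationStatus", "Size"],
        },
    )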

Implement monitoring using AWS monitoring tools

Monitoring is an important part of maintaining the reliability, security, availability, and performance of AWS services. AWS provides several tools and services to help you monitor Amazon S3 and your other AWS services. For example, you can monitor CloudWatch metrics for Amazon S3, particularly PutRequests, GetRequests, 4xxErrors, and DeleteRequests. 
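
Note that request metrics such as GetRequests and 4xxErrors only appear once a metrics configuration exists on the bucket. A rough sketch, with a placeholder bucket and filter ID:

    import datetime
    import boto3

    s3 = boto3.client("s3")
    cloudwatch = boto3.client("cloudwatch")

    # Request metrics must be enabled per bucket before they appear in CloudWatch.
    s3.put_bucket_metrics_configuration(
        Bucket="example-data-bucket",
        Id="EntireBucket",
        MetricsConfiguration={"Id": "EntireBucket"},
    )

    # Pull the last 24 hours of 4xx errors for that bucket.
    resp = cloudwatch.get_metric_statistics(
        Namespace="AWS/S3",
        MetricName="4xxErrors",
        Dimensions=[
            {"Name": "BucketName", "Value": "example-data-bucket"},
            {"Name": "FilterId", "Value": "EntireBucket"},
        ],
        StartTime=datetime.datetime.utcnow() - datetime.timedelta(days=1),
        EndTime=datetime.datetime.utcnow(),
        Period=3600,
        Statistics=["Sum"],
    )
    print(resp["Datapoints"])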

Enable Amazon S3 server access logging

Server access logging provides detailed records of the requests that are made to a bucket. Server access logs can assist you in security and access audits and help you learn about your customer base.
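
A minimal sketch, assuming a separate target bucket that already permits log delivery; both bucket names are placeholders:

    import boto3

    s3 = boto3.client("s3")

    # Write access logs for the data bucket into a dedicated logging bucket.
    s3.put_bucket_logging(
        Bucket="example-data-bucket",
        BucketLoggingStatus={
            "LoggingEnabled": {
                "TargetBucket": "example-access-logs",
                "TargetPrefix": "example-data-bucket/",
            }
        },
    )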

Use AWS CloudTrail

AWS CloudTrail provides a record of actions taken in Amazon S3 by an identity (human or non-human), such as a user, a role, or an AWS service. You can use information collected by CloudTrail to determine the request that was made to Amazon S3, the IP address from which the request was made, who made the request, when it was made, and additional details. For example, you can identify CloudTrail entries for Put actions that impact data access, in particular PutBucketAcl, PutObjectAcl, PutBucketPolicy, and PutBucketWebsite. When you set up your AWS account, CloudTrail is enabled by default. You can view recent events in the CloudTrail console. To create an ongoing record of activity and events for your Amazon S3 buckets, you can create a trail in the CloudTrail console.

When you create a trail, you can configure CloudTrail to log data events. Data events are records of resource operations performed on or within a resource. In Amazon S3, data events record object-level API activity for individual buckets. CloudTrail supports a subset of Amazon S3 object-level API operations such as GetObject, DeleteObject, and PutObject; for more information about how CloudTrail works with Amazon S3, see the AWS CloudTrail documentation. In the Amazon S3 console, you can also configure your S3 buckets to enable object-level logging for CloudTrail.
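
As an illustration, here is a sketch that turns on S3 data-event logging for one bucket on an existing trail; the trail and bucket names are placeholders:

    import boto3

    cloudtrail = boto3.client("cloudtrail")

    # Log object-level API calls (GetObject, PutObject, DeleteObject, ...) for one bucket.
    cloudtrail.put_event_selectors(
        TrailName="example-management-trail",
        EventSelectors=[
            {
                "ReadWriteType": "All",
                "IncludeManagementEvents": True,
                "DataResources": [
                    {
                        "Type": "AWS::S3::Object",
                        "Values": ["arn:aws:s3:::example-data-bucket/"],
                    }
                ],
            }
        ],
    )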

AWS Config provides a managed rule (cloudtrail-s3-dataevents-enabled) that you can use to confirm that at least one CloudTrail trail is logging data events for your S3 buckets.

Enable AWS Config

Several of the best practices listed above suggest creating AWS Config rules. AWS Config enables you to assess, audit, and evaluate the configurations of your AWS resources. AWS Config monitors resource configurations, allowing you to evaluate the recorded configurations against the desired secure configurations. Using AWS Config, you can review changes in configurations and relationships between AWS resources, investigate detailed resource configuration histories, and determine your overall compliance against the configurations specified in your internal guidelines. This can help you simplify compliance auditing, security analysis, change management, and operational troubleshooting. 
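
For example, a quick sketch that checks which buckets a Config rule (such as the managed rule created earlier) currently flags as noncompliant:

    import boto3

    config = boto3.client("config")

    # List resources that the rule currently reports as NON_COMPLIANT.
    resp = config.get_compliance_details_by_config_rule(
        ConfigRuleName="s3-bucket-public-read-prohibited",
        ComplianceTypes=["NON_COMPLIANT"],
    )
    for result in resp["EvaluationResults"]:
        qualifier = result["EvaluationResultIdentifier"]["EvaluationResultQualifier"]
        print(qualifier["ResourceId"], result["ComplianceType"])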

Sonrai Tracks Data Access Within AWS

The Sonrai Dig service delivers a complete risk model of all identity (human and non-human) and data relationships, activity and movement across cloud accounts, cloud providers, and third-party data stores. Built from the ground up to address fundamental cloud data security and compliance concerns, the solution delivers the following risk control workflow:

  • Discover: Automatically visualize and map identity and data across your clouds
  • Classify: Leverage machine learning to determine data type, importance, and risk
  • Audit: Continuously map permissions, configuration, and access to data
  • Protect: Use behavioral controls to detect and prevent theft

Implementing controls around what has access to data is fundamental to any data security and compliance program. Although each unique cloud provider delivers services and APIs to manage identity and access to data for their stack, they are not standardized across all the stacks available (e.g., Amazon, Google, and Microsoft), do not address third-party data stores, and often require use of low-level tools and APIs. Sonrai Dig resolves this problem through normalized views and control of cloud identity and data access, like your AWS S3 buckets.

Conclusion

AWS S3 is by far one of the most used AWS services available on the market because it is an easily-accessible, inexpensive service for data storage. AWS S3 is also a platform capable of serving important use cases, providing infrastructure solutions for many company technology needs. 

But this widespread usage has led to some problems—mainly, negligently unprotected AWS S3 buckets. Without protection, information stored in an open Amazon S3 bucket can be browsed by scripts and other tools. Since the information in the bucket may be sensitive, this poses a critical security risk. 

Follow the above recommendations and reach out should you need help with governing your AWS S3 buckets.
