Some security experts have described the recent exposure of sensitive information on 198 million Americans, nearly all of the country's registered voters, as “the mother lode of all leaks.” Deep Root Analytics, the data analytics firm that left its AWS database exposed on the public internet for two weeks, is now facing its first class-action lawsuit. The uproar over the leak will likely continue for some time.
More than anything, this security incident highlights the need for organizations to protect their often-overlooked Infrastructure-as-a-Service systems such as AWS. The Deep Root Analytics data repository was an S3 bucket with no access protection, open to anyone who navigated to a six-character Amazon subdomain.
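An exposure like this one is detectable with a routine audit. As a minimal sketch using the AWS CLI, assuming already-configured credentials and a placeholder bucket name:

```shell
# Sketch: audit an S3 bucket for public exposure with the AWS CLI.
# "my-bucket" is a placeholder; credentials must already be configured.

# List ACL grants made via URI; grants to "AllUsers" or
# "AuthenticatedUsers" make the bucket world-readable.
aws s3api get-bucket-acl --bucket my-bucket \
  --query 'Grants[?Grantee.URI!=`null`]'

# Report whether the bucket policy allows anonymous access
# (this call errors if the bucket has no policy attached).
aws s3api get-bucket-policy-status --bucket my-bucket \
  --query 'PolicyStatus.IsPublic'
```

Running checks like these on a schedule, across every bucket in the account, shrinks an exposure window from weeks to hours.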
Implementing the right security strategy can prevent this kind of leak and help protect data from other threats as well. Although Amazon Web Services has invested heavily in security, the platform is not impenetrable. For example, AWS has sophisticated capabilities to prevent a denial-of-service (DoS) attack, but a large-scale attack could still overwhelm those defenses.
A security strategy also needs to protect against threats from insiders, privileged users and third parties such as vendors and partners. On average, enterprises experience about 11 incidents tied to an insider threat every month, whether from people acting maliciously or negligently. Additionally, a large percentage of breaches can be traced back to a third-party compromise.
Typically, cloud providers use a shared-responsibility model for security. AWS is no exception. Under this model, Amazon takes responsibility for the security “of” the cloud — its infrastructure, including the software, hardware and facilities hosting the services. The company is responsible for protection against intrusion, as well as detecting abuse and fraud.
AWS customers, on the other hand, are responsible for security “in” the cloud. In other words, organizations are responsible for the security of their content, of the applications they run on AWS, and of identity and access management. They must also monitor their own network and firewall configurations, as well as their operating systems.
How to Securely Use the AWS Infrastructure
The best practices below are not all-inclusive, but are a good start to help organizations ensure their AWS environment is configured securely and restricts unauthorized access. These best practices fall into four major categories: configuration, access, security monitoring and user authentication.
- Enable CloudTrail. This should be enabled across all AWS instances so logs can be generated for services, including those that are not region-specific, like CloudFront. CloudTrail’s API call history gives you access to a variety of data, such as resource changes, and allows you to investigate incidents and track compliance. Additionally, enable multi-region logging so you can detect activity in unused regions.
- Enable access logging for S3 buckets. This applies especially to the buckets that contain the log data captured by CloudTrail. Access logging allows you to track access requests, identify unauthorized access attempts and monitor activity that supports incident investigations.
- Require multifactor authentication for deleting CloudTrail buckets. This makes it much more challenging for an attacker who has compromised an account to delete the CloudTrail logs and cover their tracks.
- Turn on multifactor authentication for the root account. This should be done as soon as possible, as this account has access to every AWS resource. Don’t use a personal mobile device for the MFA. Use a dedicated device, which will decrease the likelihood of compromise due to a lost device or personnel change.
- Implement and enforce a strict password policy. At minimum, a strong password should have 14 characters and include at least one each of an uppercase letter, lowercase letter, number and symbol. Set a 90-day expiration for passwords and ensure the policy prevents reuse.
- Restrict access to CloudTrail logs. This will protect against the risk of unauthorized access due to compromised users and administrator credentials through a phishing attack.
- Don’t use access keys with root accounts. Doing so creates a direct vector for compromising the account and exposes every AWS service tied to it.
- Disable inactive accounts. Have accounts expire after 90 days of nonuse, reducing the risk of them being compromised.
- Restrict access to EC2 security groups. This helps protect against DoS, brute-force and man-in-the-middle attacks. Additionally, assign identity and access management (IAM) policies and permissions to specific roles or groups rather than to individual users.
- Restrict access to commonly used ports. Ports such as FTP (21), CIFS (445), MongoDB (27017), MSSQL (1433), SMTP (25) and DNS (53) should be reachable by required entities only.
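The CloudTrail and S3 logging recommendations above can be sketched with the AWS CLI. The trail and bucket names below are placeholders, and the commands assume credentials with CloudTrail and S3 permissions (the log bucket also needs a policy that lets CloudTrail write to it):

```shell
# Sketch: multi-region CloudTrail plus access logging on the log bucket.
# All names are placeholders.

# A multi-region trail captures API activity even in unused regions.
aws cloudtrail create-trail \
  --name org-trail \
  --s3-bucket-name org-cloudtrail-logs \
  --is-multi-region-trail
aws cloudtrail start-logging --name org-trail

# Server access logging on the bucket that stores the CloudTrail logs,
# so requests against the logs themselves are recorded elsewhere.
aws s3api put-bucket-logging \
  --bucket org-cloudtrail-logs \
  --bucket-logging-status '{
    "LoggingEnabled": {
      "TargetBucket": "org-access-logs",
      "TargetPrefix": "cloudtrail-bucket/"
    }
  }'
```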
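The password-policy and root-account recommendations map directly onto IAM calls. A sketch, assuming administrator credentials:

```shell
# Sketch: apply the password policy described above account-wide.
aws iam update-account-password-policy \
  --minimum-password-length 14 \
  --require-uppercase-characters \
  --require-lowercase-characters \
  --require-numbers \
  --require-symbols \
  --max-password-age 90 \
  --password-reuse-prevention 24

# Verify the root account: AccountAccessKeysPresent should be 0
# and AccountMFAEnabled should be 1.
aws iam get-account-summary \
  --query 'SummaryMap.[AccountAccessKeysPresent, AccountMFAEnabled]'
```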
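For scripts that pre-check candidate passwords against the complexity rules above, the policy can be expressed as a small POSIX shell function (the function name is our own, not an AWS tool):

```shell
# Sketch: check a candidate password against the policy above
# (at least 14 characters, with upper, lower, digit and symbol).
# Returns 0 if the password meets the policy, 1 otherwise.
meets_password_policy() {
  pw="$1"
  [ "${#pw}" -ge 14 ] || return 1
  case "$pw" in *[A-Z]*) ;; *) return 1 ;; esac
  case "$pw" in *[a-z]*) ;; *) return 1 ;; esac
  case "$pw" in *[0-9]*) ;; *) return 1 ;; esac
  case "$pw" in *[!A-Za-z0-9]*) ;; *) return 1 ;; esac
  return 0
}
```

Note that length and character classes are only a floor; the account-wide IAM password policy is what actually enforces the rules.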
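As an example of tightening a security group, the following sketch replaces a world-open SSH rule with one limited to a trusted range; the group ID and CIDR are placeholders:

```shell
# Sketch: close SSH (port 22) to the internet and reopen it only to a
# trusted CIDR. The group ID and address ranges are placeholders.
aws ec2 revoke-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 22 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 22 --cidr 203.0.113.0/24
```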
In addition to ensuring that your organization’s AWS infrastructure has the right policies and restrictions in place, best practices also need to be applied when deploying custom applications in AWS. By laying the groundwork for security, following best practices and avoiding common mistakes, organizations can mitigate the risks of using the public cloud and safeguard their sensitive information.
Sekhar Sarukkai is a co-founder and the chief scientist at Skyhigh Networks, driving future innovations and technologies in cloud security. He brings more than 20 years of experience in enterprise networking, security, and cloud service development.