A web developer recently learned the hard way what happens when the “shared responsibility” model for securing Amazon Web Services (AWS) falls apart. The CliffsNotes version of the story goes like this:
- Web developer published source code to what he thought was a private GitHub repository.
- An AWS access key was embedded in the source code.
- A bug in the GitHub extension for Visual Studio made the private repository public.
- Bots scanning GitHub repositories for access keys found it.
- Hackers used the key to spin up large numbers of EC2 instances to mine bitcoin.
- The web developer got stuck with a $6,500 bill.
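The chain above started with an access key sitting in source code. As a rough illustration of how such a key could be caught before ever being pushed, here is a minimal sketch of a pre-commit-style scan. It relies on the documented format of AWS access key IDs ("AKIA" followed by 16 uppercase letters or digits); the function name and the sample string are made up for this example, and real secret scanners use far more patterns plus entropy checks.

```python
import re

# AWS access key IDs follow a well-known format: "AKIA" followed by
# 16 uppercase letters or digits. (Secret access keys are harder to
# match reliably, so production scanners add entropy checks too.)
ACCESS_KEY_RE = re.compile(r"\bAKIA[0-9A-Z]{16}\b")

def find_access_keys(text):
    """Return any substrings that look like AWS access key IDs."""
    return ACCESS_KEY_RE.findall(text)

# The key below is Amazon's own documentation example, not a live key.
source = 'aws_key = "AKIAIOSFODNN7EXAMPLE"  # oops, hardcoded'
print(find_access_keys(source))  # ['AKIAIOSFODNN7EXAMPLE']
```

Wired into a pre-commit hook or CI job, a check like this fails the build whenever a matching string appears, so the key never reaches a repository at all, private or otherwise.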
In short, this anecdote illustrates the security risks that AWS presents and the role of the customer in exacerbating those risks.
No need to read the fine print
Amazon has made it very clear that securing customer data is a “shared responsibility” between them and the customer. Here’s an excerpt from the AWS Security Best Practices paper stating as much right from the get-go:
The shared responsibility model for infrastructure services, such as Amazon Elastic Compute Cloud (Amazon EC2) . . . specifies that AWS manages the security of the following assets: facilities, physical security of hardware, network infrastructure, and virtualization infrastructure.
. . .[Y]ou as the customer are responsible for the security of the following assets: Amazon Machine Images (AMIs), operating systems, applications, data in transit, data at rest, data stores, credentials, and policies and configuration. Specific services further delineate how responsibilities are shared between you and AWS.
When taking a closer look at our CliffsNotes tale, applications, credentials, and policies and configuration stand out as potential culprits . . . all things that were the customer’s responsibility.
No avoiding bugs
The obvious initial reaction is that GitHub should not have released a Visual Studio extension with a bug in it. But software is software, and bugs happen. That's just life, and it's why you can never be 100% certain that AWS access keys won't get stolen.
The author of the article realized as much, conceding: “When working with sensitive information, you can never be too careful, and this is where I assumed something would work a certain way when in fact it didn’t . . . . Security should always be a multi layered approach.”
The skinny on AWS access keys
For the uninitiated, AWS access keys are like gold to hackers. If the bad guys get their hands on them, it’s like turning a kid loose in a candy store with no spending limits. They can fire up EC2 instances, delete customer data, change configurations, and wreak all kinds of havoc (and you get stuck with the bill and having to explain what happened).
In our GitHub example, bots continuously scan GitHub repositories looking for AWS access keys. Our victim’s downfall was that the repository he believed to be private was, thanks to the bug, actually public, and his keys were exposed to those scans.
Was the GitHub breach preventable?
Even in a debacle like this one, controls could have been in place to detect a hacker trying to use the keys inappropriately or maliciously.
For instance, Imperva offers a deep solution for securing the AWS Management Console. Its Imperva Skyfence product prevents account takeover attacks targeting the AWS console by automatically identifying suspicious and anomalous behavior in real time. It constantly monitors for unusual activity, whether time of day, endpoint device, location, or a specific action (like spinning up 120 EC2 instances in a matter of minutes).
If something fishy is detected, you can set in motion any one of a number of remediation options: send an immediate alert, block the specific action or account altogether, or apply risk-based multi-factor authentication (MFA).
Most organizations would see workplace productivity and employee satisfaction plummet if workers had to verify their identities multiple times to access a cloud app. Far more effective is a workflow that triggers MFA only when something awry is detected, so employee productivity is not materially impacted. And beyond MFA, alerting or blocking also goes a long way toward limiting the damage from compromised keys.
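The kind of rate-based anomaly the article describes, such as 120 EC2 instances launched in minutes, can be sketched with a simple sliding-window counter. This is a toy illustration of the general technique, not how Skyfence or any real product works; the class name, thresholds, and sample timestamps are all invented for the example.

```python
from collections import deque
import time

class LaunchRateMonitor:
    """Flag bursts of instance launches within a sliding time window.

    Thresholds here are hypothetical; a real monitor would tune them
    per account and combine rate with other signals (location, device).
    """

    def __init__(self, max_launches=20, window_seconds=600):
        self.max_launches = max_launches
        self.window = window_seconds
        self.events = deque()  # timestamps of recent launch events

    def record_launch(self, timestamp=None):
        """Record one launch; return True if the rate looks suspicious."""
        now = timestamp if timestamp is not None else time.time()
        self.events.append(now)
        # Evict events that have aged out of the sliding window.
        while self.events and now - self.events[0] > self.window:
            self.events.popleft()
        return len(self.events) > self.max_launches

monitor = LaunchRateMonitor(max_launches=5, window_seconds=60)
alerts = [monitor.record_launch(timestamp=t) for t in range(10)]
print(alerts)  # first five launches pass, the burst beyond that is flagged
```

When `record_launch` returns True, the surrounding workflow decides the remediation: send an alert, block the action or account, or step up to risk-based MFA, exactly the options listed above.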
If you remember just one thing . . .
Heed the words of the AWS best practices paper. Not surprisingly, you’ll find similar language in Microsoft’s and other cloud providers’ documentation. The cloud is a different frontier that requires provider and customer to work hand-in-hand to secure business-critical data.
A customer not fulfilling his or her end of the “pact” could end up trending on Twitter for all the wrong reasons or, at a minimum, serve as inspiration for hackers to keep doing what they do best.