Are you prepared for the next cloud infrastructure cyber-attack?

September 18, 2019
If you think you’ve got your bases covered better than Capital One, then you’re clearly not paying attention. Capital One has one of the best cloud security teams in the industry and has already contributed greatly to the security community by releasing open source tools like Cloud Custodian which can drastically help automate security, governance and compliance.
And yet here we are, scratching our heads and asking how Capital One could have been so thoroughly breached. In most hacking cases, the public is not privy to the details of the incident. In this case, however, most of the intrusion and exfiltration details are publicly available in court documents, offering us a rare opportunity to understand what happened and to improve our own security operations in the process.
As laid out in the FBI indictment, the hacker exploited a firewall misconfiguration and gained a shell on the EC2 instance. Since the EC2 instance had an attached IAM role, it had access to all the privileges assigned to that role. The hacker took advantage of this over-provisioned role, which in this case included the privilege to discover and exfiltrate personal identifying information for more than 100 million customers.
In a letter to Senator Ron Wyden, AWS CISO Stephen Schmidt states that "even if a customer misconfigures a resource, if the customer properly implements 'least privilege policy', there is relatively little an actor has access to once they are authenticated — significantly diminishing the customer's risk." He goes on to say that AWS "offers guidance and tools to help customers set up the right permission for their resources, which is the next stage of protection after the WAF".
There is no question that security is a top priority for AWS, which is precisely why it adds new features almost weekly. However, while AWS IAM is extremely powerful and robust, it is also enormously complex and difficult to master, and it is unrealistic to assume that developers deploying software applications have the expertise or resources to craft proper IAM policies.
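To make "least privilege" concrete: instead of granting a role broad S3 access, a policy can be scoped to a single action on a single bucket. Here is a minimal illustrative sketch — the bucket name is hypothetical, and a real policy would typically need additional statements:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::example-app-bucket/*"
    }
  ]
}
```

A role carrying only this policy could read objects from one bucket, but could not list other buckets or sync their contents elsewhere.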
The problem is amplified further when you are managing multiple AWS accounts with thousands of identities, many of them non-human (EC2 instances, API keys, bots, service accounts, etc.), with limited visibility into who is authorized to perform which action on which cloud resource (e.g. list S3 buckets). Moreover, it is virtually impossible for IAM governance teams to regularly verify and attest that the IAM policies they have created are doing exactly what they were intended to do.
The question often still asked is who is ultimately responsible for creating the proper IAM policies and keeping them up to date? According to AWS’s “Shared Responsibility” model, “vendors are responsible for security of the cloud; companies are responsible for security in the cloud.” In practice, this means Amazon owns and controls the cloud, and enterprises own and control everything that is done in it — putting the burden solely on the enterprise.
So, what can you do to protect your critical cloud infrastructure when the expertise and tools are in short supply? Here are three recommendations to help get you started:
1. Take an inventory of every machine and human identity (e.g. service accounts, bots, API keys, EC2 instances, etc.) that can touch your cloud. Understand what privileges each one has, and pay special attention to machine identities, which require much more regular oversight than human identities do. For example, if a non-human identity suddenly performs an action it has never performed before, on a resource it has never accessed, that could signal a potential problem.
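As an illustration of this first step, such an inventory can be sketched as a mapping from each identity to the set of (action, resource) pairs it has been observed performing. The event schema and identity names below are hypothetical stand-ins for parsed audit records, not a real CloudTrail format:

```python
from collections import defaultdict

def build_identity_inventory(events):
    """Group observed (action, resource) pairs by identity.

    `events` is a list of dicts with 'identity', 'action' and 'resource'
    keys -- a simplified stand-in for parsed audit-log records.
    """
    inventory = defaultdict(set)
    for e in events:
        inventory[e["identity"]].add((e["action"], e["resource"]))
    return dict(inventory)

# Hypothetical activity records
events = [
    {"identity": "ec2-role/app", "action": "s3:GetObject", "resource": "bucket-a"},
    {"identity": "ec2-role/app", "action": "s3:ListBucket", "resource": "bucket-a"},
    {"identity": "api-key/bot", "action": "ec2:DescribeInstances", "resource": "*"},
]
inv = build_identity_inventory(events)
```

In practice the events would come from an audit trail, and the inventory would also include the privileges each identity has been *granted*, not only those it has used.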
2. Look for privileges that have been granted but have never been used and consider revoking those unused privileges to avoid any unnecessary risk.
In the Capital One case, the attacker used the stolen credentials from the "****WAF-Role" IAM role to list the S3 buckets to which the EC2 instance had access. On its own, that command is harmless; here, however, the attacker used it to identify available S3 buckets and combined it with a second command ("sync") that ultimately exfiltrated 106 million personal records to another location.
Remarkably, the Justice Department criminal filing states that neither command had ever been used before. The ability to spot and remove, with granular precision, any privilege that is deemed high-risk and has never been used (or has not been used in a very long time) could, in theory, have eliminated this attack vector without impacting the application's functionality.
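The core of this recommendation fits in a few lines: compare the privileges an identity has been granted against those it has actually exercised, and treat the difference as candidates for revocation. All of the privilege and identity names below are hypothetical:

```python
def unused_privileges(granted, used):
    """Privileges an identity holds but has never exercised --
    candidates for revocation under least privilege."""
    return sorted(set(granted) - set(used))

# Hypothetical role: privileges attached vs. actions actually observed
granted = ["s3:ListBuckets", "s3:GetObject", "s3:PutObject", "iam:PassRole"]
used = ["s3:GetObject"]
revocation_candidates = unused_privileges(granted, used)
```

Before revoking, each candidate would of course be reviewed with the application owners, since an unused privilege may still be needed for rare, legitimate operations.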
3. Continuously monitor your identities, their actions and the resources they access. The cloud is dynamic: new identities, privileges, roles, services and resources are added and deleted daily. So once you have addressed steps 1 and 2 and established a baseline risk posture you are comfortable with, continuously monitor your cloud infrastructure for anomalies and suspicious behavior. This becomes even more critical as machine identities continue to outnumber human identities.
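A baseline of this kind can be as simple as the set of (action, resource) pairs each identity has performed historically; any event outside that set is flagged for review. A minimal sketch, with hypothetical identity, action and resource names:

```python
def is_first_time(baseline, identity, action, resource):
    """True if this identity has never performed this action on this resource."""
    return (action, resource) not in baseline.get(identity, set())

# Baseline built from historical activity (hypothetical data)
baseline = {"ec2-role/webapp": {("s3:GetObject", "assets-bucket")}}

# A never-before-seen bucket-listing call by this role would be flagged
alert = is_first_time(baseline, "ec2-role/webapp", "s3:ListBuckets", "*")
```

A real system would weight such alerts by risk (listing buckets followed by a bulk "sync" is far more suspicious than a one-off read), but even this naive check would have fired on the two commands described in the filing.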
Could the breach have been avoided if the firewall had been configured correctly, the IAM permissions associated with the WAF right-sized, the S3 buckets labeled as sensitive, and a mechanism put in place to alert the team to anomalous or suspicious behavior? Probably, but hindsight is always 20/20.
While the recommendations outlined in this blog are by no means a silver bullet, with careful planning and the proper tools any enterprise can improve its cloud security posture and significantly limit the blast radius of an attempted breach or incident such as the one Capital One experienced.
For a deeper analysis of the technical aspects of the breach, we recommend you read these excellent articles:
Mora Gozani and Thuy Nguyen, CloudKnox Security, Inc.