Show Notes

153 - Web3 Universal XSS, Breaking BitBucket, and WAF Bypasses

Three vulns discovered in Netlify’s Next.js library, which is heavily used across cryptocurrency sites due to its web3 support. With that context in mind, the CIA triad (confidentiality, integrity, availability) is interesting for web3, as integrity is critical: the data coming from a trusted site needs to be trustworthy, since most users won’t go digging through the blockchain to verify that a particular address or transaction matches.

1. Open Redirect on “_next/image” via Improper Path Parsing

The _next/image handler is used for loading local resources and takes a url parameter from the user, which is used to send a mock HTTP request. The problem is that an unencoded backslash can be passed in the url parameter and will be processed by Next.js. This is problematic because Next.js servers have a default behavior of redirecting users who try to access an inaccessible directory (such as \). By passing ?url=/\/\example.com/..., they were able to trigger this behavior and redirect the user to an arbitrary host.
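
The reason a “relative” redirect like this escapes the site: WHATWG-compliant URL parsers (browsers, Node’s URL) treat "\" like "/" in http(s) URLs and skip any run of leading slashes before the authority. A quick sketch (hostnames are illustrative):

```javascript
// The Location value the server echoes back — looks like a local path.
const location = '/\\/\\example.com/account'; // i.e. /\/\example.com/account
// Resolving it the way a browser would lands on an external host.
const resolved = new URL(location, 'https://victim-site.com');
console.log(resolved.href); // the user ends up on example.com, not victim-site.com
```

This is the same parsing quirk behind many classic `/\/` open-redirect payloads.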

2. XSS and SSRF on “netlify/ipx” via Improper Host Parsing due to Reliance on Vulnerable “unjs/ufo” Library

A similar sort of issue to the first one: /_ipx/w_200/X would take X as the resource to load. Netlify’s ipx module allows external resource loading, but only from whitelisted hosts. The bug here is that the URL parsing done by unjs/ufo (which Netlify relies on) was flawed in a similar way to the first issue. By passing a whitelisted host, followed by an encoded backslash, followed by a malicious host, Netlify would follow the attacker-provided URL despite it not being a whitelisted domain. XSS can be achieved by serving a malicious SVG.
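
This is an instance of the parser-differential class: the component doing the whitelist check and the component doing the fetch disagree about what the host is. A hypothetical sketch (this is NOT the actual ufo code, and the real payload ordering differed in detail — it’s just the shape of the bug):

```javascript
// Naive host extraction: take everything up to the first "/", ignoring "\".
function naiveHost(url) {
  return url.replace(/^https?:\/\//, '').split('/')[0];
}

// The "\" would be sent %5C-encoded on the wire.
const payload = 'https://evil.com\\whitelisted-cdn.com/payload.svg';

const checkPassed = naiveHost(payload).endsWith('whitelisted-cdn.com'); // true
const actualHost = new URL(payload).hostname; // "evil.com" — "\" acts like "/"
```

The whitelist check sees a trusted suffix while a spec-compliant fetcher sends the request to evil.com.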

3. Universal XSS and SSRF on “netlify-ipx” via Improper Handling of “x-forwarded-proto” Header and Abusable Cache Mechanism

The final issue was also in the /_ipx/w_200 route: they discovered the IPX code reads the x-forwarded-proto header to allow other protocols to be used. If the header was specified, its value was simply taken as-is and used to build the request URL, like so:

${protocol}://${host}${id.startsWith('/') ? '' : '/'}${id}

By simply providing an x-forwarded-proto header containing a malicious URL, the whitelist can similarly be bypassed to load a resource from an attacker-controlled URL, giving the same impact as the second issue but without the drawback of needing a whitelisted domain in the payload.
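
Plugging a full URL into the protocol slot of that template lets a "?" swallow the legitimate host as a query string. The header value below is illustrative, not the exact payload from the writeup:

```javascript
const protocol = 'https://attacker.example/?'; // attacker's x-forwarded-proto value
const host = 'victim.netlify.app';
const id = 'avatar.png';

// The URL-building template from the IPX code above.
const url = `${protocol}://${host}${id.startsWith('/') ? '' : '/'}${id}`;
// → "https://attacker.example/?://victim.netlify.app/avatar.png"
console.log(new URL(url).hostname); // attacker.example
```

Everything after the "?" is just query string, so the request targets the attacker’s host.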

Two argument injections were found in Bitbucket Server, though only one of them was exploitable. The first was in the /rest/api/latest/projects/~USER/repos/repo1/browse endpoint, where an at parameter could be provided. They found they could smuggle --help through the parameter, though there was no security impact here as there were no useful arguments to inject.

The second was the /rest/api/latest/projects/PROJECTKEY/repos/REPO/archive endpoint, which turned out to be far more useful. This endpoint takes a prefix parameter, which maps to the --prefix argument of the underlying git archive sub-command. Because the process builder launches the sub-command through Java’s native Java_java_lang_ProcessImpl_forkAndExec function, which receives the arguments as a NUL-delimited byte array, passing encoded null bytes lets you inject additional arguments: each NUL starts a new argv entry. Furthermore, the sub-command supports an --exec argument giving the path to the git-upload-archive binary. That argument is essentially just passed to /bin/sh via execve(), so arbitrary command execution is straightforward.
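
Conceptually, the smuggled NUL bytes (%00 on the wire) turn one parameter value into several argv entries. A sketch with made-up values, not the exact exploit string:

```javascript
// The attacker-supplied "prefix" parameter, with embedded NUL bytes.
const prefix = 'ok/\u0000--exec=/path/to/evil\u0000';

// forkAndExec sees one NUL-delimited block; splitting on NUL shows the
// argv entries the child process actually receives.
const smuggled = `--prefix=${prefix}`.split('\u0000');
console.log(smuggled); // ["--prefix=ok/", "--exec=/path/to/evil", ""]
```

One parameter, three argv slots — which is how --exec sneaks into the git archive invocation.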

The vulnerability as reported was closed as not a vulnerability, but it did uncover a bug in the Sanitizer API.

When matching elements against the SanitizerConfig, it first determines the element’s kind as regular, custom, or unknown. The bug was that namespaced elements (like those under the math or svg namespaces) would be classified as unknown rather than regular, and so were never checked against the baseline. The consequence was that namespaced elements that should be dropped would not be.
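
A simplified sketch of that classification logic (NOT the actual Sanitizer API internals — function and constant names here are made up for illustration):

```javascript
const HTML_NS = 'http://www.w3.org/1999/xhtml';

// Hypothetical kind classifier mirroring the described bug: anything outside
// the HTML namespace lands in "unknown" and skips the baseline check.
function elementKind(localName, namespaceURI) {
  if (namespaceURI !== HTML_NS) return 'unknown'; // svg/math elements end up here
  return localName.includes('-') ? 'custom' : 'regular';
}

console.log(elementKind('div', HTML_NS));                          // "regular" — baseline applied
console.log(elementKind('script', 'http://www.w3.org/2000/svg'));  // "unknown" — baseline skipped
```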

The root of the report is also an interesting concept: using prototype pollution to target a sanitizer configuration. It’s not something I’ve seen discussed before, so I want to highlight it here. It was decided that doing this is just “JavaScript being JavaScript,” so it won’t be fixed and should remain a viable technique going forward.
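
A minimal illustration of the technique (the property name allowElements is just an example config field, and the pollution would come from some gadget elsewhere in the app):

```javascript
// The site passes what it believes is an empty/default sanitizer config.
const config = {};

// An attacker with a prototype-pollution gadget sets a default for everyone.
Object.prototype.allowElements = ['script'];

// When the sanitizer looks up the field, it inherits the attacker's value.
const effective = config.allowElements; // ["script"]

delete Object.prototype.allowElements; // cleanup
```

Because the config object itself never defined the field, the lookup silently falls through to the polluted prototype.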

A look at just what can be accomplished when webhooks are allowed to access internal services: Cider Security specifically examines abusing GitHub and GitLab webhooks to reach internally hosted Jenkins instances.

The core idea is pretty simple: a lot of companies running Jenkins want their source-code management (SCM) service, like GitHub or GitLab, to be able to kick off continuous integration pipelines on their CI service, like Jenkins. To enable this, they may have a blanket allow rule for the SCM’s IPs to access Jenkins. This whitelisting may not be limited to kicking off specific events, however, so this post explores what could be done if the webhooks can reach any Jenkins endpoint.

They found a few things out.

  1. GitLab will follow redirects, but GitHub will not. While webhooks are POSTs, a 302 redirect will strip the body and make a GET request, so GitLab redirects can be used to reach GET endpoints on Jenkins. Though since each request is stateless, abusing this to access authenticated GET endpoints is more complicated.
  2. Jenkins will accept parameters as part of the URL or the POST body. So while you cannot control much of the webhook body, you can provide all the necessary parameters through the URL.

This allowed them to craft three attack chains against Jenkins:

  1. Authentication brute-force. This could be performed using both GitHub and GitLab webhooks, effectively just crafting a POST request with the username/password in the URL. Both services let you see the response, so whether or not the request gets redirected to the Jenkins main page indicates whether the login was successful. While this is theoretically possible, I do question its practicality given the speed of the attempts.
  2. Access to authenticated GET endpoints. This only worked with GitLab webhooks: the login request also accepts a from parameter, and on successful login the request is redirected to that endpoint. By setting it to (for example) a pipeline’s /consoleText endpoint, the console output of a pipeline could be accessed.
  3. The last chain is remote code execution, abusing a 2019 vulnerability that could result in RCE from a single GET request. This vulnerability was discovered and documented by Orange Tsai.
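
Since all the controllable input rides in the webhook target URL, chain 2 boils down to registering a crafted URL as the webhook destination. A sketch — the internal hostname and credentials are made up, and while j_username/j_password/from are the standard Jenkins login form fields, treat the exact endpoint as illustrative:

```javascript
const jenkins = 'http://jenkins.internal:8080'; // hypothetical internal instance

// Everything goes in the URL because the webhook POST body can't be controlled.
const target = new URL(`${jenkins}/j_spring_security_check`);
target.searchParams.set('j_username', 'admin');
target.searchParams.set('j_password', 'guess1');
// On successful login, Jenkins redirects to "from" — GitLab follows it.
target.searchParams.set('from', '/job/deploy/lastBuild/consoleText');

console.log(target.href); // register this as the GitLab webhook target URL
```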

Cool research post introducing a few ModSecurity rule bypasses abusing different parser errors in the ModSecurity Core Rule Set. While the issues specific to ModSecurity are probably patched by now, in some cases the same sorts of parsing issues can occur on the backend.

  1. Content-Type Confusion - This abuses the regex matching used to decide which body parser to use. A Content-Type header like application/x-www-form-urlencoded;boundary="application/xml" gets interpreted as XML because of the presence of application/xml in the header; similarly, application/json will trick the parser into parsing the body as JSON. The XML case was most useful, as ModSecurity would ignore comments in the XML, allowing content to be smuggled through.
  2. multipart/form-data parsing issues. The first issue presented isn’t actually an issue in ModSecurity but a potential issue in the backend parser and whether or not it handles empty body sections correctly. ModSecurity does, but on some backends (the author calls out PHP here) the parser will continue past the empty section, joining it with the next header and body until the next separator. The second issue was ModSecurity treating a lone \n as a \r\n, leading it to parse two (or more) parameters where most backends would only see one.
  3. Charset Confusion (CVE-2022-39955). ModSecurity only looks at the first charset and blocks anything that isn’t utf-8. Using a Content-Type header of application/json;charset=utf-8;charset=utf-7, the author was able to smuggle in UTF-7 content that was opaque to ModSecurity but decoded properly by the Express backend.
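
The Content-Type confusion comes down to substring-style matching on the header. A simplified stand-in for the body-processor selection (NOT the real CRS rules — just the shape of the mistake):

```javascript
// Hypothetical parser selection: a match anywhere in the header wins,
// so a quoted boundary parameter can steer which parser runs.
function pickParser(contentType) {
  if (/application\/xml/.test(contentType)) return 'XML';
  if (/application\/json/.test(contentType)) return 'JSON';
  return 'URLENCODED';
}

// The boundary value smuggles "application/xml" into the header.
const tricky = 'application/x-www-form-urlencoded;boundary="application/xml"';
console.log(pickParser(tricky)); // "XML" — but the backend parses urlencoded
```

The WAF inspects the body as XML (ignoring comments) while the backend still sees a urlencoded form — the gap the smuggling rides through.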