Defcon Quals, Dead μops, BadAllocs, Wordpress XXE
This transcript is automatically generated, there will be mistakes
Hello everyone. Welcome to another episode of the Day Zero podcast. I'm Specter, with me is zi. In this episode we've got some updates on the UMN story with the Linux kernel, some various vulnerabilities involving Brave and allocators, and the return of speculative execution attacks, again. So we've got a lot of topics in this episode, so we'll just jump right into our first topic here, which is the update on the University of Minnesota and their dust-up with the Linux kernel maintainers over faulty patches that were made. So yeah, lots of people, including the Linux maintainers, thought it was in service of a bad-faith effort to intentionally inject vulnerabilities, tied to their hypocrite commits paper, which we talked about briefly last week. But there was a bit of a dispute around that; the researchers kind of denied it, saying that was a separate thing and it was already finished. So yeah, we covered it last week, though it was a little bit hard to cover because we didn't have all the puzzle pieces. Yeah, well, both UMN and the Linux kernel maintainers have put out statements. So, well, last week
there was actually a bit more information than we ended up covering. We ended up talking a lot about how these more recent commits didn't feel like they were actually part of that hypocrite commits research, just because of the difference in, I guess, style, like allowing them to go out to the stable branches and stuff. It was a complete violation of all the practices they had indicated they were following. It turns out, actually, I believe it was last Saturday, so before we did the podcast episode, I just missed it, they had put out a statement. We had been speculating that UMN would put out a statement indicating that it really wasn't from that paper, if that were the case; turns out they did. They had a statement indicating that, yeah, that wasn't part of the hypocrite research, that the patches that actually caused all of the issues that kind of triggered everything ultimately were made in good faith. They were done improperly, though. They did indicate that the patches were from an automated system, so a static analyzer or scanner in this case. The research itself, they indicate, was about finding vulnerabilities introduced through patches, so kind of source scanning, but not exactly; that research itself hasn't been released yet, so we don't actually have the details, but those commits came from that paper. Honestly, I feel like the researcher there that had been making those commits kind of got gaslit on this one. There were a lot of people saying, oh well, he shouldn't have tried to lie about what they're doing and stuff, and it turns out, no, he was apparently being truthful that this wasn't a bad-faith commit. But yeah, so that statement did come out, and they have actually kind of made it clear. They've also released a full disclosure file which includes information about the actual hypocrite commits.
And while the paper only talks about three, apparently they had actually submitted five. One of them was submitted accidentally, in the sense that they had just set up git send-email using that account, forgot, and sent a proper patch in under it, so it got included in the study. None of the commits actually hit the stable branches; none of them worked. Well, one of them was accepted, but it was the control: it was a proper patch and didn't have any bugs. As for the other patches, none of them were accepted, so none of them ended up hitting anyhow, although they were rejected for reasons unrelated to the bug that they were introducing, or trying to introduce, let's say. I feel like this makes the kernel's response seem like an overreaction to everything. Like, yes, they should have followed the proper protocol for indicating that, hey, these patches are coming from an automated source. It's problematic that they didn't. Either way, maintainers had seen that; one maintainer asked, before all this blew up, hey, did you find this using a source scanner, or did you find this manually? And he was told it was a source scanner. So they had that fact, and the maintainer didn't call them out for not marking it as being from a source scanner. I mean, I think it's something that more warrants a warning, not an outright ban of the entire university and just the assumption of bad faith. Yeah, so both
the researchers, or rather the university, and the Linux maintainers kind of put out an apology of sorts; I think it's less of an apology from the Linux side. So on the Linux side, they stated that 42 patches, I believe out of 190, were being pulled and reverted. Greg originally threatened to pull all of them, as we covered last week, but after the anger died down, they didn't quite go that far. And there was, you know, a bit of an overreaction and some heat-of-the-moment stuff said by Linux maintainers, but they also stated that, from their point of view, posting patches from a static analysis tool without disclosing immediately that it was a tool is like experimentation on non-consenting humans and is a malicious act, which I think is extremely over-exaggerated.
That's not the only stretch. I didn't see that statement, or at least didn't read it the same way you did, but yeah, that feels like a stretch. Like, I get them calling the original research human experimentation; it was in fact that sort of experimentation. They were looking to see if the patch reviewers, humans, would indeed let the patches through, so there are clear questions about that form of the research, and that ultimately is why this paper did end up getting withdrawn from Oakland '21, or at least that's part of the reason. They don't want their process to be an example of how this research should be performed. Obviously there was a lot of pressure on it; I imagine the organizers might have considered removing it just because of all the bad press, regardless of whether or not the authors had withdrawn it. And for what it's worth, I will also mention that there's an apology letter from the authors of that hypocrite research, and the authors of that paper are not the same as the authors of the commits that ended up triggering all these
problems. It basically seems like that classic issue of painting everyone with the same brush. The Linux maintainers were just like, oh, this university has done this before, so let's just assume those are all made in bad faith. Like, I don't know, it was kind of weird from the maintainers' point of view; they really jumped the gun on it. And I kind of said this last week, but it has been disappointing to see how many people have just jumped to the executioner's block for these researchers, when literally the only thing they did that was bad was they didn't disclose immediately that their patches were the result of static analysis, which is like a benign mistake. Like, there have been other organizations and entities that have made commits in the kernel that have done worse, for sure. It's just that they got grilled because they got tangentially attached to that other research. Like, this is already
obviously. Yeah, yeah. Like, this is going to show up whenever someone Googles the commit author's name; this is going to come up for a long time.
Yeah, background checks are not going to be fun for these researchers, unfortunately. That said, hopefully, if this were caught in a background check, if they were interviewing for a job or something, they would hopefully get the chance to explain it, but
I don't know. But given how quickly some people jumped on this, that's also why I wanted to re-cover this topic, to kind of make it clear what happened. Because I do kind of believe that the corrections need to be as publicized as the original incident. Of course, it won't be, but I do think they should
be. Yeah, it sucks because there's a few, you know, tech channels that I watch that I really respect, and they completely took the Linux maintainers' side of the story. They didn't even cover the arguments on the other side or any of the corrections. So it's really unfortunate, but yeah, it seems like a lot of people just either don't care or are just not being made aware of it. So yeah, we kind of wanted to bring a little bit of attention to that. But at least there was somewhat of an acknowledgement of wrongdoing on both sides; there was a little bit from the Linux maintainers saying that some of the maintainers didn't agree, you know, with Greg's original threat. And yeah, in
fairness, we're kind of wrapping up the whole Linux kernel maintainer community as one group and UMN as another, but there were maintainers who spoke out against it pretty early on too. So, you know, it's not just everybody painted with one brush, but, you know, we're speaking in generalities
yeah, we shouldn't try to make it like a battle where there's only two sides. That's fair; the way I'm talking about it kind of makes it sound that way. But yeah, I mean, in regards to the maintainers, we've covered, like, the posts we were showing, Greg, and the people that were in that thread. But yeah, that's good to bring up; obviously not all the maintainers are in one bucket. It also seems Greg hasn't really commented on this issue. I don't know if you've seen anything that would
contradict that? No, I haven't. I didn't look for much this week, though, but last I saw, I had not seen any real comment from Greg, which, I mean, fine. We'll let the comments come out from, what's the board's name, whatever the board was that actually made this statement. The Linux Foundation Technical Advisory Board, TAB. There we
go. Yeah. So I kind of want to see a statement from Greg, or some form of acknowledgement that he jumped the gun, because I feel like if there was somebody who kind of does deserve a little bit of blame in this situation, it would probably be him. But I don't know, I guess we'll see as time goes on. If he doesn't address it, then that's totally within his right, but yeah, we'll see. Alright, so we'll move on to our next topic, which got some people in a bit of a bad mood, I guess, which was GitHub, who put out a proposal and call for feedback for updating their policy regarding exploits and malware.
And I think it's important that this was a call for comments, not just like, hey, we're changing this immediately, here's what the new policy is, but here's what we are suggesting as the new policy, what's wrong with that? And there was a pretty strong reaction, somewhat, I'd argue, undeservedly. But at the same time, the infosec community tends to jump on any sign of censorship of security tooling, exploits, and write-ups; people are very territorial, like every time a YouTube video gets removed, somebody's, you know, on the horn about it. And it's kind of fair, because this is, in a sense, our livelihood; our jobs are based around some of these disclosures. So when we start to see that being restricted, people speak up, and I'm not entirely against pushing back on it, but it does feel a little bit excessive. That said, in this case, the main change, or at least the main change that I would point to, is that they had the line, and this is something that you cannot do, to be clear: "contains or installs any active malware or exploits, or uses our platform for exploit delivery, such as part of a command and control system." And they replaced that with: "contains or installs malware or exploits that are in support of ongoing and active attacks that are causing harm." That, I think, is the central change. They have a little bit that defines certain terms, and there's a few other changes in there, but this I think was the big one. And in my initial read of it, you kind of read that as: they've removed the necessity of it being active malware, and now it's malware or exploits that are in support of ongoing and active attacks. But what is ongoing and active? I mean, if a pentester is using it, is that an active attack? There are some really old vulnerabilities still being used by some random worm; is that an active attack? And "causing harm" is really vague.
That said, they've come out with a couple comments on the actual PR clarifying that their point with making these changes was that they felt the existing terms were a little bit hostile towards dual-use software, like Metasploit or things like scanners. Basically, all sorts of exploit software could be considered dual-use, because you can use it for legal or illegal purposes. So they wanted to narrow the policy in scope with regards to what they would restrict, not increase it, or at least that's what they say their goal was. They've acknowledged the feedback, the many comments that have come through, and I presume that means we'll see yet another PR after this; this one probably won't go forward as is. It sounds like they've made more or less the proper response here. I think it's fair to talk about it and have it pointed out, but it's not much to really be worried about. I think their intent here is good: to make it more clear what you can or can't do, and to narrow the actual scope, since it is somewhat vague in the original terminology
also. So I do think it's weird how it seems like the implementation completely flies in the face of the goals, because I read the blog post they made, and they stated that they wanted to focus on addressing the gray area by allowing code in support of security research while disallowing the actively harmful content, and trying to clear up some of the ambiguity around exploits and malware and whatnot. But this does the opposite. I actually think the way they had it worded before was a lot clearer than the way they have it worded in this PR
Ah, so, originally, at first, I was thinking, yeah, I actually have a little comment in my notes there, that it seems like, well, it's one thing for them to say they want to protect from the overly broad language, but their change seems to become more broad by removing the qualifiers. But actually looking at the change, they basically removed the "active malware" wording, in a sense, but they pushed it over and included it with the exploits: malware or exploits that are in support of active attacks. You could kind of read that as being more vague, but I could understand why somebody who was writing that might think of it as actually being more specific. Now they're saying the attacks have to be active and causing harm. Now, harm ends up being vague, and there's questions about that, but I don't know. I can understand where somebody's coming from when they read this and think that it's actually becoming a little bit more specific, and I could understand why, obviously, we read it as being less specific. I mean, that's kind of the point of getting
feedback. Yeah, it kind of depends on which side of the fence you land on, I guess. Yeah. I mean, like you said, I think it's good that they didn't just make this change and push it through like a lot of other platforms do. I like that they have the call for feedback, and, like, I was scrolling through the comments, and there were some really angry people.
But there were a lot of comments, and a lot of angry comments. And yeah, I think they handled the process well.
The back and forth that came on the PR was generally positive. Like, you know, outside of the kind of silly comments that people just ignore, I guess, I think there was some good back and forth. So yeah, I mean, I think people were a little bit too angry, and, again, I sound like a broken record, but they kind of jumped the gun, assuming the worst case. Yeah, exactly, jumped the gun on going after GitHub and saying, oh, they're coming after us again, it's just another platform going after us. But no, I do think they have a reason, and a good reason, for wanting to address these issues. I just don't think this is a particularly good implementation, which, like you said, is the whole point of a call for feedback. So yeah, hopefully we see a better implementation at some point, and if we do, we'll probably cover it on the podcast in one form or another. Alright, so Defcon Quals happened this weekend, keeping on the train of things people were mad about. So yeah, Order of the Overflow, I think this is their third year organizing Defcon CTF. Defcon CTF will be a little bit more interesting this year than last year, because last year it was totally online, even for the finals. This year Defcon is going to a hybrid model, which we wanted to talk about on the podcast, but we totally forgot. At the time, I think Defcon is going to be in person this year, as well as online. So yeah, the CTF will have the in-person component, I believe. I don't know if they said anything definitive on that. Yeah, I'm not
sure how the CTF, like the main game, is actually going to get organized this
year. It'll be interesting to see, because I could see them not wanting to force teams to go there when travel is going to be difficult. If you're outside of the US and you're not vaccinated, you can't go, basically. Well, even if you're in the US and unvaccinated, you can't go; they have vaccination as a requirement. But anyway, it'll be interesting to see how the finals play out. But yeah, Quals happened this weekend, and there were some complaints around the challenges. The biggest thing that I saw going around was the fact that there were no web challenges this year; it was basically all reversing, pwn, and crypto, I believe. So, you know, people who like their web challenges were kind of rubbed the wrong way. The one comment that somebody made that I wasn't totally sure on was: did LegitBS do web challenges, or was that something that Order of the Overflow themselves kind of introduced in their previous years? I wasn't really around for the LegitBS era, so I don't really
know. Yes and no is kind of the answer on that one. Generally speaking, no, LegitBS didn't do a lot of web challenges, as far as I remember. I do have memory of one web challenge that ended up being a kernel exploit; there was an RCE, so it got tossed in as web, but then there was an actual kernel exploit to get the flag. So, you know, you could call that web, but not really; the RCE I remember being pretty easy. That said, when I was trying to find which Defcon qual that actually was, I couldn't figure it out, so it might have been during the DDTEK era, which was like '09 to '12 or '13; I don't remember exactly when LegitBS took over. It might have been in that era too. That said, as far as I could tell, there's only really one year where LegitBS did web challenges to any significant degree, and they were kind of web-slash-crypto challenges for the most part. That was, I believe, 2013, which had the 3dub category, which is your www category. So generally speaking, Defcon, or the Defcon qualifiers, has been more binary focused, which is fair, because the Defcon main game is binary focused, and this is the qualifiers for the main game. But yeah, in terms of having some really quality web challenges for Defcon Quals, it does seem like that's kind of something Order of the Overflow
started. Okay, so even if you put the web stuff aside, there were still some unhappy campers around even the binary challenges. And that mostly comes down to the fact that it falls into that category of CTF challenge where it's made difficult, but not in an interesting way; it's just made annoying, just to be annoying. An example of that: I looked at a few of the write-ups for the challenges. To make it clear, I didn't participate in the qualifiers this year, and you didn't either, zi. You also
stayed out. Normally I at least kind of look at a couple challenges; this year I just kind of lost interest. It's not that I have an issue, really, with Order of the Overflow, but playing their challenges in past years, I just personally haven't really enjoyed them, and that could definitely be beyond just me. I just haven't really enjoyed the challenges, so I haven't been playing, and every year it's just been less and less. This year I just didn't
play. Yeah. So I looked at some of the write-ups for some of the challenges this year, and it was interesting, because looking at a pwn challenge write-up, you'd expect the write-up to be a little bit about finding the issue, but also about exploiting the issue and getting the flag. A lot of the time, pretty much 80% of the write-up was reversing. And I say "a lot of the time" as if I looked at all the challenges; at the time I looked, there were only, I think, four write-ups up. But in two or three of them, most of the write-up was reversing, and that's because for their pwn challenges they put out stripped, statically linked binaries with no source code or anything. No source code is fairly common, I think, for pwn challenges, but intentionally statically linking it and then stripping it as well makes it so you have to basically reverse the binary to even find the issue, unless you just throw it in AFL or something.
But I mean, the reversing, I mean, it is part of the game, but when it comes to pwn challenges, just tossing that on there is kind of an artificial challenge. The challenge isn't hard because of needing to do this reverse engineering; it's just tedious. The challenge itself is still kind of focused on whatever the exploit is, you know, it being a pwn challenge, pwn being the primary part. The reversing is just this extra step that is kind of unnecessary to the actual exploit and artificially makes it harder. With challenges that I've done, I kind of follow the idea of making a challenge, and this is really easy to misunderstand, but keeping challenges as easy as possible, but no easier. And that might come off as being like, okay, just make easy challenges; that's not really what I mean. I just mean, don't add any extra fluff that artificially makes something more difficult. If the core of the challenge isn't difficult, that's just not a difficult challenge, and as long as the core of the challenge isn't being obscured, then it's kind of okay. I haven't really set out how to explain my philosophy, but
I guess, to try to summarize it, it would just be: keep your challenge centered on the crux of it. Don't just throw up a bunch of barriers so you have to go through layers to even get to the issue; that just makes your challenge annoying, not fun. So I think that was kind of the problem that people had with these challenges: they were intentionally made more difficult than they needed to be. Like, the actual vulnerabilities, when I read the write-ups and got to the vulnerabilities, they were pretty straightforward. It was like, oh yeah, it gave me an arbitrary file write, now I just use that to, you know, do stuff; it wasn't really complex on the exploitation angle. It was just a lot of work to get there, and that seemed to be a common thread. There were people who posted their timelines, and they were like, I spent all of Friday and Saturday reverse engineering this, and then on Sunday I was able to exploit it. People don't want to do that in a 48-hour CTF. And it basically makes Defcon challenges manpower oriented; it's not down to interesting design, it's just how many people you can throw at the problem. That's basically what it turns it into, and when you have these massive teams competing, you can't really compete against them when you make the problems oriented like that. If you don't have the number of team members, you are screwed; you're not going to win. So that's why I think people had a lot of problems with the challenges, and that seems to be kind of what the previous years have been like to me with Order of the Overflow as well. I played a couple of the challenges last year, and I wasn't a big fan either.
I will say, I do think OOO has had some really cool ideas in their challenges. Like, I don't fault them on the crazy ideas, like going multi-arch for one of the challenges, I think it was last year or the year before. They've had some really cool ideas for the challenges; it's just the implementation tends to make the crux of the challenge they're designing this artificial hindrance or hurdle to get over, rather than actually making it an enjoyable part of the challenge. So I don't know, it's a difference in design. Like I said, I just personally haven't really enjoyed OOO as the organizers; it's nothing really against them, they just haven't been for me, so I haven't played, and I can't speak about this year's quals. That said, I've seen a lot of comments like that. The other comment I've seen in response is, you know, the old-school players were fine with all the pwn, and it's just people complaining now. And I mean, CTF has become very different than it was even five, well, ten years ago. Just the difficulty of the challenges, the time involved, the difficulty of exploitation has risen quite a bit since then too. But in order to make challenges that are kind of hard for everybody, like, quals needs to be hard; they can't just have a bunch of easy challenges. So I get the idea that this is a qualifier for one of the biggest or most important games in CTF, the Defcon main game; it's got to be hard. So my issue isn't really with the difficulty of the challenges, but the actual design, I guess, and I feel like other people are kind of echoing that same statement: they're hard, but not in the fun
way. Yeah, and this is anecdotal, because obviously I don't talk to every CTF player that participated, but it seemed like overall the sentiment was pretty negative, echoing what you were saying. So Order of the Overflow, on their Twitter, put out this tweet that rubbed some people the wrong way. They had this old-school CTF vs. modern CTFers meme, which was like, "OMG too guessy, bad labels, I only know web and crypto," just kind of making fun of, I guess, newer CTFers that were complaining. There were people that were really angry in the tweet replies, saying this is gatekeeping and it's anti-beginner
which, quals is not a beginner CTF, I'll say that simply. Like, you know, that's not the issue. I do like the "ok boomer" response, though.
So I do think it's kind of a dumb meme, and it's like Order of the Overflow just doesn't want to deal with the criticism. They're like, oh, whatever, we feel like we're the best and these people are just complaining for no reason, so let's just meme on them. I feel like they're kind of avoiding legitimate criticism with the way they've responded. That said, with the anti-beginner stuff, I mean, yeah, Defcon is not meant to be a beginner CTF, and I kind of liked the psifertex tweet reply, from Binary Ninja. He said there's plenty of other CTFs that are tailored specifically at a wider variety of skills or skill levels; not every CTF has to be everything to everyone. And I think that last sentence, not every CTF has to be everything to everyone, is a good statement to echo, because it's easy to get into the mindset of thinking a CTF should be tailored towards you, like, oh, I didn't like it, so the CTF just sucks. It's easy to kind of fall into that trap. But yeah, I mean, there's lots of CTFs out there. And I don't want to go too deep into this discussion because we're already 30 minutes into the show; we'll probably do a separate discussion video on this. I do think CTFs in general are kind of heading in a direction that I personally don't like. A lot of the challenges are going towards: here's a product that we reintroduced an n-day into, go and exploit it. It's like the creativity seems to be kind of lost in the CTF space in general.
But did you see that happening in any Defcon one? I mean, I've seen that with some, but I know that's kind of a thing
I don't think Defcon specifically had an n-day one, but I could be wrong on that. I think that's absurd, though.
That's something that's been around for quite a while; it's just been a type of CTF I don't bother
with. Yeah, but it just seems like those kinds of, I guess, filler challenges are being thrown in more and more. It's like, we can't make really hard challenges for these teams, so let's just take a real-world bug and throw it at them. It's kind of the lazy approach, in my mind, but
fair, I just hadn't really seen that. So, to be clear, like I already said, I didn't play this year, and I've kind of been stepping back out of CTF since like 2015, not really playing as much, just casually, if I
do. Yeah, I do think trying to play CTF competitively, like trying to place first or something, as an individual is kind of a fool's errand at this point, and
it depends on the CTF; that's not uncommon in the competitive ones. And someone in chat mentions they can't imagine thinking a professional CTF should be beginner-friendly. Like, yeah, I mean, when it comes to Defcon Quals, it's got to be hard, but really, with any of the qualifier CTFs, they've got to be difficult, and that's completely fair. Yeah,
it's a gate to the finals. I
mean, it is literal
gatekeeping. Yeah, so yeah. I mean, I think we will do a discussion video around CTFs and, I guess, our thoughts on them, so I won't go too deep into that in this discussion. Overall, I just wish that Order of the Overflow was a little bit more receptive to the feedback instead of trying to fight against it with these memes and stuff. Because, like you said, I think they have some cool ideas; I just wish the implementation was a little bit more thought out and more interesting than just, let's make this as hard as possible by throwing all these annoying gotchas in there. They put red herrings in challenges a lot too, so that's another thing I'm not a fan of. But anyway, yeah, Defcon Quals did happen. For those wondering about the results, PPP placed first, not really much of a surprise there; I mean, PPP is pretty much one of the best teams in the world. But yeah, if you want to check out the results, they have the scoreboard up, and there are also write-ups on CTFtime and whatnot; there should be more on there as time goes on. So feel free to check those out if you're interested, but yeah, overall it didn't seem like the vibe was too positive on the quals this year. Alright, so up next we have BadAlloc, which was a quick post by MSRC about vulnerabilities in memory allocators. I kind of put vulnerabilities in quotes a little bit, because I'm not totally sure if I agree with that phraseology, but we'll get into that, I
guess. Yeah, we had a bit of a discussion about it in the Discord about this issue. I mean, I'm on the side of this being a vulnerability. And as it is, well, they have 25 CVEs for it, so 25 separate vulnerabilities. But the thing here, and why they've called it BadAlloc, is that effectively what they found is various allocators, in particular ones widely used by real-time operating systems, like embedded software, usually constrained environments, would have issues in calculating the size of certain allocations, leading to integer overflows within the allocators themselves. So I think one of the more common examples was in the calloc implementations: you would provide both a number of elements and a size of each element, and then it would allocate you a contiguous memory block that could contain that many elements of that size. So the classic way that it could implement the calloc interface is multiply, you know, number of elements times size of each element, and allocate that space. The obvious issue there being that an integer overflow is possible when you do that multiplication, and if it doesn't check for it, it'll actually allocate a smaller space than what you asked for, resulting in a heap overflow. So when an allocator does that, that's definitely a vulnerability; the allocator is in the wrong, that is a bug, that is an issue there. The actual case in which it's going to be exploitable, the program more or less has to kind of have some issues also, to be trying to make such a giant allocation. But I put the blame on the allocator for not returning a null or returning an error in the case that it can't actually service the allocation, rather than having the integer overflow in the multiplication or other
calculation. Yeah. So I think the overflow you mentioned is an issue, and I guess I kind of said earlier that I wasn't totally sure about calling it a vulnerability. Okay, I'll say it's fine to call it a vulnerability, because it is an issue. That said, my problem was more along the lines of what you were talking about: your application has to be either doing insanely large allocations, or letting users pass insane, arbitrary values to it. At that point, I feel like your code is already on some shaky ground; that, to me, in and of itself is an issue, just letting a user allocate as much memory as they want like that. Now, maybe it might not lead to memory corruption if this issue wasn't in the allocator, but you have to have a very specific circumstance for this issue to actually impact you. And the reason that I take a bit of an issue with it is MSRC really tried to play this issue up, you know, they did the named-vulnerability thing, calling it BadAlloc, and they did an advisory for IoT devices and whatever. It's like, if you were going to be affected by this issue, you probably already have bigger problems with your code base. I mean, it's IoT, so you already did have bigger problems. But yeah, I think MSRC was really trying to play this up a lot compared to how impactful it actually is.
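To make the calloc case concrete, here's a minimal sketch in C of the unchecked multiplication versus a checked variant. The function names are illustrative, not taken from any particular RTOS allocator:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Naive calloc-style allocator: the nmemb * size multiplication can
 * wrap around. For example nmemb = SIZE_MAX / 2 + 1 with size = 2
 * wraps to a total of 0, so the caller gets a tiny (or zero-byte)
 * buffer while believing it holds billions of elements: a heap
 * overflow waiting to happen. */
void *naive_calloc(size_t nmemb, size_t size) {
    size_t total = nmemb * size;        /* silent unsigned wraparound */
    void *p = malloc(total);
    if (p) memset(p, 0, total);
    return p;
}

/* Checked variant: refuse the request outright if the multiplication
 * would overflow, which is the behavior a correct allocator should
 * have. */
void *checked_calloc(size_t nmemb, size_t size) {
    if (size != 0 && nmemb > SIZE_MAX / size)
        return NULL;                    /* would overflow: fail loudly */
    size_t total = nmemb * size;
    void *p = malloc(total);
    if (p) memset(p, 0, total);
    return p;
}
```

The fix is one comparison before the multiply, which is part of why the blame arguably sits with the allocator rather than with the caller making an oversized request.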
Yeah, the other thing is just calling it BadAlloc. Like, part of the reason for nicknaming a vulnerability is to be able to talk about the issue, to add more clarity by grouping them all together with a memorable name. This is just incorrect size calculations, or integer overflows. Like, we have a name for this already. It's not new.
Yeah, we're renaming integer overflows. So yeah, I mean, I feel like this was played up a bit, but it's fair to call out the issues. I believe uClibc was one of the ones they called out. Oh no, I think that's another topic I'm kind of conflating with this one. Yeah, because they talk about multiple
allocators, they do. Yeah, they do talk about at least one existing in like a libc, but not necessarily glibc.
Yeah, I was just trying to see if I could find exactly which libc they were talking about, but I couldn't. We do have a topic later on that touches on uClibc.
So I was kind of, yeah, that's this one: uClibc-ng, versions prior to 1.0.36.
Okay. There is something else later on with them too, but, um, yeah, okay, fair enough. So that kind of validates it a little bit. But anyway, yeah, overall it's a fair shout-out, but it was played up a little bit. And with that, we'll just get on with the show. For the
episode shout-out, this was Microsoft Section 52. I can find absolutely nothing about Section 52 besides this report, and they come up on like one other page that mentions
them. It's super top secret.
yeah. It is a very ominous name, Section 52. And it's like Area 51, or Area 52 if you want.
Sorry, Palo Alto Networks, they have Unit 42 here, which is one of their threat intelligence
groups. Yeah, it's like a military kind of term, you know, makes it sound more scary. Yeah, you never talk about Section 52. Exactly. All right, so we'll move into another one of our big topics, which is the return of speculative execution attacks. So unlike the last one we covered, this one affects both AMD and Intel, where the last one we covered a few episodes back was only affecting AMD. Even Arm is mentioned in the abstract this time around, actually. The three attacks are also a lot more impactful than the recent one we covered with AMD, because one of them is cross-domain and crosses the userland and kernel boundary. Another one is cross-SMT-thread. And the third one is transient execution that can leak secrets before the instruction is even dispatched to execution, which means that the previous speculative execution mitigations, like lfence, are basically useless here. So yeah, there's a lot of stuff to unpack there.
The cross-SMT one, I believe, is only AMD. So the vulnerability itself, or the timing side channel, exists within the micro-operation cache, and that is not shared across hyperthreads on Intel. So the cross-SMT-thread one only impacts AMD, or some AMD processors, not all of them. But kind of jumping forward to what the actual issue is: like Specter was just alluding to, it gets around things like lfence and existing mitigations. So that, I think, is kind of the novel thing here, the important thing. But the core of the issue, like I mentioned, is this micro-op, or uop, cache. Basically, when the processor goes to decode instructions, it reads in the bytes each processor cycle, fetches however many it needs, and decodes them, those are what they call macro-ops, into micro-ops. So that is the microcode that actually kind of implements each instruction. That is a kind of expensive process, so there exists a cache, so that instead of fetching and decoding, it can just stream them out of the cache instead. That creates the observable timing side channel, where depending on which instructions are executed, you know, they may be fetched out of the cache or they might have to be decoded; obviously, the cache is going to be much quicker. What's interesting about this one is, because it's in that instruction cache, or in that decoding process, it's before any of the instructions are executed, and that means that something like an lfence, which controls speculation across that boundary, acts at the actual execution stage, while the side channel happens before the instruction is executed or issued. So the existing mitigations basically just happen too late to actually have an impact on this one. So I thought that's a really neat area. The paper ends up going into details about the actual replacement strategy in use for the micro-op cache.
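At a high level, the observation primitive is just "time how long a region of code takes." Here's a minimal sketch of that harness shape in C, and it is illustrative only: the real attack uses a cycle-accurate counter (rdtsc on x86) and code regions carefully sized against the uop cache's capacity and replacement policy, none of which is reproduced here. The function names are made up for the example:

```c
#define _POSIX_C_SOURCE 199309L
#include <assert.h>
#include <stdint.h>
#include <time.h>

/* Generic timing harness: measure how long a code region takes with a
 * monotonic clock. A real uop-cache attack would compare runs where
 * the region's micro-ops stream out of the uop cache against runs
 * where they must be fetched and decoded again; that delta is the
 * side channel. */
uint64_t time_region_ns(void (*region)(void)) {
    struct timespec start, end;
    clock_gettime(CLOCK_MONOTONIC, &start);
    region();
    clock_gettime(CLOCK_MONOTONIC, &end);
    /* modular arithmetic makes this correct even if tv_nsec wrapped */
    return (uint64_t)(end.tv_sec - start.tv_sec) * 1000000000ULL
         + (uint64_t)end.tv_nsec - (uint64_t)start.tv_nsec;
}

/* Stand-in for the code whose fetch/decode path is being measured. */
void example_region(void) {
    volatile uint32_t acc = 0;
    for (int i = 0; i < 100000; i++) acc += (uint32_t)i;
    (void)acc;
}
```

In the actual paper, distinguishing "streamed from the uop cache" from "decoded again" requires far finer resolution than a nanosecond clock gives you; this only shows the overall shape of the measurement loop.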
They also talk about how to figure out which code was recently executed: they have their own little displacement attack to determine which uops were recently executed. The paper gives you a lot more details, but the gist of it is being able to leak kind of a secret-dependent jump or branch, so you can know whether it took a branch or not. And that's really kind of the gist of
it. Yeah, so I'll kind of put the usual disclaimer when we talk about this topic here as well, just saying that we can't go super deep into the technical details, because the microcode level and CPU speculation is not really zi's or my field; we don't go that low-level. So yeah, I mean, this is kind of our understanding from reading the paper. Obviously, when you get to speculation and these kinds of performance optimizations with, like, a micro-op cache, it does get pretty complex at that level. So we are kind of hand-waving a little bit there. But the paper does go into the deeper details on that cache and how it works, and the differences between the traditional instruction and data caches and a micro-op cache.
Yeah. The paper gives a lot of the details on, like, the background and on exploiting it. But at a high level, I mean, you can kind of understand that it's a cache and there's a timing side channel. Jon Masters also did a good overview of it. I've included the
link. Oh, nice. Okay, I never saw this. So Just taking a quick glance over.
Yeah you'll learn everything you need while we're on the
podcast. Thank you. No, it looks like a very good resource. Yeah, I'm looking at it and finding some of the terms, and yeah, this definitely looks like it's worth a read. Just taking a quick glance at it and seeing what it's covering, it looks like it could be useful for getting you the background in a less dry manner, I guess. But yeah, what's important for this attack is the fact that this cache works differently than the caches previously used by speculative execution attacks, so those previous mitigations can't really defend against it, which means that in order to fix this issue, there are probably going to be more performance-detrimental patches needed. And I did see some interesting discussion on forums like Hacker News, with some very defeatist attitudes stating, like, if you keep looking for these types of issues, you'll find them, and saying the only way to stop it is to get rid of speculation entirely, which they don't want, because that would roll back years of performance improvements. So it seems like there was a common sentiment in that thread around, we want to keep speculation and keep the good performance, even if these issues just keep kind of popping up out of the woodwork. I'm a bit curious on your thoughts there, zi, on the security boundary and the performance, the security-performance trade-off.
Yeah. Like, it's definitely a risk. I don't think it's, like, world-ending. Like, the press release just calls it "defenseless" as the headlining word; that feels like a little bit of an exaggeration. In terms of this actual attack, I don't think this is a world-ending type of issue. Steps could be taken to mitigate it; people can decide to turn that off or on. Out of chat, filledthecat asks: can you get the JIT to produce invalid instructions, though?
I don't think so. I mean, apart from a bug, you probably shouldn't be able to, unless you
get memory corruption in the JIT engine or something. And that's
true. Another question, from rude_email: I wonder if wasm can have something like that again. You know, with wasm, because it's being interpreted, the actual instructions being executed, without a bug you shouldn't have invalid instructions coming out of the wasm chain
either. Yeah, wasm is kind of misleading in that way, I guess, in that it's easy to think it's actually running at like the assembly level when it's being interpreted. Yeah, so I mean, I kind of get the idea of not wanting to hurt performance, and it will be interesting to see what the patch ends up looking like for these issues when they're introduced into like the kernel and stuff. It'll be interesting to see the performance overhead, because some of the
initial... sorry to just interrupt you, but one thing here I did forget about: they didn't really disclose their timeline for the disclosure. It seems like they made this disclosure before any sort of patch was
out. Yeah, I haven't seen any references or links to patches for this issue. That doesn't mean they don't exist, but maybe they're kind of hidden or haven't been really publicized very much.
But yeah, well, one thing that gets mentioned is that one of the authors has a close connection with Intel, because he was an intern there, and that's all that's really mentioned about any sort of disclosure. So I don't know if there's much of any disclosure that happened beforehand, or if they just kind of dropped this. Obviously, it's going to be presented at a conference in June, ISCA, which I forget what that stands for.
anyway, only took us an hour to get
here. Yeah, yeah. Usually we're here like 30 minutes ago, but today's a really packed episode. So our first exploit topic is Brave on Android, and a vulnerability that can allow remote cookie stealing through a malicious web page. The issue basically comes down to a content:// URI that can be used to download the cookie jar from the app's own private data storage into the publicly accessible downloads folder. So
yeah, that's kind of where it finds a foothold. The issue there being that Brave configures itself so that the root directory for its FileProvider is its actual root, so you can access both public and private files, including the cookie file.
Yeah, so that's the main thing of the issue: you can leak across that boundary. So, as they reported, the blog post gets a little bit more interesting when they try to see if they can hit it from a remote context, because obviously, if you can pull the cookie jar file into the public downloads folder, that's great, but if you don't have access to the user's private, sorry, not private, public download folder through like a malicious application, then what good does that really do you? So they actually found a way to abuse the same functionality to get an HTML file downloaded, which would then use an iframe in a second stage, through another tab, to extract and send the cookie file to an attacker-controlled server. So I thought converting it into a remote attack was a pretty interesting aspect, but basically the issue just boils down to allowing that URI handler to access the private file store, and being able to cross that boundary of private and public storage on the device. This issue would require user interaction, so it's not a zero-click, since you would need to go to an attacker-controlled web page to start off the attack, but I mean, that's not a huge ask. So it's still a decently impactful issue, which Brave agreed with, as they rated it as a critical issue and paid a $500 bounty. In terms of timeline, the initial bugs were reported in the middle of May last year. Fixes were deployed in the middle of June, but there was apparently a regression or something, and the issue was then finally re-fixed on October 23rd. I thought that was kind of interesting, because it's not often we see regressions mentioned in disclosure timelines. But yeah, a relatively quick and straightforward issue, but it's kind of neat; it's not the kind of exploit we typically cover on the show. So yeah, just wanted to
I thought the actual exploit strategy was kind of cool, how they took advantage of it and turned it into a remote
issue. Yeah, for sure. So, we have our weekly OAuth-based issue, this one allowing takeover of Facebook accounts through CrowdTangle, which is a social media monitoring platform for brands to
use. And they do use OAuth. It's not really an OAuth issue, though.
Yeah, so the issue itself is a little bit weird; it's basically an issue in the control flow for the OAuth. And it basically comes down to being able to use redirects in order to steal limited access tokens from a Facebook user. So the OAuth callback for CrowdTangle will do this thing where it will set a cookie that contains a redirect URL on an authenticated page, then follow that, with the access token, to a potentially attacker-controlled redirect URL through cookies. Sorry, go ahead
and stop me. So the attack here: the redirect URL cookie is set apart from the actual callback. So just when you access any sort of page that requires a login on apps.crowdtangle.com, which is a first-party Facebook application, any page there that requires a login, it's going to set that cookie and then redirect you to the auth page, so you can just log in. The thing is the Facebook callback: when you log in using Facebook, it will use that same cookie when it redirects. So with Facebook, you go through the OAuth process, Facebook sends you back to this /facebook/auth page, and then that page will redirect you based on the cookie. The cookie's value needs to still be on apps.crowdtangle.com, but you control kind of the endpoint that it goes to. So then there's an open redirect. They don't really explain how this open redirect works; they just have this custom page, /a/x/x/hash, and that hash part can be some sort of encoded string that represents another URL that it'll send you off to.
Yeah, they kind of keep it vague on the open
redirect. Yeah, it's a little bit vague there, but it's enough information: you get an open redirect there. So Facebook redirects you back, and then you go to that custom page, which then takes you to the attacker page. So you start off a request first that just gets that cookie set to go to your custom page, one that requires a login, and then you get the user to go through the Facebook authentication, which redirects them to /facebook/auth, which redirects them based on the cookie, which redirects them to the custom page, which redirects them based on the hash to the attacker page. And they just set response_type=token to actually get the token in the URL, sending it along through all of these pages.
Yeah. So you do get that limited access token through the fragment part of the URL, which is one way they fixed this: they made it so it's no longer in the fragment part, it's sent as a parameter, I believe, if I remember right from how they did the fix. But yeah, I mean, that access token was pretty limited on its own; it couldn't give you compromise of the account. So it wasn't a full attack with just the one issue.
Yeah, it was pretty much read-only to the GraphQL database. So basically, you could read everything, but you couldn't actually modify much of anything.
Yeah. They were able to use it as a foothold, though, to pull the CSRF token used in the auth flow through GraphQL queries. And the way they do that is: they send a device login request with some first-party app to get a user code, then they take that user code and combine it with the stolen access token of the victim, send a request with that over to GraphQL, and that returns a CSRF nonce that's attached to that user code. So once they have both the user code and the CSRF token, they can send another request, to the graph device login status endpoint, to receive a first-party access token, which is much more privileged and can then be used to take over the account. So it's a bit of a complex attack, and it involves multiple stages. The reasoning behind those stages, though, is that it allows account takeover without any user interaction, which is kind of what Facebook requires in order to acknowledge it as an issue and pay out for it. So that's why they went through all those steps. On Facebook's side, multiple steps were taken to fix the attack: for one thing, the open redirect was fixed, and the CrowdTangle access tokens can't be used to access the GraphQL endpoint directly anymore, so they can't perform that upgrade attack for upgrading to the better access token. They also switched the callback so that the token is sent as a parameter instead of the fragment, like I mentioned earlier. All in all, the researcher got a 30k bounty from Facebook, so not bad at all; that's a pretty nice payout. And yeah, it's one of those issues where it's not a straightforward, you noticed it was missing an authentication check; it was kind of a complex flow issue. It does suck that we didn't get any details on the open redirect, but for the purposes of this article, that's not really totally relevant. They also might not have been able to talk about it, for whatever reason.
So I don't blame them, but it would have been nice to see. It might not be all that interesting; they mentioned the hash can be manipulated, so it's probably just a URL-encoding type deal. It's probably not all that
interesting. Yeah, so yeah, that's fair enough. All right, we'll move on to WordPress. So SonarSource put out a blog post about a WordPress 5.7 XML external entity issue that they found. Zi, I'll let you take this one away, because I think you'll be able to speak on this one better than I can, since you have more experience with
XXE, for sure. And the XXE part barely matters with this one. Basically, with XXE, in theory you can get like code execution from it, SSRF, through external XML entities. The vulnerability here is pretty stupid, kind of funny. In versions of PHP prior to eight, the XML loader, they would call this function, libxml_disable_entity_loader(true). What this would do is it would disable XML entities, or well, external XML entities; I'm not sure if it would disable all entities or not, but they would call that, and it would prevent the attack. On PHP eight and above, they wouldn't call that function, because it's been deprecated. It's deprecated because, by default, it won't resolve the entities. But when they actually make the call to simplexml_load_string, they have this flag added there, LIBXML_NOENT, which means it's going to enable entities and it will substitute all entities, so the output will have no entities remaining in it; that's kind of where the name comes from. So what that ends up doing is it will enable remote or external entities. Basically, on versions of PHP eight and higher, because it calls with that flag, even though the default is not to have entities enabled, it will go ahead and resolve them, because you never called anything to tell it otherwise. So, a pretty stupid issue, just not being aware of the side effects of the flags.
Yeah, I do like seeing those kinds of issues, because they're kind of fun to talk about. It's easy for a developer to not fully read what a function does and the implications of it, and I've definitely been bit by that before too. So it's one of those ones that's very forgivable, I guess, even though it is kind of a silly issue. But yeah, so that's the first of two Sonar posts. The next one is an interesting blog post around PHP, which talks about a vulnerability in Composer, which anyone who works with PHP a lot probably knows about, because it's a very popular dependency manager. And it's also an interesting target for supply chain attacks, as the title of the article suggests. Now, in order for Composer to be compatible with the various version control software the developers might be using, it contains drivers for supported version control software, like git, SVN, etc. And those drivers contain a supports routine, which will take a URL and try to run a command using the specified version control software to check if it's like a valid repo, and if that driver should be used. The problem, though, is that the URL that gets passed across is injected into a command that gets run, but it's not properly sanitized against argument injection. So if an attacker passes like a double dash at the start of the string, they can inject an arbitrary option or argument into git, or whatever other version control tool is in play. So the first place they highlight this is in the git driver, in the supports routine, but this issue is present in other drivers as well, such as SVN and hg, or Mercurial, which I believe is the one they actually take to getting a remote shell. So this is a cool bug that exists in multiple places. At first, the impact seems kind of limited, because getting control over that URL as an attacker is kind of a high ask, because you would already kind of need
access to what's getting passed in. But what if this could be used to attack something that relies on Composer, such as Packagist? Well, that's where the issue gets a lot more interesting, because they ended up discovering that, through Mercurial, they were able to find an option they could inject, which was --config, which you could use to set up like shell aliases. One small
correction there. I believe actually Composer calls Packagist itself, so if you had a package on Packagist, Composer would then look it up. I had initially written a summary that I gave you, Specter, that was wrong. I'm not sure if that's all that clear, but
no, sorry, it's not just your summary; the way I read it was that it was the other way around. They query
Packagist to obtain the metadata, and then that metadata is what gets fed into like SVN or whatever. So that's where it gets the URLs, where you can either download the distribution, so like an already pre-built archive, or you can download the source. It'll get those URLs from Packagist. So then you can add your malicious repo onto Packagist and get your code execution through
there. Okay, so you get the vector through Packagist, but then you can also compromise the Packagist server? Okay, that's a little bit weird, the way that it's worded then, because
I'm not sure about the exploiting-the-Packagist-server part, actually. Well, the reason I say that is,
Yeah, yeah, they mentioned, I just lost it. They say, yeah: it allows us to execute arbitrary system commands on the packagist.org server. So that's where I was thinking that it was the other way around. So maybe it's just worded a little bit strangely and I misread it; if so, then I apologize.
I mean, I'm sure Packagist also then uses Composer,
so yeah. It's probably like a symbiotic thing, where Composer uses Packagist but Packagist uses Composer, so you can kind of get that
then. Something people can look into; obviously we've overlooked this.
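The argument-injection hazard at the core of this can be made concrete with a small C sketch. The helper names here are mine, not Composer's: when a user-controlled string that starts with a dash lands in an argv position where an option is accepted, the child VCS process parses it as an option. The usual mitigations are rejecting leading dashes, or placing a literal "--" before the untrusted argument so option parsing stops (the conventional getopt end-of-options marker, which git and hg honor):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Returns true if 'url' is safe to pass as a positional argument,
 * i.e. it can't be mistaken for an option by the child process. */
bool safe_as_positional(const char *url) {
    return url != NULL && url[0] != '-';
}

/* Build an argv for e.g. "hg identify <url>" with a literal "--"
 * separator, so the URL is always treated as positional even if it
 * begins with dashes. Returns the argc used. */
int build_vcs_argv(const char *argv[6], const char *tool,
                   const char *subcmd, const char *url) {
    int i = 0;
    argv[i++] = tool;
    argv[i++] = subcmd;
    argv[i++] = "--";   /* everything after this is positional */
    argv[i++] = url;
    argv[i] = NULL;     /* execvp-style terminator */
    return i;
}
```

A payload like `--config=alias.identify=!<shell command>` is exactly the kind of string the first check rejects; it looks like a URL-shaped argument to the caller but like an option to Mercurial.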
Yeah, I mean it's kind of an interesting situation though so that's why
it's a little bit confusing. But regardless of the actual supply chain aspect of it, I think the vulnerability is kind of neat. I mean, argument injections in general are kind of neat to see, just because you need some very specific situations to be able to take advantage of them; not every program has an attack that you can do with an argument injection. So even when you have it, it's not always exploitable, so it's neat to see when you do have that case. You know, regardless of the supply chain
aspect. Yeah, it's a cool bug class, for sure, I agree. It's also one of those things where, with argument injection, you don't necessarily consider it when you're writing the code; you don't consider the fact that they could inject those double dashes and specify an argument. It's just something that's easy to escape your notice, I guess. So I always like those kinds of issues. With that said, though, we'll move on to some IoT stuff. Now, we have an advisory from IoT Inspector about vulnerabilities in Libre Wireless modules, which seem to use Android as a base for the IoT devices. As is the case with IoT, the issues are kind of jokes. The first issue is an authentication bypass, because the backend endpoints just didn't have any authentication checks on any of them, presumably. So yeah, they have this frontend where they prompted for a password, but that was literally just for the frontend; if you just called the underlying backend APIs, you could just use them. So,
yeah. That's the thing, it's very meme-worthy. Like, the first one, they're like, okay, yeah, there's, you know, ADB over TCP, so you get a root shell. Well, it'll expose that; it's not enabled by default, but, you know, just set ADB mode; the actual configuration to enable it doesn't require authentication. I actually tweeted out about the second issue there, the fact that there is a getpass. So they have this binary service running on the quad-7 port, on port 7777, a custom protocol, but they have a getpass command. Why? Why is there a getpass, and why doesn't it ever require
authentication. Yeah, it's a pretty wild endpoint to have, and you can basically use it to request the device configuration in plain text. And there was another problem there, too, that was kind of related: there was an info leak through another command. Yeah, you could leak NVRAM, which contained things like the Wi-Fi password and account tokens; that could also just be read out in plain text with no authentication. So, just some really strong memes out of some
very meme-worthy issues, heh. It
was. Yeah, that's really wild. filledthecat in chat says getpass is probably used by the frontend to validate the password. I could see that, maybe. Yeah, that would be
well. So it's not like an endpoint called getpass, filledthecat; it's still part of their custom protocol here: sending the null bytes there, ending with getpass, and then it gets the password in the response,
which could still be, like, utilized.
So, could be, if you somehow have access like that. But still, if you're writing this code, you've got to feel like something's wrong
here. This just screams, we don't care. I think it's impossible to make these kinds of issues without knowing about it. I feel like it's just one of those things where they do it and they're like, well, do we really need to put in the effort here and care about this? Maybe not, and they just continue on. So yeah, these issues are just kind of messed up. I often say the issues with IoT are pretty low-hanging fruit; this seems even worse than many of the issues we've covered with IoT. Like I said, it's like they didn't even really try. The timeline was also a bit of a mess, because at first they struggled to get the vendor to even confirm any details without signing an NDA. Luckily, the issues did get fixed within four or five months, but yeah, it's not super encouraging to see NDAs being thrown around in the vulnerability disclosure process by a vendor, of all people. NDAs are somewhat common in other areas, like if you're selling to a non-vendor, like maybe Zerodium or something, but going directly to a vendor and them asking you to sign an NDA, that's a red flag; it's not a good sign, to put it that way. So yeah, I mean, we kind of say the response to these issues is where we put more stock, and if a company responds well, even if the issues are memes, we kind of cut slack there, and it's like, okay, at least they responded well. Here it seems like it was even a mess in trying to address the problems. Oh yeah, it's hard to even throw them a bone in that respect with this story. But yeah, if you're looking for some fun bugs that are just really stupid, this is the blog post for you, I think. But yeah, not too much more to talk about there. Our next topic is a little bit more interesting, though, and it is the macOS Gatekeeper bypass. This was published by Patrick Wardle; he does some macOS stuff.
I think we've covered some topics around him before, but I don't have those on hand. Anyway, this talks about a macOS Gatekeeper bypass, Gatekeeper being the mechanism Apple uses to enforce code signing and make it a real pain to run anything that isn't verified by Apple, to protect the user against malware and malicious code and whatnot. Well, this bug was a logic bug in the policy daemon, which determines what to do with a file: whether or not to let it run, or prompt with Gatekeeper, or whatever. The way that's tagged is through a custom file attribute they call the quarantine attribute, and these attributes are checked on downloaded scripts and executable files. It's also supposed to be checked on bundled applications, which contain the Mach-O executable files, scripts, whatever. The problem is, in the case of bundled apps, it's possible in the path for script-based applications to fool the policy daemon into thinking it's not actually a bundled application. That's because they check whether it's a bundle by checking if the application contains a property list, or Info.plist file, which is basically a metadata file containing a list of properties like the app's main executable and stuff like that. So if you just don't include that file, it thinks, okay, this is not a bundle, and then it takes a different path where it just sets that allow flag. Because of that confusion, with the script-based application being misclassified, that allow flag gets set, the script gets run, and there's no prompt or blocking of that happening, even though it's not a signed or verified application or script. So yeah, it's kind of a complex issue.
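To make that classification flaw concrete, here is a minimal sketch in C. This is hypothetical code, not Apple's actual implementation; the `app_t` type, the fields, and `policy_allows` are all invented for illustration of the logic bug described above.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical sketch of the Gatekeeper logic bug described above.
 * The decision "is this a bundle?" hinged on the presence of bundle
 * metadata (Info.plist); an app shipped WITHOUT one fell through to
 * the non-bundle path, where the allow flag was set unconditionally. */

typedef struct {
    bool has_info_plist;   /* bundle metadata present? */
    bool is_verified;      /* signed/notarized by Apple? */
} app_t;

/* Returns true if execution is allowed without a Gatekeeper block. */
bool policy_allows(const app_t *app) {
    if (app->has_info_plist) {
        /* Treated as a bundle: the real verification checks run. */
        return app->is_verified;
    }
    /* Misclassified as "not a bundle": allow flag set unconditionally,
     * so an unsigned script-based app with no Info.plist sails through. */
    return true;
}
```

The fix direction is the obvious one: a missing Info.plist should make the app fail closed, not skip the bundle checks entirely.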
We did have another blog post that talked about this issue that I think was a little bit unaware of the implications, because it's not like they just don't care about scripts. It's not like, if you just send an application to be installed and it contains a script, they go, okay, we'll just run that. Yeah. What actually happened was an internal confusion that caused a path to get taken when it shouldn't
have. So Cedric Owens was the one who originally found the issue, although he didn't root-cause exactly how it was happening; just that he could make these apps that didn't get blocked by Gatekeeper. So he ends up talking about how he made them, and that has the focus on the actual script there, whereas the post that we're actually using, from objective-see.com, focuses a lot more on the root cause, or at least has more information about the root cause
here. Yeah. So it's totally fair that they reported that issue and didn't do the deep root-cause analysis, and they reference this Objective-See post in their blog post, which is actually how I got to it. But yeah, I just want to make it clear: it's not like macOS was just, okay, we don't care about scripts inside of application bundles. They do try to protect against that; it's just that there was an edge case, basically, when it came to checking whether or not the application was a bundle. So yeah, it was a little bit complex in terms of the state management. This post is extremely long, like fourteen thousand words, so obviously we're not going to be covering everything in it. It goes really deep into the background of Gatekeeper and its internals, which is very interesting, but a lot of it isn't necessary for understanding the core issue. So if you want that background information, the post is there for you and you can scroll through it. That said, if you're just looking for the issue, you should probably skip to the last couple of sections and the conclusion; that'll get you pretty much where you need to go if you want more information on the bug itself. But yeah, it basically comes down to a subtle logic bug. So yeah, pretty cool attack, and it's fun to see macOS stuff, because it's one of those targets we don't get to see super often; it's kind of the forgotten one on the podcast. All right, so we have another Talos post this week as well. This one is a Linux kernel info leak through procfs, through the /proc/<pid>/syscall file. The kernel does have to be configured with the CONFIG_HAVE_ARCH_TRACEHOOK option set, which I think defaults to y in a lot of distributions. That said, I meant to go and check but I totally forgot, so sorry.
But I believe that should be a default config. Anyway, regardless, that config does have to be enabled, so if the target has a kernel config without it, this isn't a viable attack. The issue is an info leak due to truncation on 32-bit systems, fairly straightforward. The object that gets sent out contains fields for six argument registers, as u64s. On 32-bit systems, though, they just leave the unused upper 32 bits uninitialized and truncate the write to a 32-bit register. It's an issue because the /proc/<pid>/syscall handler doesn't account for 32-bit versus 64-bit; it just uses the long long format specifier regardless. So you end up getting four bytes for each argument register leaked from the stack, 24 bytes of uninitialized data in total. So, a stack-based info leak. This is one of those issues that would be killed by one of the recent Linux kernel security efforts we talked about, where they automatically zero objects that are put on the stack. But yeah, that's just something I thought I'd throw out there. The issue was introduced in 5.1-rc4 and was patched on December 3rd. I will say, as far as info leaks go, this isn't the greatest kind of info leak you can have as an attacker: 24 bytes that aren't even sequential, they're kind of broken up. That's not an awesome info leak; there are definitely better ones out there. But I could see this being exploitable if you can manage to get a kernel pointer allocated in a slot that you can leak, and you can disclose the upper 32 bits of that pointer; that can be extremely useful, especially for a kernel heap address or a kernel text address, since the ASLR actually happens in those upper bits. So yeah, this could totally be useful, but it is kind of not the greatest info leak.
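The truncation pattern behind that leak can be sketched in a few lines of C. This is a simplified illustration, not the actual kernel code; the function name is invented, and it assumes a little-endian machine, where copying four bytes into a `uint64_t` only initializes the low half.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Sketch of the /proc/<pid>/syscall truncation bug described above.
 * The output array holds six u64 argument slots, but on 32-bit the
 * store is effectively a 32-bit write: only the low half of each slot
 * is initialized. Whatever stale stack data sat in the upper half
 * leaks when the whole u64 is later printed with "%llx". */

#define NR_ARGS 6

/* Simulates filling the args array on a 32-bit kernel. Assumes
 * little-endian: the 4 copied bytes land in the low 32 bits. */
void fill_args_32bit(uint64_t args[NR_ARGS], const uint32_t regs[NR_ARGS]) {
    for (int i = 0; i < NR_ARGS; i++) {
        /* Only 4 of the 8 bytes are written; upper half stays stale. */
        memcpy(&args[i], &regs[i], sizeof(uint32_t));
    }
}
```

The kernel's blanket fix of zero-initializing stack objects kills this whole pattern, because the "stale" upper halves would always read as zero.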
If you are exploiting it... sorry, zi, you were going to say something there.
Oh well, I was going to say pretty much what you just said: the use case for it is being able to leak the upper bits, or potentially getting some upper bits
there. Yeah, you can probably get pretty creative with this exploit in terms of how you leak stuff. I will say, from past experience, working with stack-based info leaks is not fun, because you're basically tied to however the application is compiled and what you can call to line up something to leak; you're kind of limited in what you can do, and you basically have to brute force to try to find something useful to leak. So this could make for an interesting exploit if somebody wanted to take it and try to chain it with something else. Yeah, it would be interesting to see how they'd abuse the leak, because those things are always fun to see; you have to put in a lot of work to exploit a bug like this. So it's one of those situations where the bug is really simple, and it's easy to see the results, like, hey, I have uninitialized kernel data here, but in terms of actually using it, you're going to have to put in a lot of work. Unfortunately, though, since this is Talos, they don't really go too deep into that process. This blog post was a little more in-depth than they usually go on some of the previous reports we've covered; that's probably just due to the open source nature of the Linux kernel. Although, browsers are open source too, and they didn't really go deep there either. They go into the bug, they show the crash report, but they don't really talk much about the exploitability and how it can be used. Which, especially for this issue, is totally fair, because on its own this issue doesn't really get you anything; you would have to chain it with something else. So yeah, it's hard to really do a write-up on the exploitability unless you use a toy example or something to demonstrate with.
So we have the Tesla topic that somebody was asking about earlier: a remotely accessible Tesla exploit in this week's episode, from Kunnamon, I think is how you say it, which details an exploit that was submitted to Pwn2Own last year. This exploit was also detailed in a white paper and a talk given at CanSecWest 2021, so there are slides available as well as the VOD for that conference presentation. The white paper is probably the better medium if you just want the exploit details. When I looked into the slides, it seemed like they had a lot of build-up to the issue and talked about other issues that were found with Tesla, which is totally fine; I'm not saying that's useless or something, there are some very cool details there, and I will probably check out the talk as well. But if you just want the exploit details, the white paper is probably what you want. It starts off detailing the entry through daemons running on the service Wi-Fi network, which uses hardcoded WPA2-PSK credentials that are used on Model 3s, and also on the Model S and Model X. They came across a daemon that originally came from Intel called ConnMan, which, I gotta say, is a great name, props for that; it's a lightweight connection manager. Basically, they wrote a quick fuzzing harness, fuzzed it, and found crashes almost instantly, which isn't really too surprising; this is a target that probably hasn't been picked over too well. They detail two of the issues they found. The first is a stack-based out-of-bounds write when parsing domain names while proxying DNS. They have a fixed-size memcpy of 0x10 bytes into a destination pointer, but it doesn't do a bounds check on that destination pointer to ensure it's within the target buffer's memory. So that can give you an out-of-bounds write on the stack.
Stack cookies are in play here, though, so it's not a super straightforward exploit where you just smash the stack, and they had to get around this using a clever trick with how the uncompress function works, which I think you found really interesting, zi.
I thought the trick was interesting. That said, I thought this was in DNS code. Oh, a DNS proxy? Okay, I missed that; I was a little bit confused about where you were getting the proxy from. Okay, so DNS proxy. Anyway, yeah, I thought it was interesting. I've actually got the code up here on stream now. They end up iterating over the records, decompressing them, and copying them out into their temporary buffer, and the issue, like this linear overwrite Specter was just mentioning, comes from the memcpy on line 41: it's just a fixed size, and they don't do any bounds check there. Whereas for the first copy, they're doing a strncpy with the remaining buffer size, and then they increment
the pointer position further along. So the trick they ended up using here took advantage of a subtle bug: they would calculate the length by calling strlen on the name, but when they actually copy it into the buffer, they're using strncpy, where they pass in the remaining buffer size. So they could have a discrepancy between how much they increment the pointer by and how much strncpy actually copied out. With that, they could have the pointer increment further than the buffer was actually written, and using that they were able to skip over the canary and just write the saved return address. So I thought that was a neat trick to go about this, built on just a little subtle
bug. Yes, sorry, I just had to step away for a second. So yeah, the second, less interesting bug that they detailed, but which was still necessary, was in the DHCP packet processing. By sending an option packet with no option data, you could basically leak data through that uninitialized option data, because the packet's backing buffer is never zeroed; it's not memset or whatever. So that gave them an info leak to leak return addresses, bypass ASLR, and full-chain it for RCE. That gave them remote code execution in the connection manager, which let them perform various actions, like accessing the car's infotainment system while it was parked; they could unlock the doors, pop the trunk, all that kind of stuff. It didn't compromise drive control, which is a good thing, but it basically gave you access to anything else. So yeah, a pretty impactful attack, full RCE, and like you said, I think the exploit strategy was pretty cool. I don't know if I've ever seen anything like that; I can't think of an exploit where I've seen the use of that trick,
it's kind of a special situation where you're actually able to increment past it, versus just having a linear overwrite, right? Yeah, I'm not sure if I've seen it before either. I've definitely seen things messing with pointer increments before; I'm just trying to think whether it was specifically to get around stack canaries, or just to get around a crash or
something. Yeah, it basically comes down to being able to leapfrog that stack canary. And there's this kind of statement I've seen floating around where people think stack canaries completely kill stack-based exploits, like, stack overflows are dead because stack canaries exist. I mean, yes, a straight-up linear stack smash, those kinds of issues are dead. Well, you can still
start corrupting the data on the stack, which maybe you don't have a use for, but the option is there for data-oriented attacks.
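The canary-leapfrog pattern discussed above can be sketched in C, loosely modeled on the ConnMan strncpy trick from earlier in the episode. This is hypothetical illustration code, not ConnMan's actual source; the `dst_t` type and `append_name` are invented, and the buffer size is arbitrary.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Sketch of the bounded-copy / unbounded-advance discrepancy.
 * The copy itself is bounded with strncpy, but the write cursor is
 * advanced by the FULL source length. With a long enough name, the
 * cursor jumps past the end of the buffer while nothing is written
 * there, so a later fixed-size write lands beyond the stack canary
 * instead of smashing through it linearly. */

typedef struct {
    char   buf[16];
    size_t cursor;   /* offset of the next write */
} dst_t;

void append_name(dst_t *dst, const char *name) {
    size_t full_len  = strlen(name);               /* attacker-controlled */
    size_t remaining = dst->cursor < sizeof(dst->buf)
                     ? sizeof(dst->buf) - dst->cursor : 0;
    strncpy(dst->buf + dst->cursor, name, remaining); /* bounded copy */
    dst->cursor += full_len + 1;                      /* UNbounded advance */
}
```

The neat part, as discussed above, is that the out-of-bounds position is reached by pointer arithmetic alone, so the canary between the buffer and the return address is never touched.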
Yep. That's another thing. But people often forget that there are stack-based out-of-bounds writes you can have where you can do that leapfrogging, where you just jump over the stack canary. Those are always fun to cover; it has a special place in my heart, because we actually did that on the PS4 for the 4.55 exploit. I think it was a BPF stack-based bug; well, it was a race condition that led to a stack bug, and that was basically what we did there: we just jumped over the stack cookie entirely to hit the return address. So I love those attacks; they're fun to see. Well, we'll jump into our next issue, though, which is from SSD Secure Disclosure. It's a bug in Netgear R7000 routers, as well as some exploit details and some juicy drama. First we'll talk about the bug, as always, which was a heap overflow in httpd that was submitted through SSD. The issue is improperly restricting the HTTP POST request payload against the Content-Length. As an attacker, you control the Content-Length, which is used to allocate a heap buffer that then gets the data copied into it through memcpy. Because you can control the data and the Content-Length separately, you can cause a discrepancy, which gives you the heap overflow scenario. For exploitation, they attack the allocator itself with a fastbin attack: they basically corrupt the fastbin to return a chunk with a controlled pointer, which you can use to get an arbitrary write. To do this, though, you need some circumstances lined up. You need two allocations from the same fastbin made in succession, and they also can't exceed the fastbin size threshold, or else the allocator will just consolidate chunks together to satisfy the request.
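The Content-Length discrepancy described above is a classic bug class, and the missing check is a one-liner. Here's a hedged sketch (hypothetical code, not Netgear's httpd; `read_body` and its signature are invented) showing the flawed pattern in comments and the fixed version in code:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Sketch of the Content-Length bug class described above. The flawed
 * pattern was roughly:
 *     buf = malloc(content_length);        // sized from the header
 *     memcpy(buf, body, body_len);         // sized from the body
 * The attacker controls both values independently, so body_len can
 * exceed content_length and overflow the heap chunk. The fix is a
 * single consistency check before the copy. */

char *read_body(size_t content_length, const char *body, size_t body_len) {
    /* The missing check: the body must fit the advertised length. */
    if (body_len > content_length)
        return NULL;                       /* reject the mismatch */

    char *buf = malloc(content_length + 1);
    if (!buf)
        return NULL;
    memcpy(buf, body, body_len);           /* now provably in bounds */
    buf[body_len] = '\0';
    return buf;                            /* caller frees */
}
```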
They found an endpoint that satisfied the first condition but not the second: it would take arbitrary file uploads and do two allocations in succession, but I believe one of the allocations was something like 0x10000 bytes, which is obviously way over the threshold for a fastbin. This is another case where they do a pretty interesting trick, specific to the scenario they were able to take advantage of, and that was abusing their heap corruption to cause another issue in the uClibc free function. So this is where, earlier, I was talking about uClibc; this was the topic I was thinking of. In the free function, if you try to free an eight-byte chunk, it leads to an out-of-bounds write with an index that evaluates to negative 1. The reason is that normally you can't create eight-byte chunks, I don't believe; uClibc has a minimum chunk size of 16 bytes. But because you've already broken the integrity of the heap, you can forge an eight-byte chunk, and that's how you create this issue where there previously wasn't one in the free function. So they use that to smash the max_fast field of the malloc state object, which happens to line up with the negative-1 index into the array. By smashing that threshold, even though there's a 0x10000-byte allocation, it still ends up going into a fastbin, and they combine that to overwrite the GOT entry for any function they want, to get code execution. So yeah, another bug where the issue is pretty simple.
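The negative-1 indexing trick described above falls out of the arithmetic in the fastbin index computation. Here's a sketch of the idea; the struct layout is simplified from uClibc's actual malloc state (field names and sizes are illustrative, and the overlap assumes no padding between the fields), but the index formula follows uClibc's `(size >> 3) - 2` style macro.

```c
#include <assert.h>
#include <stddef.h>

/* Sketch of the uClibc trick described above. fastbin_index() maps a
 * chunk size to a slot in the fastbins array. The minimum legal chunk
 * is 16 bytes, giving index 0; a FORGED 8-byte chunk indexes to -1,
 * and fastbins[-1] aliases the max_fast field sitting just before the
 * array in the malloc state. Freeing the forged chunk therefore writes
 * a heap pointer over max_fast, raising the fastbin size threshold. */

typedef struct {
    size_t max_fast;         /* field immediately before the bins */
    void  *fastbins[8];      /* fastbins[-1] overlaps max_fast */
} mstate_sketch;

/* uClibc-style index computation: (size >> 3) - 2 */
ptrdiff_t fastbin_index(size_t size) {
    return (ptrdiff_t)(size >> 3) - 2;
}
```

With max_fast clobbered to a huge value, even the endpoint's 0x10000-byte allocation gets fastbin treatment, which is what makes the fastbin attack land.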
But the exploit strategy was really interesting and unique. So
yeah. The choice of being able to target the max_fast was interesting. I mean, the actual heap exploitation, overwriting the forward pointer, is standard fare; that's kind of what you're targeting whenever you get a corruption in the fastbin, like a fastbin dup will target that. So fairly standard there. But yeah, going for max_fast and then the negative-1 index, I think, is a neat
trick. Yeah, Kotori1965 from the chat said: I'm salty, I looked at that binary and missed it all. I kind of resonate with that; with some of the targets I've looked at, when I've missed stuff you definitely feel dumb, but it's just one of those things where you're not going to catch everything, right? It happens. But yeah, I think we have one more exploit topic for the show before we get into shoutouts and start wrapping things up. It is a Parallels Desktop post. Parallels Desktop is virtualization software used on macOS, and they have two hypervisors: a proprietary hypervisor, and they can use the macOS hypervisor. What they target in this blog post is the proprietary one from Parallels. So they talk about some of the reverse engineering they had to do, because obviously it's proprietary and closed source, they had to reverse it. Luckily for them, they did have function names; symbols weren't stripped, so that made it a little bit easier. The function they target is the OTG handle generic command function, which gets called through hypercalls from the UEFI firmware. They found two vulnerabilities: a heap overflow, and a time-of-check-time-of-use, basically a double fetch info leak. The heap overflow was the result of the size of UEFI variable names not being validated when passed in commands. So when it reads the variable name into the request in the host, it can overflow, because there are no restrictions on that size. They have a snippet of a panic they obtained by triggering that bug; they don't take it to full exploitation, though. The second issue was the time-of-check-time-of-use, which I thought was a bit more interesting.
It's essentially a double fetch when reading variable names or querying variable info. It does revalidate on write requests, but it doesn't revalidate on read requests, for whatever reason; I guess they just thought reads were safe. Because the value it's fetching is in the guest's kernel memory, which is attacker controlled, they can put a good value in there and then swap it out later for a malicious value. Since it never gets revalidated, they can use that to trigger an out-of-bounds read and get a very useful info leak, which should be chainable with the previous bug for a full escape. They don't demonstrate that attack, but it should be possible.
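The double-fetch shape being described is worth seeing side by side with the fix. This is a hypothetical sketch, not Parallels' code; the function names and the MAX_LEN bound are invented. The key detail is that the shared value lives in guest-controlled memory and can change between the host's two reads.

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the double-fetch (TOCTOU) pattern described above. The
 * request field lives in guest memory the guest can keep mutating. */

#define MAX_LEN 64

/* Vulnerable: two loads from shared guest memory. A guest racing
 * between fetch #1 and fetch #2 can pass the check with a small value,
 * then swap in an out-of-bounds length for the actual use. */
int handle_request_racy(volatile uint32_t *guest_len) {
    if (*guest_len > MAX_LEN)          /* fetch #1: check */
        return -1;
    uint32_t n = *guest_len;           /* fetch #2: use; may differ! */
    return (int)n;                     /* n can now exceed MAX_LEN */
}

/* Fixed: fetch once into host-local storage, then check and use the
 * same snapshot. The guest can no longer change the validated value. */
int handle_request_safe(volatile uint32_t *guest_len) {
    uint32_t n = *guest_len;           /* single fetch */
    if (n > MAX_LEN)
        return -1;
    return (int)n;
}
```

The race itself can't fire in a single-threaded test, but the structural difference is the whole bug: one load versus two.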
It's not necessarily that it doesn't validate; it's that it relies on the userland side to do the validation. When it gets that status zero back, that's when it goes, okay, it's been validated by the userland side saying it's okay, but there's a race in
between. Yeah. I mean, I love these types of issues because I love race conditions in general. But double fetches are unfortunately something you don't see super often anymore; you used to see them in kernels, but
actually, the last time we talked about a hypervisor, it was another double fetch. We've definitely talked about some double fetches related to hypervisors, so I feel like that's something to look for.
Yep. I was just about to get to that. In kernels you used to see them, but since the mitigations came along for directly dereferencing user memory, they're pretty rare nowadays, pretty much extinct. Most of the time, getting data from the user is written in a fairly secure way, because you're forced to use the APIs, so it grabs the value once and stores it in a kernel-controlled buffer. But like you were just saying, hypervisors seem to be the one area where double fetches still seem to be a viable bug class. So if you're looking to do hypervisor research, it's definitely one you should keep in mind. And I feel like we say the same things a lot on this show, because we do, but double fetches are another reason why centralizing code is important. The way this class of issues was killed in the Linux kernel, the FreeBSD kernel, whatever, is that they centralized around needing those APIs to cross the kernel/user boundary. So it seems interesting that they don't already try to do something like that with the hypervisor. I guess it is a little harder because you're dealing with two separate systems instead of just kernel/userland isolation, but centralizing everything into an API for fetching untrusted data is the best way to work around these double fetch issues. It's just another point in favor of code centralization, as you call it, zi, and it shows how important it is for defense in depth. But yeah, that summarizes all the exploit topics we've had for this week. We do have a few show notes as well; I think all of them are yours, zi, so you take them away.
God, lots of shoutouts this week, actually; like, we got five.
So, yeah, we won't be doing longer segments
for that. Yeah. Well, I'll go through them kind of quickly here. First one: Exploiting Undocumented Hardware Blocks in the LPC55S69. We were going to include this as part of the episode; unfortunately the episode's kind of running long. I don't like to cut content just because it's running long, but it's also an area that I don't think we're going to be able to comment on very strongly, unfortunately. I actually didn't get a chance to fully read this, but I've seen it getting linked around, so I still wanted to include it. It seems like a really cool write-up; I just can't give a lot of details on what it's about, because I haven't actually read it, and it just came out. So, you know, check it out; hopefully it's interesting. We also have one bug here from Sick Codes, in the Python standard library ipaddress module.
Yes, it is the same bug as we had in netmask, where it would accept IPs with leading zeros and basically read them as decimal, just trimming the leading zeros, rather than treating them as octal. Same issue as we've seen before, but, I mean, it's the Python standard library, so somewhat impactful there. Bishop Fox put out a post that's basically an introduction to software-defined radio, just kind of getting involved with that, looking into it a little, some of the hardware options; really basic introductory coverage. This is an area that kind of interests me. I've owned a HackRF One for several years and haven't really done a lot with it, but I've always kind of been meaning to do something with it, so hopefully I'll get around to that eventually.
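The ipaddress/netmask bug mentioned above has a long-standing C analogue: the BSD `inet_addr()`/`inet_aton()` functions treat a leading zero in an octet as octal, so a parser that instead strips the zero and reads decimal will disagree with them on the same string. That disagreement is the whole bug class (allowlist/SSRF-style confusion between two parsers). A tiny demo:

```c
#include <arpa/inet.h>

/* Demonstrates the parser-disagreement behind the ipaddress/netmask
 * bugs. glibc's inet_addr() parses dotted-quad components with
 * C-number rules, so a leading zero means octal: "010" is 8, not 10.
 * A second parser that trims the zero and reads decimal would resolve
 * the same string to a different host. */

/* Returns 1 if the two dotted-quad strings parse to the same address. */
int same_addr(const char *a, const char *b) {
    return inet_addr(a) == inet_addr(b);
}
```

So "010.8.8.8" is 8.8.8.8 to inet_addr, while the buggy Python code would have read it as 10.8.8.8; whichever component does the access check loses.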
Yeah, for sure. Definitely. Just like the Hak5 stuff that I have that I will definitely use soon.
Hak5. I mean, I could go on about that, but I feel like so many people buy that stuff, like the Pineapple, and it just becomes a novelty because they have no idea what to do with
it. I had some ideas for what I bought from Hak5, but yeah, it's one of those things where I buy it and I know I'm never going to use it.
Yeah. I mean, I got my HackRF free back when they first launched; they gave them out free to the first however many people. So I got it back then, but otherwise I probably wouldn't have bought it.
Okay, so you don't feel as bad, I guess, about not using it, since you didn't pay for it.
Fair. I mean, I still wish I'd used it more, but yeah. And then next, this was heappy, a happy heap editor based on GDB and GEF, basically there to help you understand what the heap looks like a little better, with snapshots. Looks like a really cool tool; I just saw it over the weekend. It could probably help with some CTFs, maybe the DEF CON ones, maybe not; I don't know how much heap there was in this year's, but it looked like an interesting tool to check out. And lastly, I think, LiveQL. So, I've talked a few times about CodeQL throughout the podcast episodes.
Yeah. Anyone who's listened for a while knows zi is a
fan. Yeah. Like, I think it's a cool tech; I'm excited to see where it goes. I have a few issues with it, part of that being the learning resources, though there have definitely been more since GitHub has made it more accessible. One of the things they've done here is LiveQL, where they partner somebody who's very knowledgeable with CodeQL with a security researcher, and they talk back and forth and work on some queries. In this case it was for an Apache Druid vulnerability. I don't remember if we covered that vulnerability on an episode; I know we've talked about Druid before. Either way, a really interesting back and forth; I learned some interesting things about CodeQL that I didn't even know it could do. So I imagine others might find some benefit from it. The link's here, and there's a video that goes along with it, and
that's all the shoutouts. Yeah, I'm just taking a quick look to see if I can remember the vuln, but it's not ringing a bell. Like you said, we've definitely covered Apache Druid at least once; I think we've covered it a few times, but
yeah, I've got it as covered in episode 71, although that's not the same vuln, so maybe not. Yeah. All right.