Bad Patches, Fuzzing Sockets, & 3DS Hacked by Super Mario
This transcript is automatically generated; there will be mistakes.
Hello everyone and welcome to another episode of the Day Zero podcast. I'm Specter, with me is zi. Today we've got some of the big news around the faulty patches introduced into the Linux kernel, the Cellebrite hack-back, and the return of 3DS exploitation, as well as some kernel fuzzing from Ned Williamson at P0. So we have quite a few interesting topics today. We'll start off, as always, with some of the news. This one was big news that circulated around on some channels where I wasn't expecting it, though given the topic it's no surprise, really. It's around the University of Minnesota students getting the university banned from authoring commits into the Linux kernel, and also getting their past commits reverted and flagged for review. The catalyst behind that drama was a paper titled "On the Feasibility of Stealthily Introducing Vulnerabilities in Open-Source Software via Hypocrite Commits," which we have on screen. zi, I'll let you start first here, then I'll go into my thoughts on it.
Yeah, so there are kind of two things about this. So this paper that we're talking about, the stealthily introducing vulnerabilities one — I found out about it, I've actually been looking for it since, I want to say, about November of last year. The author published it initially and then took it back. It's been accepted to Oakland, so it will be officially published after Oakland '21, which is next month. So this research already happened, but just this past month there were some new commits made to the kernel, and that's kind of where all the drama happened. The new commits, at least Greg asserts, are known invalid patches, basically introducing vulnerabilities into the Linux kernel — pretty much what you'd expect given the title of the paper. But like I said, this research already happened, so I have some questions over whether or not this more recent incident was actually related to that. Like I said, the research has already been done, the paper's been accepted, so this would have to be some sort of follow-up paper. That's definitely possible — like, they've decided to do extra research — it just violates some of the things we were told about the original research. With the original research, they made three patches, and at no point did they let commits into the actual Linux kernel. All of the vulnerable patches were kept just within the mailing list. They'd submit to the mailing list for review, but they wouldn't actually let it be committed; immediately once a reviewer signed off on one, they'd be like, here's actually the good commit. So they would make sure they were going forward with a secure commit and not actually introducing this into the codebase itself. This seems to violate that, with whatever happened in this particular case. I should have had the link ready to go.
They submitted the patches, and apparently several of them made it through into stable. It's just — I don't know, it raises questions to me, because it's violating the things that we've been told about the research already. So it's possible that they're just lying; it's just, I don't like to say that academic researchers are straight-up lying, especially when the statements came out well before there was any drama about it. How about you, Specter, what do you think? Do you think it's just
clear-cut? So I can offer a little bit of insight on the patches. Basically, Greg was calling them out because, you know, we've had known bad patches from you before, mentioning that paper. What the patch author rebounded with was the fact that this wasn't actually from that study; this was from a new tool they'd written. It was like a static analyzer, and it automatically generated patches. I was just looking for the
message really quick to bring that up. The patch author actually made that claim — that it was from a static analyzer — well before there was any drama. Before Greg actually came back, you know, saying they were doing this, somebody else had asked, like, hey, did you find this yourself, or was this from a tool? And he said he used that tool. So that story at least is consistent with what the patch author said about a month
ago. Yeah, so I think Greg's argument was basically like, you know, you've been known to submit bad patches in the past, what was I supposed to think seeing more obviously incorrect patches? And then going on to say they don't appreciate being experimented on or tested with patches, either for, you know, that study in the paper of trying to inject vulnerabilities, or for this alleged static analyzer — without even pointing out that that's where the patch came from. Like, they mentioned you should have put "found by tool X, we are not sure if this is correct, please advise." So yeah, it's kind of a weird situation.
I don't know. They say that, like, you should have done that, and they probably should have — I'm not necessarily disagreeing with that. At the same time, that's a pretty minor thing to then jump from "well, you didn't do that" to "therefore this is absolutely, you know, more intentionally bad commits" and this big problem. In fairness, one thing that I will add on here is that the University of Minnesota has not contradicted this claim, and I feel like if it really were just a static analysis tool somebody wrote, and they tried to commit with it, and then all these problems happened — had it been that, I feel like they would have been able to state that already and offer some kind of proof in that regard. The fact that they haven't offered any contradictory statement makes me think that Greg is probably correct here about what was going on. But there are still at least a few questions, at least to me, about whether or not this would be part of some further research. At the very least, it's not part of the original research. I don't know — it seems like with the first research paper, they took a number of good steps to make sure they weren't creating too many problems, and then this one just seems to throw that away. Something just doesn't add up to
me. It seems to be a lack of oversight on the university side of things. So there has been almost, like, unanimous support for the Linux maintainers here — support for the banning of the university, and support for the university ceasing the study and doing their own internal investigation, because they were basically pressured into doing that. I will say, if this was a result of that experiment of trying to introduce bad patches, it is kind of reckless to do that on the Linux kernel, which is used on, like, millions of devices worldwide.
Again, they took the steps to make sure that the vulnerable patches weren't going to be introduced. Like I said, they didn't create any commits for them; as soon as somebody signed off on one, they would immediately provide a proper patch for it. So they were taking the appropriate steps to ensure they weren't going to introduce actual bad code — at least with the original research paper, which is the paper that we've got up here. Like I said, that seems to have been violated by the second run, where we don't really know what's going on, where we don't necessarily know it was part of the research. But at the very least, like, I agree they shouldn't be introducing bad commits. It just goes against everything that they said in the original paper, and that makes me think that something else was going
on. It almost seems like it could have been, like, a rogue researcher or something.
Why use the Minnesota email? Which is actually another thing: all of the commits during the research came from just anonymous Gmail accounts. They did not use their University of Minnesota email, whereas these commits — the ones that are getting banned — came from University of
Minnesota. Yeah, it's really strange. Like I said, there has been almost unanimous support for the Linux maintainers here. That said, I have seen a few arguments go against the grain a bit. One of them was what you were kind of bringing up there: the fact that we don't really have all the information, and it seems like there are people that are very quick to jump on and just, like, bring out the pitchforks, basically, whereas it doesn't seem as cut-and-dry, I guess, as to what the situation actually was that led to this problem. Another argument that I saw a few times was that, okay, punish the researchers that were responsible for authoring bad commits, but you shouldn't be punishing the whole university. It seems like they're going for a, you know, "blanket punishment," I guess, was the term that I saw thrown around,
especially since the commits in the study didn't even come from University of Minnesota email addresses. Like, banning University of Minnesota email addresses is a bit excessive. I agree with not wanting to be experimented on; at the same time, I do think the research potentially had some value. I think this paper actually kind of falls a little bit flat in terms of that, though. Basically, they introduced three patches, the patches got in successfully, and their recommendations on mitigations are, like, well, you should update the code of conduct — because the code of conduct is definitely going to prevent anybody from, you know, doing these sorts of commits. Like, it's an interesting concept, because what they did was commits that introduced the conditions that would allow for a vulnerability to happen. So none of their code itself would actually be in the call trace of, like, a vulnerability happening, or a use-after-free happening; it would just introduce the conditions that ultimately allowed it to happen.
It was kind of like stealthily
introducing them. Yes, well, that's right in the name there. Like, it's an interesting concept, and I think studying the patch review process is a fair thing to look at. I think they could have gone about it better, though — like involving somebody from the maintainer side, a senior maintainer or whatever, somebody on that end who knows what's going on, more like a penetration test, where somebody on the other side is involved and is able to step in and make sure, you know, things aren't going too crazy. Like, they definitely could have approached this a lot better than just going at it, but it's not that I think the research should just never happen.
Yeah, so I think overall, if I had to sum up my thoughts on this issue, I do think there were a lot of people that were a little bit too quick to jump the gun. I kind of fall on that side — of people being too quick to persecute — and it's a little bit worrying, because, like, when we've been covering this issue, you can tell it's a very weird set of circumstances. It's not as simple as people were making it, like the university was intentionally running a study to try to submit bad patches, just because of that paper that was written. Like you said, the situation kind of conflicts with what they had set up in the paper. So I feel like we don't have enough information, from the public perspective, to be able to make a really solid case either way. So yeah, it did seem a little bit strange to me, but so many people were willing to go directly to the pitchfork angle, where the situation was just kind of hard to read into. That said, I do kind of respect why maintainers would be upset about receiving these bad patches. Maintaining the Linux kernel is already extremely daunting, especially on the security side. They're already kind of stretched thin, because companies don't want to pay for the security-based work on the kernel — this has been documented; they just want new active feature development, they don't really care about the security angle. So by adding additional weight on the maintainers for trying to test, like, your static analyzer or whatever — I do kind of feel bad for the maintainers in that sense. But yeah, just from the public perspective, it's hard to really give an informed opinion, because I just don't think we have the information necessary.
Yeah, that's the thing. Like, I think we're missing something. I don't know — obviously we can't really know what we're missing — but yeah, the whole thing has felt very weird, and obviously it's blown up quite a bit. I mean, some of the patches, at least some of the maintainers have said, are good patches — they actually fix something. And then there's the one here that kind of caused everything, which was what was believed to effectively be an impossible situation. Maybe it is — I mean, that seems to be the case. But yeah, I don't know. I feel for the maintainers not wanting to have their time wasted, but I also feel for, you know, the University of Minnesota here, at least until we know exactly if they were actually introducing malicious commits. If they were, then absolutely, I'm not going to support that by any means. I don't want to come down as supporting that
at all. So it'll be interesting to see if anything publicly comes out from the University of Minnesota's investigation. I think they did make a public statement, but it was very vague, just kind of saying, you know, we're going to start investigating this and we've ceased the study related to the paper — just not really much information there. I don't know if we'll get to see anything else. I could see them just kind of keeping that internalized, or maybe having some back and forth with the Linux maintainers, but just in, like, the mailing list, and if, you know, somebody doesn't pick up on that and share it around, I could see that easily just being lost to time. But yeah, I mean, if there are any updates to this, we'll try to cover them on the podcast, because it kind of sucks, but, like, we don't have all the information, so it's hard to cover super well. That said, we can move into our next topic that also blew up this week, which was exploit-related, and that was a post from Moxie at Signal, who basically did a hack-back, kind of, at Cellebrite. Cellebrite is an Israeli forensics, or phone hacking, company, right? So back in December, Cellebrite announced they'd added support for collecting messages and data from Signal, which is kind of where the hack-back comes from. The blog post goes into some detail about Cellebrite and what they do, noting two primary tools that they use: UFED, for backing up devices onto Windows, kind of like what iTunes does, but also with, like, Android devices and whatnot; and the Physical Analyzer, which takes those backups and parses them to display the data in a browsable format. Incredibly, when Moxie went out for a walk, they saw a Cellebrite toolkit that had fallen off of a truck, which had all the various adapters and dongles needed for using these tools. Yeah, it was almost like a TV show.
This sounds like a very realistic situation, you know, 100%
truth. So yeah, they decided to take advantage of that opportunity to look for
Yeah, it's kind of covering yourself, legally speaking. So yeah, they decided to look for vulnerabilities. They expected the software to be somewhat secure, given the nature of the company, but they found that the opposite was true. For one thing — which is, like, the main premise of the post — it bundled ffmpeg libraries that haven't been built or updated since 2012, which means you wouldn't even have to look for, like, a zero-day in their software; you could just try one of the hundreds of n-days that have come out for ffmpeg since 2012. They vaguely go into some of the issues that could be utilized, or, like, some of the attack scenarios that could be used to attack Cellebrite — for example, planting a malicious file on devices to get scanned, to modify all reports that are generated by Cellebrite, both for devices that were previously scanned and for future scanned devices, and there's, like, no integrity checking on those reports. They then also have this cheeky quote about how they'd be willing to responsibly disclose the vulnerabilities they know about if Cellebrite does the same for all the vulns they use in their physical extraction, which obviously isn't going to happen — I mean, that's their entire business model, breaking into and recovering data out of those phones. They also talk about some of the various other DLLs being used, and how they likely don't have the rights to redistribute those in their software — like, I think there were various Apple DLLs. But that's by the by; that's kind of just tacked on at the end, not really relevant to the central idea of the post. But yeah, zi, I remember you were kind of mentioning the legal implications of the research being on kind of shaky ground.
Well, just because you'd have to agree to terms, and all those terms are going to include, like, you can't reverse engineer it, you can't look into it — it's pretty standard to see that. I will say that Signal has also had their completely unrelated news that upcoming versions of Signal will be periodically fetching files to place within app storage, files that are never used for anything — but, you know, completely unrelated to the fact that those files could compromise Cellebrite. On the legal side, because Cellebrite is also going to be used for evidence collection, this calls into question pretty much everything that it collects, if it can be compromised so easily. Like, you know, if you're able to get code execution on the device, you're able to write your own files and basically modify or change anything that it reports. I think for a while here — I mean, I'm not a lawyer — it's going to be something any lawyer could go ahead and use to basically dismiss anything that it finds as potentially having been planted, at least until they can determine some forensic proof of whether it was actually compromised, some sort of indicators of that compromise.
Okay. So yeah, I mean, I saw some interesting takes on the Signal stuff, like in the Twitterverse or whatever. I saw quite a bit of support for Moxie, saying, you know, kind of go get them, I guess. There were also some people who were, I guess, on the other side of the fence.
Yeah, I mean, I see no problem with it. On one side, Cellebrite isn't exactly playing fair; Signal's kind of just playing the same game, so I don't see a problem with that. They do mention that they expected the security software to have, you know, better security. It's been my experience that security software quite often has some bad code. It's weird, but the more secure you'd expect something to be, the worse it seems to be — just from my experience in consulting and stuff, having dealt with some security products. Like, you know, I'd expect a lot better from some of these companies, and they tend to be the
worst offenders. Yes — you can coin, like, a law for that one, I guess. I'm sure somebody's probably done that already, but, you know, we'll just pretend like you did it first. But yeah, it's an interesting scenario. I did see some people, I guess, questioning Signal over placing these files on the device. They do note that the files are never used for anything inside Signal and never interact with Signal software or data. But yeah, I did see some people that kind of latched on to that point and were a bit worried. I don't know — I think people were kind of blowing that out of proportion, just taking it in isolation, without any context. So,
I'm not even sure that Signal would actually be placing any files on there, or if they're just stating that
they will. Yeah, it could just be kind of, like, a throwaway line.
Even the statement that they will calls into question the results from Cellebrite, because of the potential for it to have happened. It doesn't matter if they actually do it; they're calling the results into question just by stating that they might. Yeah.
Alright, so we'll move into our next exploit, which involves a privilege escalation on Ubuntu through OverlayFS. The issue has to do with permission checking when it comes to setting file capabilities, and tying those permissions to namespaces. Essentially, the kernel tries to ensure that whatever file you're attempting to set capabilities on, you have permission to, like, modify that file in that filesystem. So if you create a user namespace and then mount a filesystem, you're able to set capabilities on those files — but not on a mount in, say, the root user namespace; that's privileged. The problem here seems to be that OverlayFS bypasses the setxattr routine that would normally check for those permissions. It just skips to the underlying routine that gets called; it doesn't use the wrapper routine, which is vfs_setxattr. So because of that, you could set arbitrary capabilities outside of the outer namespace, which they use to set capabilities on their own process, and they give themselves, like, root capabilities — CAP_SYS_ADMIN, CAP_NET_ADMIN, everything essentially — which is why you get that root LPE aspect. So this is
a fun vulnerability. It only impacts Ubuntu, though — it's the only one where you're able to do this as just, like, your standard user by
default. Yeah. So this affects Ubuntu 20.10 as well as all the recent LTS versions — a pretty impactful bug. There are some potential mitigating factors: if for whatever reason you couldn't patch, and you don't use user namespaces for anything, disabling user namespaces is probably a good step to take just in general, since those are on by default and they can expose some pretty powerful attack surface. So yeah, it's just a good idea in general to disable unprivileged users being able to create user namespaces. But yeah, it basically just comes down to that check being too high up in the call stack, which is interesting, because there might have been other filesystems that could have been used to get around this too, either currently or maybe in a future filesystem. So the way they fixed this was kind of smart, because what they did was add the capability checking, or the permission checking, into vfs_setxattr itself. So it's kind of future-proofed against any future types of issues that could be found as well. So yeah, smart fix, and an interesting problem — one that I don't think we see too often, but I think the root cause is pretty common: just that idea of a developer writing code and thinking, well, I don't need to use this wrapper, I'll just call the underlying code, and not realizing the implications of doing that. It's just kind of easy to accidentally subvert security mechanisms, and that's essentially what happened here.
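To make "setting arbitrary file capabilities" a bit more concrete, here's a hedged Python sketch of the payload side of such a setxattr call: the binary blob stored in the `security.capability` extended attribute. This is not the exploit itself (a working chain also needs an unprivileged user namespace and an OverlayFS mount over the target); it just shows what the data the bypassed check was supposed to police looks like, following the kernel's revision-2 `vfs_cap_data` layout.

```python
import struct

# Constants as defined in the kernel's include/uapi/linux/capability.h
VFS_CAP_REVISION_2 = 0x02000000
VFS_CAP_FLAGS_EFFECTIVE = 0x000001
CAP_NET_ADMIN = 12
CAP_SYS_ADMIN = 21

def encode_vfs_cap_data(caps, effective=True):
    """Build the blob stored in the security.capability xattr.

    Revision-2 layout: a little-endian magic word, then two
    {permitted, inheritable} pairs covering capability bits 0-63.
    """
    mask = 0
    for cap in caps:
        mask |= 1 << cap
    magic = VFS_CAP_REVISION_2 | (VFS_CAP_FLAGS_EFFECTIVE if effective else 0)
    return struct.pack(
        "<5I",
        magic,
        mask & 0xFFFFFFFF,          # permitted, low 32 bits
        mask & 0xFFFFFFFF,          # inheritable, low 32 bits
        (mask >> 32) & 0xFFFFFFFF,  # permitted, high 32 bits
        (mask >> 32) & 0xFFFFFFFF,  # inheritable, high 32 bits
    )

blob = encode_vfs_cap_data([CAP_NET_ADMIN, CAP_SYS_ADMIN])

# The exploit's core primitive is then roughly:
#   os.setxattr("/path/on/overlayfs/file", "security.capability", blob)
# which, pre-patch, OverlayFS forwarded to the lower filesystem
# without the namespace permission conversion done in the normal
# setxattr path.
```

After the fix, that conversion/permission check happens centrally in vfs_setxattr, so the same call from inside a user namespace gets translated or rejected rather than landing raw on the underlying filesystem.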
Yeah, and I mean, I commonly hit on the point of centralizing security and doing centralized checks. This is just kind of an example of that, you know, where you don't want to force a developer to remember every location where they need to do something. Ideally, the more you kind of automate away and just have done automatically, the better —
at least for security. Yeah. Alright, so our next issue is in Synology DiskStation Manager, which is a modified Linux OS used for Synology NAS devices. So, zi, I know you wanted to take this one, so I'll let you go ahead.
Yeah, there isn't too much to say here. I just — I don't think we've talked about AppArmor on the podcast before. Actually, I think we did once or twice, so we probably have. This was just kind of a funny issue: the Synology search agent has kind of a misconfiguration within its AppArmor profile. If you're not familiar with AppArmor, the idea is you create this AppArmor profile, and that kind of specifies what the application can do, what it can access. It's kind of an easier-to-set-up version of, like, SELinux — AppArmor profiles aren't painless to set up by any means, but they are a little bit easier at least. So it has this profile, but anyway, what they found, quite simply, was that the profile allowed them to just load new kernel modules, kind of defeating everything, because you can just load anything and have kernel code execution. There's no more vulnerability, there's no trick to it — it was just a bad profile. It's kind of a stupid vulnerability.
That's a bizarrely large hole to have. Like, if I was trying to set up policies to, you know, sandbox unprivileged users, I think the first thing I would think of is: let's not let them load arbitrary kernel modules. So
by default, AppArmor, like, blocks everything, and then you can either write the policy manually, or kind of look through the audit log, approve certain actions, and have them get added to the policy. So most likely somebody just kind of approved this very general sort of rule into it, and that ended up allowing it. Fair enough — it kind of got bit by some of the setup options, I
guess. On what you mentioned earlier about AppArmor: I just did a quick, like, note search, and we haven't talked about AppArmor as far as I can tell. We have talked about FruityArmor, but that is a very different thing. So yeah, just thought I'd throw that in there. But yeah, once you have kernel code execution, like you said, you can basically do whatever you want. There's no, like, hypervisor or anything preventing you from taking full control of the system — the kernel gives you everything. So, we have the return of GitLab issues in this episode. For some reason HackerOne is kind of failing to load for me — I don't know; okay, there we go, a refresh fixed it. But yeah, we have the return of GitLab issues and the return of Kramdown, which is used when rendering Markdown in wiki pages. zi, I'll let you take this one away as well, because I was a little bit late getting to this one.
Yeah, so this one — I believe we've had an RCE in Kramdown before. This one's definitely a different person; I think it was devcraft, or somebody at devcraft, that we talked about last time with Kramdown. Kramdown is, like, the Markdown processor or parser. So in GitLab you might have, you know, a Markdown file — or GitHub, sorry — okay, sorry, it is GitLab. I got confused there because they talk about calling the GitHub Markup renderer, so it looks like they're using that gem, but anyway, it's GitLab. What Kramdown ends up allowing you to do is actually set options for the parser inline, within your actual Markdown file, and they show the example of that just being, like, the colon-colon options syntax, and then you can set whatever you want there. In particular, GitLab uses Rouge as the syntax highlighter, and Kramdown supports these syntax highlighter options. So what ends up happening is that you can specify the formatter, which ends up being loaded as a class — you can specify what class it's going to try to use as the formatter. The class name does have to match this pattern — it starts with an uppercase character and then has alphanumeric and underscore characters only — but most classes are going to follow that, so that's not a huge deal when you can effectively get any class instantiated there. And then you can also control the options that get passed into it; you'll notice it just takes another object there, with various options that get loaded in. So what they ended up doing here for code execution, effectively, is they used the Redis class, gave it a driver option, which can end up letting them tell it what file to load — basically another Ruby file to load as a driver — and they were able to just upload an arbitrary Ruby file. In GitLab you can upload files, and you can know where they end up just based off of the general code, so not a huge deal.
We've kind of talked a little bit about that before, but they were able to upload a file, use that as the driver for Redis, and get their Ruby code execution. It's an interesting attack surface. I didn't know you'd be able to set the options there, or that it would even go so far as instantiating a class based on the name without much checking. That kind of seems like a very questionable way to go about it; at the same time, if you're trying to support, you know, other people developing their own formatters, you don't have a lot of choices there — you've just got to support whatever. So I thought it was an interesting way to go about it, fairly straightforward on the whole.
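The unsafe pattern at the heart of this — resolving a user-supplied formatter name to a class and handing it user-supplied options — is easy to model outside of Ruby. Here's a small Python sketch of it; the class names, the registry, and the `driver` option are all illustrative stand-ins, not Rouge's or GitLab's actual code:

```python
import re

class SafeFormatter:
    """The formatter the application intends users to pick."""
    def __init__(self, **opts):
        self.opts = opts

class Redis:
    """Stands in for an unrelated-but-reachable class. In the real
    chain, an attacker-controlled 'driver' option caused an arbitrary
    uploaded Ruby file to be loaded."""
    def __init__(self, driver=None, **opts):
        self.driver = driver

# Any class visible to the lookup is reachable, not just formatters.
REGISTRY = {cls.__name__: cls for cls in (SafeFormatter, Redis)}

def make_formatter(name, options):
    # The same shape of check described in the writeup: an uppercase
    # letter followed by word characters. It filters syntax, not intent.
    if not re.fullmatch(r"[A-Z][A-Za-z0-9_]*", name):
        raise ValueError("bad class name")
    return REGISTRY[name](**options)

# Attacker-controlled input arriving via inline parser options:
fmt = make_formatter("Redis", {"driver": "/uploads/evil.rb"})
```

The name check passes because "Redis" is a perfectly well-formed class name; the problem is that the lookup space was never restricted to formatters in the first place, which is why controlling both the class and its constructor options is enough.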
So I thought it was kind of funny when you got GitLab and GitHub mixed up, because last time we covered Kramdown — which was on episode 50, by the way — GitHub Pages was where the issue happened. The topic name was "GitHub Pages multiple RCEs via insecure Kramdown configuration." So yeah, it's kind of used in both GitHub and GitLab; that was a funny point that I thought I'd bring up. But yeah, I mean, it seems like a pretty interesting attack. I did kind of wonder one thing: it says the asset that would get hit is your own GitLab instance. So would this not affect, like, gitlab.com, the managed instances? I just wanted to verify that, because the bounty they paid out was pretty big — it was $20,000 — but I just wanted to make sure it was only on the, like, self-hosted GitLab instances.
You know, I didn't look into that, actually, so I can't comment on whether or not it is only that. I assumed this was hitting all of them — like, I assumed this was hitting GitLab in general. That said, the file upload thing that I mentioned — especially since you're mentioning that last time we covered this it was on GitHub — I might be mixing in some of the particulars of that one's file upload system. So that might be why.
Yeah, I guess the reason I ask is because when I was looking at this issue, it seemed very general; it didn't seem like something that was specific to the self-hosted instances, so
it doesn't appear to be, to me. Like, reading this, it sounds like something that could hit pretty much any of them. But yeah, I really don't know, because you're raising the question, like — yeah, it was just something I kind of
noticed in the meta details of the HackerOne report. I will quickly cite from chat: Rudy Mel says, are researchers allowed to test against the main instances? That is a fair point to bring up. It's possible that this affects gitlab.com as well; the researcher just didn't test it, because they didn't think they were allowed to test directly on gitlab.com — though that would seem kind of weird to allow on their part. I'll just
quickly check the — yeah, I've pulled up the HackerOne page to see: "Testing on gitlab.com: your hackerone.com address must be associated with the testing account." So you can do some testing right on
gitlab.com. Okay, interesting. Yeah, I'm just reading that now too. Alright, so yeah, I'm not sure, but basically it's possible that this could hit gitlab.com — I'm just not totally sure, and they don't really clarify that in the report, unfortunately. So yeah, that's kind of speculation, but
they do have rules regarding your testing — you have to take care when testing things that might compromise the privacy of other accounts. So code execution would be one of those cases where you'd kind of take care. I imagine a lot of people would just test on, like, the self-hosted version, because it is a bit more straightforward to do.
Better to be safe than sorry, basically. So yeah. But yeah, this seems like an attack that could potentially hit gitlab.com, so like I said, these issues specifically don't seem to be exclusive to self-hosted. Anyway, continuing with issues involving Git, we have an issue that affects Homebrew, which is a popular package manager on macOS, very useful, I've used it quite a few times myself. Specifically, it's in the continuous integration pipeline on GitHub, through GitHub Actions. One of the workflows they have set up is this review workflow, which fetches the pull request changes in the form of a diff file and parses it to perform various checks to see if the pull request can be approved. So just checking if, like, the number of additions and deletions match, if it's only modifying one file, and it also does some other pattern matching on the data. And to do that, it uses git_diff, which is a Ruby library for parsing diff files. The issue here is ultimately in git_diff and how it parses file changes. What it tries to do is parse out the destination file path by checking for the triple plus, the "+++ b/file" pattern, for the destination file. The problem is a malicious pull request can force that pattern to occur by injecting content into a file that, when put into the diff format, would match that pattern. That can allow them to change the destination file path being evaluated and get the pull request accepted when it shouldn't have been. So you could write, like, arbitrary Ruby into a Ruby file such as a Homebrew cask and get it executed upon installation by abusing this vulnerability. This seems like kind of a unique bug. Ultimately, like I said, it's not really Homebrew's fault, it's a bug in the dependency, but Homebrew did take some steps to remedy it. Essentially, they just removed the workflow entirely and removed the automerge GitHub Action.
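To make that pattern concrete, here's a toy Python sketch. This is not the actual git_diff code; the function name and the diff content are invented for illustration. It shows how a parser that trusts any line starting with "+++ b/" as a destination-file header can be steered by an added line whose own content begins with two pluses:

```python
# A naive unified-diff parser that treats every line starting with
# "+++ b/" as a destination-file header (the flaw described above).
def parse_target_files(diff_text):
    targets = []
    for line in diff_text.splitlines():
        if line.startswith("+++ b/"):
            targets.append(line[len("+++ b/"):])
    return targets

# An added line whose content is "++ b/Casks/evil.rb" is rendered in the
# diff with a leading "+", producing "+++ b/Casks/evil.rb" -- which this
# parser cannot tell apart from a real header.
malicious_diff = """--- a/Casks/safe.rb
+++ b/Casks/safe.rb
@@ -1,2 +1,3 @@
 line one
+++ b/Casks/evil.rb
 line two
"""

print(parse_target_files(malicious_diff))  # ['Casks/safe.rb', 'Casks/evil.rb']
```

A more robust parser would be stateful, accepting at most one "+++" header per file section, so the injected line would just be treated as ordinary added content.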
But, yeah, this was ultimately an issue in the library, which still hasn't been fixed. I did check the repo, and an issue has been opened by the researcher, but there's been no activity on it. So who knows when that'll be fixed. It seems like changes are only made every couple of months or so, so it might be a while before that's fixed as a dependency. So if there are other projects out there using that dependency in a similar manner, those could potentially be vulnerable as
well. And I mean, one of the main things that they could be checking here is just whether they've already set the destination path. That "+++" should only happen once per file that's actually going through; it should only see that "+++" once. So yeah, it seems like there should be a fairly easy fix. I always hate saying that, because there's always more than what I'm able to see, but it's a fun attack nonetheless, taking advantage of the fact that new-line additions also start with a plus. So by having a new line that starts with two pluses, you get the triple plus, and you're able to basically have the line ignored as diff metadata and change the file that you're writing to, making it think you're writing to something that it supports, getting you that zero-changes-to-a-safe-file pass. Like I said, it's a very unique situation to have something with quite a parser like this. I could imagine other places maybe doing something similar, but it's very much how they do it here in particular that has very damaging
implications. So, one thing I found very interesting when I was reading through the report was that they asked the researcher to do a PoC by doing a harmless compromise on an existing cask, iTerm I believe they targeted. This is essentially like PoCing a bug in production, which I thought was kind of strange, and I thought it was extending perhaps a little bit too much trust to the researcher. Not that the researcher in this case wasn't a good-faith actor, in this case they were, and the researcher didn't do anything harmful, it just did like a print or whatever, but I'm not sure I like the idea of having a researcher test like that. They were basically like, yeah, we don't want to create a cask just for testing this issue, so just hit one that already exists. Nonetheless, the issue was remedied on Homebrew's side, but I don't necessarily agree with the way they had them PoC the issue, because if it was a bad-faith actor, they could have done some damage with that. It was basically the way an attacker would have taken advantage of it, just not using a malicious payload.
A bad-faith actor could have taken advantage of it without having gone through them in the first place. Yeah, I mean that
that's true, but it's just the way that they asked them to test it that seems so strange to me. I mean, there's a lot of
risk if something goes wrong with the test. Like, in this case it seems fairly straightforward, but there's the risk that they accidentally do something that causes harm or damage unexpectedly. That's where using a test cask would have made more sense. But if they were trying to be malicious, I don't think the fact that they wanted to reuse a cask really would have been where it
came from. Yeah, okay. I mean, like you said, if they really wanted to do damage with this, they would have done it already and they wouldn't have submitted the HackerOne report. But that's the first time I think I've seen that kind of testing-in-production process happen on HackerOne. Someone just said in chat they've been asked to test things in prod before as well. So, okay, fair enough, I guess that's just something I haven't really seen too much of, but if it's a fairly common thing, that's totally fair. It's just something that kind of jumped out at me. It's a bad practice, but it's a practice that exists; you just have to accept it, I guess. All right, so we have the first of two Talos blog posts up next. This one covers two RCEs discovered in Cosori, I think it's pronounced Cosori, I don't know, it's one of those terms I just kind of have to guess at. Cosori's smart air fryers use Wi-Fi to allow users to remotely start and stop cooking, look up recipes, monitor cooking status, stuff like that. Since this is IoT, you know, you don't need to expect too much out of these issues when it comes to complexity. Both issues come down to
what the potential damage could be, you know. I don't know if they'd be in the best spot, but, you know, starting a fire with your device, that's some very real-world
potential. I would hope they would have some kind of defenses against being able to send packets that do something that malicious, in order to
survive, you know, in the firmware, whatever controls some of the heating temperatures and such. I mean, I'm sure they have a limit in place, although if you control the entire state, you control the entire device, so can you get around those if you wanted to? Yeah, you know,
well, on the other hand, you're also potentially starting this at an unexpected time. You know, maybe somebody has something flat, like a piece of paper, sitting on top of it or something; it would burn, and an attacker wouldn't know that. But there are some potential side effects here. Yeah,
so both the issues are from the vector of being able to send packets over TCP, where the packets are encrypted JSON. While it's good that they're using crypto, they're also using symmetric crypto with a static key and IV that's embedded in the firmware. So it's kind of flimsy: the crypto is there, but anyone who can get access to that firmware can just dump it and reuse that key. It's something we've seen before in IoT; I think we've covered this exact issue in smart locks, where they had static keys that anybody who could pull them out could just use on any other smart lock. Getting to the issues, though: one of them was just allowing you to enable developer mode through config packets, which you can use to update the firmware without any authentication. It's basically an unauthenticated backdoor. The other one was a heap overflow when parsing config packets, due to no bounds checking being present when printing a URI to the console on receiving invalid requests. It allocates 104 bytes for that data, so by sending more, using a really large URI string, you can overflow into the heap and get code execution. Again, with these IoT devices the mitigation game is basically non-existent, so yeah, there's not really going to be any
hardening. They don't really talk about the mitigations, I don't think; they just kind of get to the crash. That's a lot of the Talos reports: get to the crash or proof of concept and kind of leave it at that. This one's no different. There was one thing that I didn't entirely understand, it's kind of a minor point, but the sink has a buffer of 104 bytes, yet they say that to overflow, you only need to send an additional 60 bytes in the URL. And they're saying some of what you send will be overwriting, like, EIP at that point, or not EIP, it'll be overflowing into the next thing: the overflow occurs, and the execution pointer is restored to the last four bytes of what you send. But 60 bytes just seems really small considering it's a 104-byte allocation. It looks like this is around the start of what gets printed out, so I wasn't sure if maybe they meant to say 0x60, because it's 0x68 bytes in total. I don't know if you noticed
that. No, I hadn't. I'm just reading it a little bit now to try to... Yeah, I'm not sure. I was just looking at the fatal exception, and I can see they modified the program counter to go to, like, dddd or whatever. So I'm guessing they just threw a cyclic pattern in there and were like, oh, here's where it happened. And it's possible they had a typo there and meant 0x60, but I'm not certain.
Yeah, well, I'm not certain either. And there's also the fact that it's a heap overflow, yet they talk about getting the execution pointer, which usually... well, I mean, it would be if it's calling a function or something from what you overflowed. So I'm not sure on that, but maybe there's just a step that dereferences something in that next buffer you overflow into, and that's kind of where it happens. Probably something like that. Yeah, I don't know, it just threw me off a little bit, seeing just 60 bytes when we're talking about a much larger
buffer. Yeah, that is one thing that kind of sucks with the Talos reports: you're always kind of left wanting to know a little bit more. They just talk about the issue quickly, show the crash, and they're like, okay, we're heading out. There's nothing on the exploitability angle, which is fair if they don't want to cover that, but I really wish they covered it more. They don't, so it's pretty much pure speculation on that. But yeah, the issues themselves are pretty straightforward; there's nothing super complicated about them. Again, this is another one of those cases where the bugs would still exist but might not be exploitable if they had some stronger defense in depth with the crypto they were using, like asymmetric encryption. I get why it's difficult to do with IoT, but I feel like it's one of those steps that should probably be taken, just because if the packets are properly encrypted, then tampering is already made more difficult, right? You would have to compromise a device which already has those keys or something like that; you couldn't just use any arbitrary device and copy the key over. You could at least make attacking these issues harder. But in IoT it just seems like there's no incentive or drive to do stronger crypto, so, you know, whatever, kind of a moot point I guess, it's just how IoT is.
Yeah, I mean, even just, like, the firmware update, even just having it signed. Yeah, something like that would go quite a
ways. Also, I just caught myself and I'm going to issue a correction: earlier I said this was the first of two Talos blog posts, it's actually the second of three, I guess, because the earlier issue was also from Talos. So
yeah, this one, I mean, they have one main report and then the two sub-reports
for it. Yeah, so next we have a follow-up with some technical details on the Source engine vulnerability, which we talked about a little bit before when we were talking about Valve and their ridiculous bounty situation on HackerOne. This issue was at last finally patched by Valve after two years, and so floesen was able to post a write-up about the problem. It's the RCE that can hit victims through Steam invites in CS:GO. For those who haven't played CS:GO, one of its most popular features is the ability to host and join community servers. Now, that's an action a lot of players probably take lightly, thinking, oh whatever, it's just a game server, I'm just going to go on this surf server or whatever. But those game servers can actually do a lot. They send files that get parsed by the client, and there's communication facilitated between the client and server through the rcon protocol in Source engine. Well, one of the things that Steam invites support is a connect string parameter, which can allow commands to be specified upon accepting an invite to join a user. And, at least until this issue was addressed, there was no indication to the user that the string was being used, or that there were commands being run on their behalf; it just kind of happened in the background. So what you can do is specify an rcon command that gets run on behalf of the user, which in itself could be useful for other attacks, like leaking their IP address or whatever.
Yeah, I guess I'll clarify a little bit. rcon is used where, like, a server admin would be running those commands within their own game console. The connect string you mentioned is added as an argument when it goes to launch the program. So you would use that to say, you know, run this game-console command that's going to join them to whatever lobby ID; that's the intended use case, versus just being able to add anything there. rcon itself is kind of complicated, but if you're controlling things, you've told the victim, go ahead and connect to this server, you give it the IP it's going to connect back to, and it can just accept any sort of authentication or not ask for it. I'm not sure if they really cover how they dealt with the authentication, but presumably you control the server, so it shouldn't be a big deal to just accept whatever. That's where they can then make these other requests or give responses, such as what you were just talking about with the files that can come in. In particular, they found an issue in parsing the screenshot response, which I'll let you carry on
with. Yeah, so one of the things you can do over rcon is take screenshots, which are then sent back to the client in zipped form. Since this file is controlled by the server, which would be attacker-controlled in this case, it's a viable attack surface for attacking the client. They found a stack overflow in the XZip and XUnzip libraries, which the game, or rcon or whatever, uses for handling zip files. The overflow happens due to how they parse the relative offset of the local file header from the central directory file header, which then gets used later on when it tries to process that local file header. If that offset is really large, like 0x7FFFFFFE, there's an underflow that can happen when it tries to calculate the position in the file stream for the header data. Because if the stream position added to the offset to read is greater than the stream length, it sets the read position to the stream length minus the stream position. Well, with that really large value, the sum becomes close to the max value for a 32-bit integer, which, when subtracted from the stream position, leads to an underflow and the ability to smash the stack. Because of the circumstances of the bug, the exploit was pretty straightforward: they literally just smashed the stack with ROP gadgets, and they didn't even need a separate info leak to chain with, because one of the modules, xinput I believe, didn't have ASLR enabled. So as long as that module is loaded at its preferred address, they were basically golden for exploitation. It wasn't 100% reliable, but it works; they say it's around 80% in terms of reliability. And they actually mitigated that as an issue by making the exploit persistent: they bound the rcon command to a key that frequently gets hit by the player in their config. So if it fails, unless the user changes their config or notices the bind or whatever, you can just try it again.
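The exact arithmetic is easier to see in a sketch. To be clear, this is not the engine's actual code; the function name, numbers, and check are invented for illustration. It shows the general failure mode described: a bounds check performed in 32-bit signed math that an offset near 0x7FFFFFFE wraps right past.

```python
INT32_MAX = 2**31 - 1

def to_i32(x):
    """Wrap to 32-bit two's-complement, as C signed arithmetic would."""
    x &= 0xFFFFFFFF
    return x - 2**32 if x > INT32_MAX else x

def seek_local_header(stream_pos, offset, stream_len):
    # Naive bounds check done in 32-bit signed math: with a huge
    # attacker-supplied offset, the sum wraps negative, so the check
    # that was supposed to reject out-of-range headers quietly passes.
    if to_i32(stream_pos + offset) > stream_len:
        raise ValueError("header offset out of range")
    return stream_pos + offset  # trusted by the read that follows

# A merely oversized offset is caught:
#   seek_local_header(0x100, 0x5000, 0x1000)  -> ValueError
# ...but one near INT32_MAX wraps past the check entirely:
pos = seek_local_header(0x100, 0x7FFFFFFE, 0x1000)
print(hex(pos))  # 0x800000fe -- far beyond the 0x1000-byte stream
```

Done in 64-bit or unsigned math with an explicit overflow check, the same comparison would reject the malicious offset.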
So the key that they used was the Tab key, which is the default key for opening up the scoreboard. So if somebody joined the game or whatever and then opened up the scoreboard, they would immediately get hit with the exploit again, which I thought was kind of a fun aspect to it as well. Yeah, in terms of timeline, as you likely already know, it's abysmal, because it's Valve. The issue was reported in June of 2019. The bounty was paid and an initial fix was deployed in TF2, I believe, in October of 2020. Then it was finally fixed in CS:GO on March 17th this year. So yeah, it took almost two years to reach resolution in CS:GO. I mean, it took a year
from, more than a year actually, from the bug being triaged to a bounty being paid. That's a little bit
excessive. Yeah, I mean, it's Valve. I kind of already ranted about this, but they should just not have a program on HackerOne unless they plan to improve it. Because if you're going to leave issues sitting in limbo for a year or two, you're just screwing everybody at that point. You're screwing over the researchers who want to share their work, who potentially want a bounty payout as well. You're also screwing over your customers, because you're leaving these holes in place and not bothering to fix them. And in a lot of these cases, especially Source engine, it's kind of a meme target: you're going to find so many stupid issues in Source that are like 90s-era exploitation, these buffer overflows. They're not super complex bugs. Oftentimes, from what I've seen in these patches, it's literally a one-line change in the source code, something very straightforward like adding a bounds check, and it's kind of outrageous that it takes two years for a one-line code change to land. There's just not really an excuse. In this case it was eventually fixed, but that's a very low bar.
Yeah, it took quite a while to fix it. It seems like Valve has made some statements that part of this is just due to the large number of code bases and teams involved with fixing some of these reports, which to some extent I can understand them saying, because, okay, it's in this game and that game, there's plenty of different areas something could creep up in. But I would assume there's still some sort of central library that they're sharing, though I might be mistaken on that; they might just fork the library off, and then all these teams have to work together to deploy a fix around the same time. So it sounds like some bad engineering policies, you know, feature creep, something like that, perhaps.
I'll say, I think that excuse is entirely BS, because the way that Valve works, and this has been publicly stated, is that at Valve you basically work on whatever the hell you want to work on. You can just move between teams.
So nobody wants to work on bug fixes.
Exactly. That's the problem: there's no credit, there's no incentive for people to fix bugs internally at Valve. They have people working on skins and working on cases, and that's pretty much all CS:GO development is now. So I think the issue isn't that there's some complex mismanagement between teams or whatever; I think it's literally just that nobody at Valve wants to bother, and they just don't care. It's sad, but that's the reality. I think that excuse about coordination between teams is just something to make them sound a little bit better. Yeah, an issue sitting two
years is excessively long. Like, even very large software companies, your big-N companies, can respond quicker, and obviously they have more mature security processes, but I mean, two years. And especially, when we were first talking about there being an RCE, I had some questions over whether maybe there was a more valid reason for Valve to have not fixed it. But this is kind of clear-cut: it's a clear vulnerability, and there's no real excuse on it.
And it's unfortunate, because I've noted in the past on this show that Source engine is kind of an easy target: if you're looking to break into exploitation and you don't want to do complex stuff, and you just want a fun target in general, Source engine is probably an area to look at. And Valve pays pretty well on their HackerOne page, but back then I guess I wasn't really aware of just how bad their program was. If I were covering those topics today, there's no way in hell I'd be recommending anybody take part in Valve's HackerOne program. As much as Source engine is a cool target, if you want to look at it just for the sake of looking at it and trying to do exploits, then fine. But in terms of expecting a payout and trying to go through the program, just don't bother. Your bounty is going to be sitting there for years. It's just going to depress you, honestly; it's probably going to sap all your motivation, knowing that you put all this work into getting a full-chain exploit and just aren't getting any recognition or payoff. It's a waste of everybody's time. Now that said, they do have some final thoughts in the secret club post on the situation with calling out Valve, saying they didn't just want to point the finger at Valve, but that they wanted to try to effect change in how Valve runs their bounty, which I thought I'd shout out since it's there. But I mean, Valve totally deserves to be called out here. I don't think they should feel bad about making that initial statement. I hope the direction does change on how Valve is running the program, but I really don't hold out any hope. If the past ten years have taught us anything about Valve, it's don't expect anything, because they've stated they don't care; they're going to continue making money.
A lot of the players in these games aren't even aware of these issues, or don't even care. So it's like, Valve doesn't care either. That's what I mean by don't expect it to
change. It's basically, this is once again a place where I feel like HackerOne could effect change, could kind of lead anybody that's using their platform toward having a more mature process and actually work toward improving it, rather than just facilitating a program and that's all. I mean, it's great they're doing that, and I've gone on my soapbox about this a couple of times, so I won't do it again. But this is just another example of where HackerOne is in a great position to make positive changes for security, and they just don't.
I mean, I don't know if this is a hot take, but I feel like if you're running a program on HackerOne and you have a consistent track record of leaving issues unresolved for years on end, you should be banned, because
what's the point. So, I've seen that take, you're not the only person that's stated it, and I disagree. I don't want to say these companies just shouldn't get to have any programs at all. First of all, bounties: there are HackerOne programs that don't even do bounties, so we can ignore that aspect. I want to encourage companies to have vulnerability reporting programs, and HackerOne makes that easy for them. I'm not going to suggest we start banning people from having that, making it so this sort of testing either has to be handled all privately or just becomes illegal to do, for some definitions of legal; I won't go into the legal aspect there. At least with this, there's a route for researchers. It's not great, and that's why I think HackerOne should be pushing companies to improve, but I'm not going to support banning programs from the platform just because they take a while to go through with things. I think if they included triage time as one of the metrics on a HackerOne page, that might help researchers decide they're not going to look at a program because the triage time or the payout time is so long. They could include that information to assist researchers in knowing. But I think banning is the wrong route. We want these sorts of vulnerability disclosure programs to exist, and banning them for taking too long is not friendly to the companies at all.
So let me elaborate a little bit on why I would want Valve, for example, to be banned. Whether they're intentionally doing it or not, what this practice essentially does is suppress research, because you're kind of leading researchers on, on a string, saying, if you don't talk about it, we'll pay you out. It might be five years down the line, but hey, we'll pay you out. But don't talk about it, because then you'll be banned from our program, right, that's public disclosure, and we want to be able to fix it first. It's almost actively hurting the research side of things, because nobody's ever able to talk about their research, the issues they found. I want to say it's like buying silence, but it's not even like they're buying it, because they're not paying out for over a year. They're just using promises and whatnot to silence researchers, and that's why I think it's actively harmful. That's why I would kind of want to go that route of banning them off the
platform. Well, as the researcher, if you wanted to give up that payout, you could go ahead and start talking about it. Or, like I said, they're buying that silence; they're paying the price that's acceptable to the researcher.
Yeah, I guess, I don't know. I feel like having a bounty program like that, where you're wasting everyone's time, is worse than just not having a bounty at all. But I guess that can come down
to... I mean, there are bounty programs that don't let you disclose at all, too. Plenty of the HackerOne activity is, you know, non-disclosed
reports. Well, my problem isn't specifically that they're not allowed to ever talk about it; it's that they're not allowed to ever talk about it and it never even gets fixed anyway. So it's the worst of every world, basically. So yeah, I guess it comes down to a difference of opinion on whether you think running a bounty like that is actively harmful. In my opinion it is; I guess in yours it's not. But
well, you do make a fair point about the harm that it does. Like, I don't actually disagree with you on the fact that there is some harm being caused by them delaying disclosure for so long and, as you said, just drawing it out. That is a very good point. I'm just not sure banning a program is the right route to go. At the same time, I was saying I think HackerOne should be pushing companies toward having a more mature process, and I guess banning could be part of that. So I'm not entirely opposed to banning programs, I don't know. That's kind of why I'd rather push things toward being less promoted, less seen, or making researchers aware of these issues up front so they can decide whether or not they want to spend the time on it, rather than just getting rid of the program, because I do want to see more of these programs that just take reports of vulnerabilities. And I don't know, banning the bad ones just feels like it's going in the wrong direction. But your point is valid; I don't have a good response to it, and I don't entirely disagree. I just don't know what the solution is; I just don't like the ban
idea. So banning might be going a little too far, but I think we can both agree that there should be some kind of steps taken, whether it's negative consequences or positive incentives, to get companies to try to remedy their programs, so that companies like Valve aren't just getting away with this kind of crap in the way they're running their bounty program. I will quickly note, someone in chat asked: would HackerOne ban you if you broke disclosure? From what I've heard, generally you're only banned from that program; I don't think they ban you site-wide. That said, I could be wrong, there could be some cases where people have been banned site-wide, but that's kind of anecdotal, right? I don't really do stuff on HackerOne, and I've never tried to break a disclosure policy to see what would happen. But from the cases I've heard of, from people I've talked to, generally you're only banned from that program, which I think is the better stance to take. I don't know if I would necessarily want people to be banned site-wide for violating one program, because in cases like this with Valve, I think you could be justified in some instances in breaking away from it. Or if the company is like, we don't think this is an issue, we're closing this, and then you decide, okay, well, if it's not an issue, then I should be able to talk about it, right? Which I support. So yeah, there are a few different angles to it, but I think everyone can agree here that Valve just needs to change something, which I don't hold out a ton of hope for. But the actual issue is pretty cool, I think, and the lack of needing an info leak was a really neat angle, because that is one thing with Source engine: you can find memory corruption bugs fairly easily in Source engine just by doing basic fuzzing on certain subsystems.
The part that makes it really difficult to get a full chain is that you often need an info leak, and that is a lot harder when you're talking about Source engine, especially remotely, just given the nature of how you're compromising things. Like, map files is a common one, popping a client through map files, and info leaking through that is really tough; you don't have a lot of options. So being able to skip needing an info leak is what makes this issue that much more impactful. For one thing, it's one less thing in the chain that could potentially... I was going to say that could potentially get burned, but it's Valve, nothing's getting patched, or it wouldn't have gotten patched anyway. But yeah, only requiring one bug is something you don't always see with Source engine and targets like it, so it was cool to see that. Anyway, we can move on to our next game hacking topic, which is the 3DS. Yeah, another HackerOne report. This was a heap overflow when decompressing received levels in Super Mario Maker through StreetPass, which is that feature where the 3DS can communicate with nearby systems, so that you can, you know, play with somebody in your house or whatever. But yeah, I know zi, you're into Mario stuff, so you must have found this one kind of cool.
Yeah, well, I thought the exploit was kind of cool, and that it's Mario Maker is just a nice touch, I guess, since I'm at least a fan, although I don't have a 3DS and haven't played the 3DS one. Nonetheless, the vulnerability here is when it receives a compressed level, as you would expect it to be compressed. It will receive a maximum size of 0x18000 bytes, but it doesn't actually validate the decompressed size. So what ends up happening is, as it goes to decompress, it looks at the chunk header and decompresses into this other 0x18000-byte buffer it can write into. So it decompresses the chunk into that decompression buffer, and then, since that buffer immediately follows the other one, as it's looking for the next chunk it'll look at the header it just decompressed and go, hey, is this compressed? Yes, okay, let's keep decompressing, and move on. So obviously with that, you can basically have compressed data inside of the compressed file, so two layers of compression, and that second layer will end up being written past the end of the actual destination buffer. It'll basically just keep going until it can't decompress anymore, because it doesn't actually check that size. So I thought that was just an interesting attack, to layer the compression in that way to get it to decompress. It kind of depends on the fact that those two buffers are right next to each other, that it decompresses into the exact next piece of memory after the original buffer. They don't really talk about how you could gain code execution, it's just, you know, an up to 0x18000-byte overwrite on the heap. You've got a lot of control, as we were just talking about with the last one.
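To make that chunk-walking concrete, here's a scaled-down Python simulation of the pattern described in the report. This is a hypothetical reconstruction, not Nintendo's code: the buffers are shrunk from 0x18000 to 16 bytes, and the 0xC0 chunk tag and the "doubling" decompression are made up. What it does model are the two flaws described above: no bounds check on the output, and the read cursor walking straight into the adjacent output buffer, where attacker-controlled decompressed data looks like another chunk.

```python
# Scaled-down sketch of the reported Super Mario Maker StreetPass bug
# (hypothetical reconstruction, not Nintendo's actual code). The real
# buffers are 0x18000 bytes; here recv/decomp are 16 bytes each.
RECV_SIZE = 16     # received (compressed) level buffer
DECOMP_SIZE = 16   # decompression buffer, directly adjacent on the heap

def decompress_level(memory):
    """Walk chunk headers starting at the recv buffer and expand each
    chunk into the adjacent decomp buffer. Mirrors the reported flaw:
    neither the total output size nor where the read cursor ends up is
    ever checked against the buffer bounds."""
    read, write = 0, RECV_SIZE
    past_end = 0
    while read < len(memory) and memory[read] == 0xC0:     # "compressed chunk" tag
        size = memory[read + 1]
        out = bytes(memory[read + 2 : read + 2 + size]) * 2  # stand-in decompression
        for b in out:
            if write >= RECV_SIZE + DECOMP_SIZE:
                past_end += 1          # writing beyond the decomp buffer: the overflow
            if write < len(memory):
                memory[write] = b
            write += 1
        read += 2 + size               # cursor eventually walks into the decomp buffer
    return past_end

# A level whose decompressed output *starts with another chunk header*,
# so the loop keeps decompressing once the read cursor crosses into its
# own output.
payload = bytes([0xC0, 14]) + b"A" * 12
memory = bytearray(64)
memory[0], memory[1] = 0xC0, 14
memory[2:16] = payload

overflowed = decompress_level(memory)
print(f"{overflowed} byte writes landed past the decomp buffer")
```

With a size check against `DECOMP_SIZE` (or by not parsing chunks out of the output region), the loop would stop at the first chunk and nothing would land past the end.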
ASLR might be an issue here, though. I don't know what the mitigations are on the 3DS. Do you know anything about that, Specter? I know you've been at least more in touch with the console stuff, but I don't think the 3DS is really up your alley.
So, I have looked a little bit into 3DS security. I mean, I haven't gone digging into 3DS code or anything, but I remember seeing the CCC talk given a while back on its security. I don't know specifics on which mitigations are in place, but I believed it was pretty strong as far as that generation of hardware goes.
Chat just mentioned that there's no ASLR on the 3DS, so that does make things easier, if there's no ASLR.
Because I was under the impression that it did have basic mitigations, but actually, yeah, I'm looking it up and some other items mention the fact that, no, there's no ASLR. The one I was reading was talking about savegame vulnerabilities, which is another area that's interesting for consoles, but it's often dead because ASLR pretty much kills all those issues. So
yeah, you might be able to exploit this one. The report doesn't go into it, but the crash looks fairly promising, you control the data that's being written, because you control what's being decompressed. So I would imagine you're probably able to get a useful write here. I don't know about the type of grooming you'd be able to do to control the heap, it might be a little bit challenging if you're trying to hit a victim. I don't know. This also isn't really the type of thing, like, what are you going to do, compromise a ten-year-old walking down the street? I don't know how likely this is to really be attacked. But
well, it would mostly be like a self-attack, but yeah, you'd really need two devices to be able to do it, so it's not even ideal for that,
because I wasn't really thinking of doing it for
homebrew. Yeah, I mean, you could do it for homebrew, but there are better options. Yeah, this one's just not super ideal for either case, hitting somebody else or hitting yourself. Yeah.
Regardless, I think the vulnerability is still kind of cool, like I said, just compressing it so that it will try to decompress the decompressed data. I think that's still kind of
fun. So like you said, not too many details, it's a limited disclosure, which, it's Nintendo, so that was kind of to be expected. The researcher did get a bounty of a very specific amount: two thousand seven hundred and thirty-eight dollars, down to the last dollar, which is kind of interesting. I just thought that was funny. But overall, yeah, I think the issue is cool, and it's always fun to talk about console stuff. Portable consoles especially, the 3DS and the Switch have always been really interesting targets to me, just because homebrew is a lot more attractive on those platforms. Whereas unfortunately on something like the PS4, it really is just a glorified worse-performing PC, so homebrew is not really that fun. But on the 3DS this stuff is really interesting, and the Vita, for example, too. So there are
also, uh, the older handhelds. The PSP, the original PSP, I did a lot of homebrew on
that. Did you? I didn't know that, actually.
Yeah, well, that was back in my teens. Good times.
Cool. So we'll get into our last Talos issue, which was a report on an out-of-bounds write in a 3D printer slicing program called PrusaSlicer. The vulnerability is in the OBJ object file format parsing, particularly when it comes to parsing objects with an invalid number of vertices, because an OBJ file is essentially a flat file which describes vertices that define the shape of an object. So it's not quite an
invalid number, and I have a theory about why this code kind of happened. So the thing is, officially they support objects whose faces have either three vertices, so a triangle, or four, like a square or rectangle type thing. But their code is written assuming four, like a lot of the code here seems to, and the vulnerable code just assumes it's four; if it's a triangle, the last entry will just have a minus one. They actually do a check for an invalid vertex count, I think it's towards the end, they talk about a check where if the face vertex count is not equal to 3 or not equal to 4, it gets rejected. So in theory they're supposed to support the ones with three, but in the actual vulnerability it seems like they just end up assuming everything's got four and didn't actually take the three case into account. Had they not included the not-equals-3 check at the end, this wouldn't have been vulnerable. But sorry, you can't get away without actually explaining the
vulnerability. Yeah, so I mean, that was good to clarify, because this issue was a little bit tricky for me to follow because of that aspect. But yeah, PrusaSlicer and most things only truly support polygons as triangles and squares, which are essentially two triangles, and these sets of vertices are passed in a facet which contains four vertex slots. So when it parses these vertices it expects them to be in groups of four, and like you said, it assigns negative one to the slots that aren't actually used. So what happens, I believe, if you have a total number of vertices that isn't a multiple of 4, is that it'll check the modulus of the number of vertices by four against one and two, but it doesn't check three properly. So if you pass the right odd number of vertices, you can end up with this out-of-bounds read situation. And I believe the reason for that is the code in load_obj will unconditionally loop through the next three vertices, so when it comes to looking for that last vertex, I believe that's where the out-of-bounds read comes into play. It was a little bit confusing.
I know, that's how I understood it too. It assumes there are always the four vertices there. So the fact that you have one that isn't, because the code in theory accepts the three, like I said, but the code is actually written as though it only has the four. So yeah, that three gets you an out-of-bounds, an off-by-one
issue. Yeah, and that can then lead to an out-of-bounds write later on, when it goes to grab the next facet and operate on it. This seems like it would be a tricky bug to exploit. They don't really talk about the exploitability, like we said, Talos doesn't really touch on that stuff, so it's hard to say how exploitable this would be without knowing a lot about the target. The reason I think it would be tough is that heap spraying would be very difficult in this situation, because it's a parser bug. So yeah, I don't know exactly if you would be able to exploit this, maybe you could, but it's hard to say without knowing.
Yeah, to figure that out you'd need to spend more time on it, so we'll just leave it at "don't know." The
code itself is, let's just say, sloppy, because when I was trying to understand the issue and reading the code, they do a lot of stuff in this loop, and they increment the iterator in a lot of different places. That is essentially the basis for how this vulnerability happened: you have these different areas that can increment different iterators and counters that then get assigned to each other. There's just so much going on.
It's always worrying when something messes with its own iterator from within the loop. It definitely happens, there's fair reason to do it, but it's one of those areas that ends up getting messed up a lot.
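The broken count check described above can be sketched like this. This is a hypothetical Python reconstruction of the logic as described by Talos, not PrusaSlicer's actual C++ (the function names and layout are made up for illustration): vertex indices are stored flat in groups of four, with -1 padding for triangles, and the validation checks the remainder mod 4 against 1 and 2 but forgets 3.

```python
# Hypothetical reconstruction of the PrusaSlicer facet-count flaw as
# described in the Talos write-up; names and layout are illustrative,
# not the actual load_obj() code.

def facets_are_complete(vertex_indices):
    """Broken validation: rejects a trailing group of 1 or 2 indices,
    but misses the remainder-3 case, so a truncated final facet of
    three indices slips through."""
    rem = len(vertex_indices) % 4
    return rem not in (1, 2)        # bug: rem == 3 is also incomplete

def read_facets(vertex_indices):
    """Consumer assumes every facet has 4 slots; with a remainder-3
    tail, the read for the 4th slot runs past the end."""
    facets = []
    for i in range(0, len(vertex_indices), 4):
        group = vertex_indices[i : i + 4]
        if len(group) < 4:
            # In the C++ there is no such length check: indexing the
            # 4th slot here is where the out-of-bounds read happens.
            raise IndexError("read past end of vertex index buffer")
        facets.append(tuple(group))
    return facets

ok_input = [0, 1, 2, -1,  3, 4, 5, 6]   # triangle + quad: passes, parses fine
bad_input = [0, 1, 2, -1,  3, 4, 5]     # 7 indices, rem 3: passes the check
assert facets_are_complete(ok_input)
assert facets_are_complete(bad_input)   # validation misses the truncated facet
```

In Python the short slice just raises here; in C++ the equivalent read silently walks off the buffer, which is the off-by-one that sets up the later out-of-bounds write.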
No CFG, that's perfect, that's all you need to know really. I was kind of disappointed, because they talk about the successful exploit strategy not needing a CFG bypass, and in fairness, you know, a library that doesn't support Control Flow Guard, using that is a fair bypass. But when I read that, I was kind of hoping they'd have talked about how they dealt with CFG, rather than just "we didn't need to because of this." It's fair, but I was definitely disappointed, because I was hoping for another write-up on dealing with that. In fairness, a lot of the CFG stuff, it's not fine-grained CFG, you basically just have to call a real function, so there are viable gadgets, it's not like you can't do anything with it. I just like reading how people go about building up their gadgets out of whole functions, how they do their analysis, so I was looking forward to that in this one and didn't really get it. A little bit disappointed, but ultimately it's fair game, like, why would you do it the hard
way? Yeah, it's fair. And I just saw your link in chat, somehow I didn't see that before, but I'm taking a quick look at it now and we'll definitely do a shout-out of it in this episode, which we'll get to in a little bit. So thanks for linking that, because yeah, somehow I missed it, but it looks like a very cool project. For now though, we'll continue with the exploitation stuff. I will say, summarizing this Exodus post, I liked it, I liked how concise and to the point it was. I feel like they covered the sweet spot of background info without bombarding you with it, so I think they got that balance right. The exploit strat was just stack pivoting using a gadget from that CFG-less library, using a small ROP chain to write shellcode, and then using VirtualProtect to run that shellcode. So we'll get into our last exploit, which is a Linux kernel bug detailed by ZDI in io_uring, which is a newer subsystem introduced in 5.1 for doing performant I/O by batching operations together. The reason it's more performant is that it basically allows you to use one syscall to do multiple I/O operations instead of multiple syscalls, and syscalls are pretty expensive in terms of performance. So,
yeah, you can even do it without a syscall at all if you use the kernel polling mode, though that's not something that's exposed to unprivileged users.
Yeah. So, one of the operations you can perform on io_uring is IORING_OP_CLOSE, which queues a work request to close the attached file. Now, they say the problem is in the close operation, but as far as I can tell, and I believe as far as you could tell too, zi, when we were talking about this a little bit, the problem seems to be more in other paths that use that file, not in the close path, because
you can argue it both ways, and I kind of did when we were talking about it. But I guess before we dive into that, the issue itself is that it doesn't increment the reference counter in a couple of areas when it grabs a reference. In a single-threaded run, this fdget will just grab the file but won't actually increment the reference count, and io_grab_files, same thing, grabs the reference without incrementing. So I would argue the issue is with those not incrementing the ref counter, rather than with io_uring closing a file it thinks nobody has. But I could understand the argument that IORING_OP_CLOSE should be aware of those cases and account for them. I don't know, maybe the developers would agree with that. I feel like the blame is on not incrementing the reference counter, though. You should be doing that when you grab a reference, there shouldn't be these exceptions for single-threaded cases or
whatever. Yeah, it seems like a really weird micro-optimization or something. It's not like incrementing a refcount is particularly expensive, so I don't know why you would just be like, oh, it's single-threaded, screw it, let's not grab a real reference. I don't know, it's weird. But yeah, the issue is that that refcount kind of gets desynced, so because you're able to decrement it while the file's still in use, you can cause the use-after-free, and they detail some of the paths that can be taken to abuse that use-after-free. I feel like this write-up could have been done a little bit better. There were some areas where I personally had to go into the Linux kernel source and look at the relevant code to figure out the issue, because they talk about the io_grab_files path in the ZDI blog post, but they only mention it once, in the snippet, and they don't actually talk about how it's intertwined with the other stuff they cover. They do have a table at the end with the exploit timeline, but unless you're intimately familiar with the io_uring subsystem and BPF, it doesn't really mean much to you. So yeah, I was a little bit disappointed with how it was written up, but the issue itself just kind of comes down to not managing that refcount properly. And yeah,
the actual exploit here comes down to that refcount. They end up talking about exploitation through the BPF map_lookup_elem and map_update_elem functions, effectively, because of the dangling pointer, since with the bad refcount the file gets closed while something still holds it. So yeah, it is a dangling pointer. Basically, you end up controlling the values that go into a copy_to_user, so you're able to tell it to read or write kernel memory. I think there's also a wrapped copy_from_user in one of them, I believe, and then obviously there's the copy_to_user. So it's a very powerful primitive, once you've got the file after it's been freed and you've got control over the value.
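The refcount desync pattern is easier to see stripped of the io_uring specifics. Here's a minimal Python model, with fdget_fast and fput as stand-ins for the kernel helpers (the real fdget/fput do much more, this just isolates the missed increment):

```python
# Generic sketch of the refcount desync described for io_uring
# (illustrative Python, not kernel code). fdget_fast models the
# single-threaded fast path that hands back the file WITHOUT bumping
# its refcount; fput models IORING_OP_CLOSE dropping the count.

class File:
    def __init__(self):
        self.refcount = 1
        self.freed = False

def fdget_fast(f):
    # BUG modeled here: returns a borrowed reference with no
    # refcount increment, on the assumption nobody else can race it.
    return f

def fput(f):
    f.refcount -= 1
    if f.refcount == 0:
        f.freed = True        # stand-in for the kernel freeing the file

f = File()
borrowed = fdget_fast(f)      # io_uring holds this across the operation
fput(f)                       # IORING_OP_CLOSE: count hits 0, file is freed
assert borrowed.freed         # the borrowed pointer now dangles: use-after-free
print(f"refcount={f.refcount}, freed={f.freed}, but io_uring still holds it")
```

With a proper increment in fdget_fast (and a matching fput when io_uring is done), the close would just drop the count from 2 to 1 and the object would stay live, which is exactly the invariant the missing increments break.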
So, it uses, let me just, I'm just trying to double-check something in the write-up, sorry. So yeah, the arbitrary read and the arbitrary write are in two different paths, right? Just trying to verify that quickly. Which ones are they? They just talk about the arbitrary read.
No worries, I'm looking. So the bpf_copy_key is a wrapper around copy_from_user, so that's the one that I
was thinking of. Right, okay, so I see. So when they add a key, they can get the arbitrary read through controlling the arguments to copy_to_user. The other path is the update element, which has the arbitrary write primitive, which I believe is through the message handling. So the reason I bring that up, for those not aware: those APIs for copying data to and from userland are kind of smart. It's not like you can just pass a kernel pointer as the destination of copy_to_user, it will verify that it's actually a userland address, so you can't abuse one of them for both an arbitrary read and an arbitrary write, you can only get an arbitrary read out of copy_to_user. That's why that other path is needed for getting the arbitrary write. But yeah, once you have both, it's basically game over in the kernel, all you have to do is smash the cred struct and you're done.
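That asymmetry, why copy_to_user with controlled arguments only gives you a read, can be modeled with a toy access_ok-style check. The constant and the user/kernel split below are illustrative (x86-64-style), not the real kernel's implementation:

```python
# Toy model of the copy_to_user() destination check (access_ok-style).
# The address boundary is made up for illustration; real kernels do
# this per-architecture.

USER_SPACE_TOP = 0x0000_7FFF_FFFF_FFFF   # hypothetical top of userland

def copy_to_user(dst_addr, nbytes):
    """Returns the number of bytes NOT copied, like the kernel API:
    0 on success, everything if the destination fails the user-range
    check (so a kernel-range destination never gets written)."""
    if dst_addr + nbytes - 1 > USER_SPACE_TOP:
        return nbytes            # access_ok fails: nothing is copied
    return 0                     # destination is in userland: copy proceeds

assert copy_to_user(0x1000, 0x100) == 0               # user dst: allowed
assert copy_to_user(0xFFFF_8000_0000_0000, 8) == 8    # kernel dst: refused
```

So controlling copy_to_user's source gives an arbitrary kernel read into your own memory, but the destination check stops it from doubling as a write, which is why the update-element path, where the attacker-influenced pointer ends up on the kernel-destination side of a copy_from_user, is what supplies the arbitrary write.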
Yeah. I mean, it seems like an interesting vulnerability, but I don't feel like the write-up does it justice.
No, and digging into it on your own is kind of tough too, because when I was trying to look for the patch for this issue to better understand it, unfortunately, because the subsystem is so new, the amount of movement happening in the code is huge. Very active. So it's hard to even diff it and figure it out that way, because there's so much code changing. So yeah, I wish this post covered the issue a little bit better. But these refcount-type issues are one of the most popular sources of kernel bugs, because the refcount is the only thing making sure those use-after-frees don't happen. So yeah, that sums up all the exploits. We do have one research post we wanted to cover this week, and it's the awesome Ned Williamson post on SockFuzzer from the P0 blog. In it, he talks about a fuzzing setup he wrote and deployed called SockFuzzer, which targeted the XNU kernel networking stack. Kernel fuzzing is always an interesting topic, because the kernel presents some unique challenges when it comes to fuzzing, such as crashes being a lot more severe to deal with and a lot worse to recover from. Coverage is also harder to manage, you have coverage-flaking issues and whatnot. Introspection in general is just a lot harder to do, plus the state is very complex, you literally have all the processes and threads on the system sharing the same kernel state to some degree. Debugging a kernel is also a total nightmare. I hate it to the point where, if I'm ever trying to figure out a kernel issue, I will literally just modify the source of the kernel and rebuild it, which isn't always an option, especially if you're doing Windows or whatever, but debugging a kernel is just so painful I don't even bother. There's also relatively few fuzzing projects that are open source to look at.
For the kernel, you basically have syzkaller and kAFL, so there's not a lot of public resources on the challenges that come with kernel fuzzing. One of the things Ned talks about here is the different approaches you can take for kernel fuzzing: doing some of the unit-testing type work on code in the kernel directly, or doing what syzkaller does with heavyweight syscall execution, and he mentions some of the trade-offs involved, like accuracy, reproducibility, and the amount of manual overhead involved. Because what he does here is he pulls the kernel code into userland and fuzzes it there, which is an idea we've talked about a little before, maybe not specifically with a kernel. I think we talked about it with TrustZone applets, but it's the same kind of idea, right? You're pulling the code out of something that's expensive to get to in terms of performance and you're putting it in userland, so it's faster, you have more control over the code, and you can more easily rebuild it and whatnot. Yeah, and I think we
talked about that micro-fuzzing, was that the paper
I think so,
it was doing specific Java functions, I think, and we were both saying, you know, we've seen this before, but that was the first time we'd seen a name given to
it. So micro-fuzzing is a bit tricky, because I think we've seen it used in two different contexts: we've seen it used to fuzz patches, and we've seen it used in this context of pulling things into user space and fuzzing them there. So it's a little bit weird, because I think we've seen it used both ways, but I think you're right, I think micro-fuzzing was the term applied in one of the topics we've covered before. So yeah, it's a cool idea, and it does have some benefits, but there are always trade-offs and other challenges that come with it, and that's what Ned goes into detail on in this blog post. And one of the things, sorry to
interrupt, I'll just add on that: it was episode 29 where we talked about HotFuzz, which found DoS vulnerabilities through micro-fuzzing. I think they were targeting, well, anything that did resource consumption, but we ended up talking about that concept a fair bit during that
episode. Yeah, that was a long time ago now. But yeah, he talks about some of the trade-offs, such as having to fake functionality to match the kernel API somewhat. He goes deep into that process of faking functionality, talking about how he used asserts to know when functionality was missing and had to be ported over. He also used asserts to block off privileged code paths, which I thought was cool, the idea behind that being: if you trigger an assert as the crash, and you're using code coverage guidance, then the fuzzer isn't going to get any coverage past that assert, so it's not going to spend time, or at least a lot of time, trying to hit that path. So yeah, there were some really interesting tricks that he used in this
blog post. Yeah, I appreciated a lot of the overview. In a sense, using the asserts like that, it seems obvious, especially if you've done it before or done something similar, but he's laying it all out there, especially for somebody who maybe hasn't done this kind of harnessing. I haven't done a ton of it either, and I learned plenty here, not that I'm trying to come across as if I've done everything. But this was a great write-up in that sense, just because he didn't make the assumption of "of course I just did this" and brush it off. He covered what he did, and there's a lot of value in that
information. Yeah. Near the end, he also goes through one of the findings of the fuzzer, which was an IPv6 mbuf double free, and how he went about triaging it and writing a reproducer going off the information he had from the fuzzer. Overall though, just a really solid post. I think this is probably my favorite post we've covered in a while. Not only did it have a lot of interesting tricks, and like you said, tricks that might be obvious to someone who does it a lot but could be very useful for somebody trying to break in, it was also a really inspirational post, and it could give you some ideas if you're trying to fuzz a target. Even if you're not trying to fuzz a kernel or fuzz XNU, there's definitely something to be taken away here and applied to other targets. So yeah, this was an awesome post, I loved it, to be honest. And it talks about a subject where there's just not a lot of information beyond, as I mentioned, syzkaller and kAFL. Also worth shouting out, the project is open source, you can find it on GitHub near the end of the post. So if there's anything you want to look at, potentially take, or just get clarification on by looking at the implementation, that is available to you, which just makes it even more awesome. So we'll move into our shout-outs section of the show.
First up, actually, I'll inject the tool we just mentioned a couple of topics ago so we don't forget to bring it up. It's Tenet, and I'm probably saying that incorrectly, but it looks like it is a trace explorer for IDA. We both just found out about it on episode, so I have not used it or had the time to really look into it, but glancing at it while I'm talking, it does look interesting. If I were more of an IDA user, I would probably find it quite useful to be able to take a trace and see it within IDA.
That is functionality I have wished for before, definitely. When I was doing PS4 stuff, I was like, I wish I could just see how this path was being taken,
and yeah, where things came from, how they got there. It gives you that, in a sense it's kind of like time-travel debugging. It looks like it tracks registers too, I'm just looking at this one screenshot, it looks like it's showing the registers, well, I guess you could follow the assembly there to figure that out. So that seems quite useful. I wish it was on Binja, might need to port it.
We've got to show Binja some love, yeah. I mean, especially if you're working with a target that you don't have source code access to, although, can you use this on black-box targets, right? I just want to make sure it's not platform-specific. Yeah, as far as I can tell, all you need is the binary and a trace. Oh,
what's it using for the actual trace? Like, what trace format does it expect, like Intel PT or
something? As far as I can tell, I don't see anything suggesting that. So this looks like it could be useful for doing black-box targets.
Yes, so it does mention here, "how do I record an execution trace?" This doesn't do that, you need to provide one, and it says you'll have to use dynamic binary instrumentation frameworks to generate a compatible execution
trace. So they have an FAQ file here, and it looks like they have a tracing readme, sorry, which probably includes more information about the trace format, so you should be able to convert things into it. Yeah, that makes it more useful. I'm glad they don't rely on, you know, you've got to get the Intel PT version or something, or rely on having that
capability. Yeah, tying it to a technology, like a certain other project we've mentioned, kAFL. But yeah, this looks very cool, and like I said, it's something I've wished for before, and it provides a very useful level of introspection. This is from the awesome people over at Ret2 Systems, I think this is Markus's repository. But yeah, this looks like a very awesome tool, so thank you for bringing it up. I'll definitely show it to more people.
Yeah, thanks for the link in chat here.
So yeah. Alright, so we also have some other shout-outs. There was a new zine that went out, tmp.0ut. It'll
be an interesting zine, or "zeen," I see. Okay, see, I think this is another one of those things where I originally used a different pronunciation and it's just carried over. So yeah, this will be an interesting read for some of you. I think there was one article in there that I thought was funny, which was fuzzing radare2 for 0-days, which was a really, really dumb Python fuzzer that shook bugs out of radare2. I mean, it was so dumb that you could probably just have piped data from /dev/random into it and found about the same issues. But yeah, there are some cool topics in here. Some of the zine focuses more on the area of malware, but it's always interesting to see new zines, or "zeens" or whatever, come up, so we wanted to give that a shout-out.
Yeah, it looks like their focus is a little bit more on that malware angle, like the VX stuff. That's kind of what they mention, they kind of call back to the VX scene, so it seems like that's what they're going for, rather than just exploitation. But obviously there's a ton of overlap, and odds are you'll find something interesting even if you're maybe not quite into just virus
development. And Amy just mentioned in chat that she literally just catted /dev/random into r2 and hit issues. So yeah, I mean, it doesn't look like particularly well-written code. And, you know, it's always fun to poke at the people who still swear by it, the people who are like, you're not a real hacker if you use IDA, real hackers only use radare.
Although now there's rizin, or however you pronounce it, there's a new fork of
r2. Oh yeah, I remember seeing something about that, but I didn't look into it too
much. Yeah, and I think Cutter openly supports that now instead of r2.
Does that make it less
terrible? Well, Cutter at least had a kind of decent-looking UI, I'll give it that
much. It looked nice, but it didn't work, it's
useless. Yeah, it was basically a trophy. I've joked about it before, but my r2 experience was, I want to say, DEF CON quals in 2015. It had a web UI you could run, so I was playing one of the CTF challenges on my phone while I was out shopping and waiting in line for something. That's how I used r2, just to kind of keep myself going. Beyond that, I haven't really used
it. Yeah. Apparently that project was spawned over code-of-conduct stuff, according to Amy. And yeah, somebody mentioned Cutter 2.0, as far as that topic goes, Cutter 2.0 was released recently. I mean, I don't know if I'll try Cutter again. I gave Cutter two chances already, and if I tried to load anything bigger than a pebble, it would just crash, and I was like, you know what, this is a waste of time, I'm not even bothering. So maybe that situation's been somewhat remedied, especially if you don't use the r2 backend. But yeah, I don't know if the crashing issues were specific to Cutter or if r2 just couldn't handle anything bigger than two lines of code, so I don't know exactly where to pin the blame for the crashing. But it's just one of those things where I've given it a go and it didn't leave a very good first impression, so I'll just use Binja or whatever, it's better, it has the decompilation and stuff now, there's just no reason for me to use anything else at this point. So we'll move on to our next shout-out, which is tihmstar and Siguza put out their Phoenix jailbreak kernel exploit for iOS 9.3.5. This was a pretty old bug, I mean, 9.3.5 was years ago obviously, but the exploit code is new, so I just wanted to shout it out. I don't know specifics about the bug; I think the bug was found by Azimuth and Ian Beer, and it seems to be an IOKit issue just at a quick glance, which isn't surprising, because at the time IOKit was kind of the bug farm of the day, or the bug farm of the year, I guess. I don't think it's really being looked at too much anymore, because I think Apple's put a lot of effort into killing that attack surface off.
But yeah, for anybody who's looking for new exploit PoCs to look through, this might be of interest to you, especially if you want a bit of the history of what was used to jailbreak iOS in the past. But it won't be totally relevant to today's exploitation, because iOS security is very fast-moving, so there's been stuff added like, wow, why can't I think of the name of it, pointer authentication, PAC, sorry. So this exploit probably wouldn't be resilient against PAC. So yeah, there's a few things that would make it not totally relevant for today, but it still might be interesting for people out there. Our final shout-out is "Experiences with the Apple Security Bounty" from evilbit, where they share their experience with Apple's security bounty program. It's mostly a meta post, it doesn't really go into technical details of a specific bug, but it gives some insights into Apple's program, some critiques but also some praise, like saying their submission process is pretty smooth. But yeah, I just think those insights might be interesting to some people out there. Not too much to cover technically, so we left it as a shout-out. Question from chat: how can you compile and execute C code on iOS, doesn't it have some serious overhead with sandboxing? I mean, yeah, I'm not totally sure with iOS. The common thing I know with iOS, if you're hitting it from the browser, is that because of WASM or the JIT or whatever, you can just abuse JIT memory to run assembly yourself. You don't even need to worry about C, so even if you can't run C code directly, you can just write C and compile it down into, well, actually, since you can just run assembly in the JIT pages, you can run the assembly directly there. I was thinking you could compile down to ROP chains, but you wouldn't even need to do that.
So yeah, maybe it's not super easy to run code on iOS compared to, say, Android or whatever, but you can definitely do it, you know, with enough effort. So
I believe you can still call C from Objective-C, so you should be able to, in theory. You could probably build an app to that
effect. I think some exploits have done that, they require you to self-sign the
app. Yeah, although distributing that is going to be a bit more difficult, but at least for your own exploration that would be an option. Yeah, I imagine for actually pulling it off, somebody working on trying to compromise a device is just going to compile it down to a payload that can be injected from the userland
exploit. Yeah, I don't do iOS stuff, so I'm not sure what kind of restrictions they would have on the C code you can access from Objective-C, because the interop layer there is something I'm not really familiar with. But yeah, another option is just using your own app, kind of like what people do on Android for testing, although Android is a little bit easier because you can just upload a compiled C binary over ADB and do it that way. Obviously with iOS, because Apple hates you being able to do anything on their devices that they don't like, that's not so easy. But I imagine if you're doing that testing often, most people are probably just going to be using Corellium anyway. If iOS is kind of your main career focus, then you're probably going to have Corellium, which I imagine makes it a lot easier to do that kind of stuff anyway. So yeah, that aside, that pretty much concludes all the topics we have for this week. zi, you don't have any last-minute thoughts or additions?
I don't. Okay, alright, cool.