Episode 76 - Transcript

Fake Vulns, More Valve, and an AWS Cognito issue

This transcript is automatically generated, there will be mistakes
Specter
Hello everyone and welcome to another episode of the Day[0] podcast. I'm Specter, with me is zi. In this episode we have a few Valve issues, so that'll be a nice follow-up to a couple weeks ago when we were hammering on them a bit. We also have some fake vulns, an AWS issue, and a Qualcomm bug. First, before we get into those topics, I'll mention we'll be going on our summer break soon. May 24th will be our last podcast episode until September 6th. We'll still be doing some streams and some videos going out; we're not going to be stopping content completely, it's just the podcast won't be running during that time. Just wanted to give you guys a heads up. With that said, we can get into some of our topics this week. So first we have kind of a fun issue that you spotted, zi. Yeah, fun, and also a little bit old; apparently the paper behind this was published at the end of last month, so we are a little bit late on it. But yeah, it's insufficient input validation in Marvin Minsky's 1967 implementation of the Universal Turing Machine. This is a CVE they were issued, CVE-2021-32471, and oh no, I think this is probably one of the oldest vulns that we've covered. Can you think of anything older? I can't think of anything that's older. No, no. What, 40, 50, 60 years have come and gone? Yeah, yeah, I think that might take the cake. Yeah, I mean, this is just kind of, I think, a good example of the issues with this. I mean, they're calling this arbitrary code execution from user-provided data, which is kind of the point of the Turing machine, the ability to run programs. I kind of saw this as, I'll just one-up them and submit one for, you know, Python: the ability to execute arbitrary code if you can provide a file to it. Yeah, it's like the return of that section that we used to have, like "not a CVE". Yeah.
So yeah, it's cool to read, but obviously this is a little bit, it's meant to be humorous. Like they did a paper on the intrinsic propensity for vulnerabilities in computers, using this as the example. It's not a long paper; I'm hoping it's not serious, I read it in a very non-serious way, but it's still kind of funny that they got the CVE for it. Yeah, it just seems like a bit of fun, I wouldn't take it too seriously. And it's not like the vulnerabilities we typically cover, but yeah, not much to say there beyond being able to run arbitrary code. I mean, you know, code bad, basically. All right, so with that we can move into another kind of fun topic, kind of serious topic, which is detecting and annoying Burp users. So zi, remember you said something about this reminding you of a DEF CON talk, right? Or a DEF CON workshop, I forget which one. Yeah, a DEF CON talk; back at DEF CON 21 there was a talk, "Defense by Numbers: Making Problems for Script Kiddies and Scanner Monkeys". Sorry for the people listening who just heard that, I just opened the video. Yeah, it reminded me a bit of that talk; that talk was focused on the use of defensive status codes to basically screw with scanners. This topic is not quite the same; it does talk about kind of breaking Burp, things like the crawler and the active scanner, and some of its functionality being able to break. So there's a relationship there. In general, I will say the security of your company or whatever should not depend on detecting Burp users; this is something you generally don't want to actually be doing, but it is still kind of fun. You could mess with the pen testers or whoever's using it, you know, script kiddies who are looking at your production website, you could mess with them a bit. A few interesting ways to detect it, some not terribly interesting.
Like checking on port 8080, the default port that Burp's proxy is going to be listening on, you know, checking the favicon. There's a little bit about detecting the TLS man-in-the-middle, so basically trying to make a TLS connection from JavaScript in the browser and seeing what the issuer is; like the last one, you can fingerprint it that way. And it has a few other things, different features: the Burp browser extension will add event listeners, or there's the support for certain compression sometimes; it just has a few things that you can detect it based off of. I thought the breaking was a little bit more interesting. Notably, internally the Burp crawler apparently uses a CSV, comma-separated values, and apparently if you end up including a comma within an href attribute of a link, that'll end up throwing an error as it gets added to the site map, so you can basically create problems there. Similarly, there's the little § character that's used in Intruder to mark payload positions, which gets replaced at some point, so you can detect if Burp's being used that way and kind of break Intruder that way. So just kind of a couple fun tricks there; nothing really earth-shattering, it's just the thought of doing it to mess with somebody testing is a little bit humorous. One of the last things I will mention: they do also call out doing things like, if you end up seeing a burpcollaborator.net URL, you should try and query those later, maybe on a random delay, just to trick the scanner into thinking it got a hit; adding various scanner-expected responses into the comments, and things like that. It's just generally some fun ways you can screw with the testers. Like I said, I wouldn't recommend doing this for real, but it is fun to think about nonetheless. Yeah, yeah.
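The CSV-breaking trick they describe boils down to classic delimiter injection. As a rough illustration (not Burp's actual code, just the general failure mode), here's how a comma embedded in crawled link data corrupts a naively serialized CSV row, while proper quoting survives the round trip:

```python
import csv
import io

# Hypothetical sitemap rows a crawler might record: (url, link_text).
# The second row's link text contains a comma, the CSV delimiter.
rows = [
    ("https://example.com/", "Home"),
    ("https://example.com/a", "Hello, world"),
]

# Naive serialization: just join fields with commas, no escaping.
naive = "\n".join(",".join(fields) for fields in rows)

# Reading the naive output back: the embedded comma splits the
# second row into three columns instead of two, corrupting it.
parsed_naive = [line.split(",") for line in naive.splitlines()]
assert len(parsed_naive[1]) == 3  # corrupted row

# The csv module quotes the field, so the data survives intact.
buf = io.StringIO()
csv.writer(buf).writerows(rows)
parsed_safe = list(csv.reader(io.StringIO(buf.getvalue())))
assert parsed_safe[1] == ["https://example.com/a", "Hello, world"]
```

Any consumer that stores crawled, attacker-controlled strings in a delimited format without escaping is open to the same trick.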
I feel like, I couldn't tell you when exactly, but I feel like we have tangentially touched on that talk before. I feel like we had a topic that was doing something kind of similar, messing maybe with HTTP headers or something to try to cause the browser to do a thing, or like the browser would respect it but a scraper or Burp or whatever wouldn't. I forget exactly what the topic was, but I feel like we have touched on it briefly before. But yeah, that's actually one of my favorite DEF CON talks, so I've probably mentioned it before. Yeah, I think I've checked it out; I don't think I watched the whole thing, but I watched a few minutes of it at some point. Yeah, it's one of those fun ideas where it's abusing the discrepancies between the browser and how the automated scanners work. But like was said in chat, it's kind of counterproductive, it's not something that you should actually do; it's basically security theater, which I'm not a huge fan of. But, you know, the concept of it is cool, even if I don't really like the end goal. Yeah, like I said, your security should not depend on something like this. Yeah, exactly. All right, so we'll get into the more serious topics now. Chrome 90 has dropped, and with it has come Control-flow Enforcement Technology (CET) support on Windows for the latest CPUs, being Intel 11th gen and AMD Zen 3. So we've talked about CET a little bit before on the podcast, and shadow stacks. Shadow stacks, for those who don't know, are basically where you have a clone of the stack that gets verified against, or used exclusively for, control flow; specifically, it's a clone of the return addresses on the stack, it doesn't store the other values on the stack. Yeah. So what that provides for you is some backward-edge CFI. Right.
So if you have a stack overflow or something and you smash the return pointer, you're not going to be able to get code execution, because even if you can kind of get control of the flow, it's going to detect that and it's going to block it; it's going to crash the process or whatever. But yeah, so Chrome has finally shipped support for CET. It is notable though that it's only backward-edge; it does not have forward-edge CFI, which is extremely notable in a case like a browser, because vtables are everywhere and are an extremely common attack vector for getting code execution in browsers. But I mean, using a ROP chain is also very common, so even if you corrupt that vtable and then you go into a ROP chain, it's still getting blocked at that level. That's true. I mean, they only have backward-edge; I'm not sure if they have any forward-edge just at the software layer within Chrome. I don't believe so, because it's hard to kind of do that with JIT. Windows doesn't have support for the forward-edge either, using CET, the hardware version. And because this ends up relying on the Windows implementation to some extent (when the exception happens, it hits the operating system, which decides whether or not to let the program continue), without Windows supporting it, Chrome won't be supporting it either. Yeah, and the other thing I'll mention: they also note that this won't be enabled in the renderer, since the renderer can already access read-write-execute memory anyway, because it has to be able to do JIT. So because it would be not straightforward to support CET in the renderer, and the fact that it wouldn't really be that useful, they just state that it's not going to be enabled in the renderer.
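To make the shadow-stack idea concrete, here is a toy Python model (purely conceptual; the real mechanism lives in hardware and the OS, not application code): return addresses get pushed to both stacks on a call, and a mismatch on return is treated as a backward-edge hijack:

```python
# Toy model of a CET-style shadow stack. The normal stack is
# attacker-corruptible; the shadow stack holds only return
# addresses and is write-protected in the real hardware design.
class ShadowStackViolation(Exception):
    pass

call_stack = []    # normal stack (corruptible)
shadow_stack = []  # shadow stack (protected copy of return addresses)

def do_call(return_address):
    call_stack.append(return_address)
    shadow_stack.append(return_address)

def do_return():
    addr = call_stack.pop()
    expected = shadow_stack.pop()
    if addr != expected:  # mismatch => control-flow hijack attempt
        raise ShadowStackViolation(
            f"return to {addr:#x}, expected {expected:#x}")
    return addr

do_call(0x401000)
call_stack[-1] = 0xDEADBEEF  # simulate smashing the saved return pointer
try:
    do_return()
except ShadowStackViolation as e:
    print("blocked:", e)
```

Note this only models the backward edge; an indirect call through a corrupted vtable pointer (forward edge) never consults the shadow stack, which is exactly the gap discussed above.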
So yeah, they are very clear here about what the mitigation will and won't protect against. It's not going to stop every attack, obviously, but it's just another layer to add in there. And especially in the browser space we've seen, especially this year, that it's needed; I mean, how many in-the-wild Chrome attacks have we covered this year? We've seen a bunch dropping in the last month, or a few weeks ago. But yeah, I appreciated this topic a lot, just because they were very realistic about what they're getting out of this and what they already have. They do talk a little bit about CFG, which is Windows' forward-edge Control Flow Guard; so they do have forward-edge kind of at the software layer, it's just not using Intel CET. But yeah, they were very realistic about what you're actually getting with this. And I think there's a lot of, I guess, kind of concern, you know, especially after the Windows Internals "RIP ROP" post that I believe we talked about on the podcast quite a while back. Especially after that, this is just kind of a bit more sane; well, I won't say more sane, because the RIP ROP post was actually fairly sane and just the title was a little bit excessive, I think, if you only read that. Yeah, so overall I think it's a good move from Chrome. The biggest limiting factor here, though, I think is the fact that CET is something that not a lot of people have; not a lot of people have Zen 3 or 11th gen Intel, especially with the crazy silicon shortage going on. So unless you have one of those newer CPUs, you're not going to be able to really take advantage of this. Yes, exactly. Going forward, this will be more
useful down the line when more people can get access to that hardware. But at the moment, it's not really too much of an issue for attackers. So yeah, just be aware of that, that it's not the end-all be-all by any means. All right, so we'll get into some of our exploits of the show. Our first exploit is a rate-limit bypass in AWS Cognito, which could allow password reset tokens to be brute-forced. The root cause is kind of fun, it's a race condition where you could exceed the limit by just sending multiple concurrent requests. So instead of being limited to the 20 guesses per hour that you're supposed to be, you're actually limited to 1,587 per hour, which is what they measured. Well, in theory you could probably go beyond that too; that's just the maximum they managed to get through. Yeah. I said this last time we talked about a race condition on the web, but I love these vulnerabilities. They're just so simple to exploit and they usually have a fairly reasonable impact; in this case, you know, getting around the rate limit on checking reset codes, so you can check a fair few more than you should be able to. And reset codes generally are not the strongest in terms of the amount of entropy; I think they say they have roughly 20 bits of entropy here. So each hour, yeah, they were able to test enough for around a 0.16 percent chance of getting the right code. But yeah, the issue just comes down to, and we don't have the actual code, but presumably a request comes in and it goes, hey, is this rate limit too high? If it's not, it will increment the counter and let the request go through; if it is, it will reject it. So by sending like a thousand requests at once, several of them can pass that check before the rate limit actually gets incremented for everybody.
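The check-then-increment race described above is easy to reproduce locally. This is a sketch of the presumed logic, not AWS's actual code; the sleep stands in for the time a real request spends in flight:

```python
import threading
import time

LIMIT = 20    # intended guesses per window
counter = 0   # requests counted so far
allowed = 0   # requests that actually got through
tally_lock = threading.Lock()

def handle_request():
    """Racy check-then-increment, like the presumed Cognito logic."""
    global counter, allowed
    if counter < LIMIT:      # 1. check the limit
        time.sleep(0.1)      # window while the request is "in flight"
        counter += 1         # 2. increment afterwards: too late
        with tally_lock:     # lock only the tally, so the race stays visible
            allowed += 1

threads = [threading.Thread(target=handle_request) for _ in range(200)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(f"{allowed} requests passed a limit of {LIMIT}")
```

All 200 concurrent requests read the counter before any of them has incremented it, so every one passes a limit of 20. The fix is to make the check and the increment one atomic operation (a lock, or an atomic increment in whatever datastore holds the counter) rather than two separate steps.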
Yeah, so it's not something where your account is instantly going to be compromised. It is quite a significant jump though, because like you said, it's a 0.16 percent chance per hour, where normally it's supposed to be like a 0.002 percent chance of being able to brute-force it. Which is significant, because it means it can lead to your account being compromised within 650 hours instead of like 50,000 hours. So, you know, it's still quite a bit of time, but it's not as much as it should be; so yeah, still a significant issue. It was reported March 22nd, and it was fixed by AWS on April 20th. They say they don't have any evidence that this issue was being actively exploited, which isn't too much of a surprise, but I just thought I'd throw that in there since they mention it. But yeah, it's one of those things where it's easy to miss when you're writing it and doing a review, just forgetting that requests can come in concurrently, not thinking about race conditions in the context of the web. So yeah, it's easily overlooked, and we see it a lot. A lot of the time, though, it's not, well, I was just saying how a lot of the time this comes up where it is impactful; there are also a lot of cases where it's not all that impactful, like, you know, some random increment that just ends up being a little bit wrong and doesn't actually matter. But there are a lot of cases like this where what's being incremented does matter. Yeah, and another case that I remember we talked about before was, I think it was in CTFd, but it was a CTF platform where you could submit flags multiple times because it didn't mark them as solved. Yeah, we did; although we weren't told what CTF platform it was, the author just said it was a CTF platform. Oh, right, okay, fair enough. I thought it was CTFd for some reason, but yeah, that rings a bell. But yeah, I know the author
never attributed it in the write-up of the vulnerabilities. Yeah, good clarification. But um, yeah, just a fun issue. All right, so next we have a blog post on an authentication bypass in an Asus gaming router, because I guess we need RGB on routers now too. So they found a vulnerability when digging into the admin panel exposed on HTTP. They identified a set of circumstances that can lead to a session cookie not being properly validated for the session state, and that's because when they do the auth checking, they do a strcmp comparison for comparing the token against the IFTTT token. So if the device has not been configured with that token, so it's left null, and you set your token to null, you can force that to evaluate to true. And this is definitely not a new issue; we've covered issues like this before on the podcast, or similar issues. There is an additional check on the user agent that you have to pass as well: you have to set it to like an internal service ID string, but they just kind of touch on that in passing, they don't go too in depth on it. Yeah, they don't even show the code that checks that, so I don't know; that did stand out to me, because they make it clear there's this requirement to do that, and then they don't show us why there's that requirement at any point. I mean, it's not that unexpected that a service might be using that, but I'm not sure, because it seems like this functionality, the reason we're getting in, is because it thinks that we're IFTTT, the If This Then That application, and it's comparing their token. I wouldn't expect IFTTT to be using the Asus service user agent; that doesn't seem like a reasonable expectation. I'm not even sure if you can control what user agent IFTTT will use. So I wish they would have included that bit of information, because I'm genuinely a bit curious why it's necessary. Oh, you know what, now that I'm actually talking about it, it might be the case that they do.
Once you're logged in, they're seeing what user agent you authenticated as, and if it's IFTTT, you know, whatever user agent it does use, it might check for that and decide that, well, this is IFTTT, therefore it only has access to a limited set of functions, whereas if somebody's accessing using the service user agent, it would have access to everything. So perhaps something like that, but they don't really clarify why at all. Yeah, it would have been nice to get some more information on that, but I guess they figured it wasn't really relevant to the issue they were covering, which is fair enough, I guess. But yeah, that allowed you to bypass authentication and perform admin actions on the router. They went into quite a bit of the RE here, since there was no source; a little bit too much, I feel, but if you enjoy the reverse engineering, then maybe you'll feel differently on that and you'll appreciate that aspect of it. Asus released a firmware update that patched the issue at the end of last month. So yeah, that's how the timeline looks for when it was fixed; I don't think they state exactly when it was reported, so yeah, I can't comment too much on that. But yeah, that's how that one worked out. All right, so we have two vulnerabilities and a false padding oracle in Azure Functions this week. This is a blog post by Paul Litvak. We'll go through the vulnerabilities first, both of which were accessed through environment variables, and then we'll get to the false padding oracle. The first is a privilege escalation through the SCM run-from-package variable, which holds a URL used to download the Azure Function package. It also had read/write permissions attached to it in the request, which meant you could get it to overwrite the function package and inject your own code, which you could use to give yourself a persistent backdoor that would run every time the function was invoked. Well.
So you wouldn't need to exactly modify the environment variable. The environment variable includes, you can kind of see where they censored it out here, the sig= parameter; that's the SAS token signature. And basically, that's where it's getting the permissions from: it signs that token, and then you can go and access the actual storage with it. So because that token, the SAS, is actually scoped to have write permissions, you can replace the file that it's going to download. So you don't have to change the environment variable, you can just straight-up replace the file that it goes to grab. Yeah, it's a similar thing to S3 and having signed requests there. And just since you were talking, I thought I'd look it up: SAS is Shared Access Signature, so yes, that's what it stands for. Carrying on from that, so you could backdoor it if you've got code execution; that is a somewhat big ask, but I mean, we talk about people getting code execution all the time, so it's not that crazy; it's just, you know, getting code execution in the Azure Function. Yeah, so the second bug was through a different environment variable, the container start context SAS URL, which contained the encrypted context for the configuration that initializes the Azure Function. So by pulling the SAS token out of that URL, they were able to list other Azure Function configurations that they didn't own, though that was limited in impact because it was an encrypted context; you would need the AES key to be able to get the plaintext anyway. And that's
where the false padding oracle comes into play, because there was a public endpoint on the mesh HTTP server which took an auth token field in the HTTP headers. If you sent a malformed token, it would give you an error message telling you if the padding was incorrect when it tried to decrypt that header, using the same key used for the container context, as far as I understand. So they tried to abuse that to chain with the previous issue, in order to get like a full leak. Sadly, though, they couldn't get it to work, and after looking at the binary that ran the web server, they discovered that the code for performing the unpadding was actually broken, or just incomplete. So because of that, it was kind of like two negatives make a positive; as they said in the blog post, the one bug canceled the other bug out, and that padding oracle ended up not being useful. So yeah, I really wish that one would have landed; if that had worked out, this would just be a really awesome chain. That said, I do kind of wonder if the unpad implementation was deliberate in terms of how they didn't implement everything, just so it wouldn't be vulnerable to a padding oracle attack. Yeah, I was looking at their implementation compared to what they said it normally should be, and it was a bit strange that they would do it like that. So yeah, I don't know if that was deliberate or not; they don't state if it's deliberate, we don't actually know. It's just they don't actually do the checking for each byte, they just kind of shortcut it, basically. Yeah, so if it wasn't deliberate, then Microsoft got really lucky here and the researcher got really unlucky. Like, I feel so bad, because I would be kind of upset if I found such a cool avenue and this weird random thing that you would never expect just completely tanks your chain.
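The "broken unpadding saved them" point is easier to see side by side. This is an illustrative sketch, not Microsoft's actual code: a padding oracle needs the server to distinguishably reject bad padding, and the shortcut variant never does:

```python
# Why the incomplete unpad killed the oracle: strict PKCS#7
# validation leaks padding validity; the shortcut leaks nothing.

def strict_unpad(data: bytes) -> bytes:
    """Proper PKCS#7 validation: every padding byte is checked."""
    n = data[-1]
    if n < 1 or n > 16 or data[-n:] != bytes([n]) * n:
        raise ValueError("bad padding")  # <- the oracle signal
    return data[:-n]

def shortcut_unpad(data: bytes) -> bytes:
    """Broken variant: trusts the last byte, checks nothing else."""
    return data[:-data[-1]]

good = b"hello world" + b"\x05" * 5
bad = b"hello world" + b"\x99" * 4 + b"\x05"  # wrong padding bytes

assert strict_unpad(good) == b"hello world"
try:
    strict_unpad(bad)  # strict version errors -> attacker learns validity
except ValueError:
    print("strict unpad rejected it: oracle signal")
print(shortcut_unpad(bad))  # broken version "succeeds": no signal at all
```

With the shortcut version, valid and invalid padding produce identical behavior, so the attacker's byte-by-byte decryption loop has nothing to key off of.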
So yeah, really unfortunate for the researcher. It is kind of false, so I will call out having overly verbose error messages. It's one of those things that gets reported and is often just overlooked, and even though it didn't work out in this case, it's a good example of why you shouldn't have excessive error messages. You should just have a narrow set of errors, some minimal information for an attacker, rather than sharing an actual exception or stack trace or something, like in this case. Because that can disclose a lot of information that you might not expect, as was the case here; even though it didn't work out, it's still, I think, a good example of why it kind of matters to censor your error messages. Yeah, for sure. All right, so we have a post on hacking Google App Engine, the anatomy of a Java bytecode exploit. So this is an extremely long post that goes into a lot of background, so we're not going to be covering everything. In fact, I think you can skip quite a bit of the background info if you want to get straight to the issues, which is probably what I'd recommend. I know it's pretty much impossible to read all of this in one sitting. Yes. Um, I mean, it starts off covering a bug that it doesn't actually end up needing to use, still a really cool bug, I'll touch on that in a moment; then it gets into the actual bug; and then most of the document is actually about crafting the final bytecode and some issues he runs into with doing that. There's just a lot of detail and a lot of methodology, a ton of information. We're only going to cover the core of it. Actually, before we do that, I do want to touch on one thing from LM on PC, who just mentioned in chat, and this is related to our last topic: silencing the errors sounds like security through obscurity. Fair point, I hadn't really considered that. The security of your application, again,
I will argue, should not depend simply upon the fact that you don't disclose error messages. But that doesn't mean that obscurity is a bad thing; it's just you shouldn't have your security depend upon it, it shouldn't be your only line of defense. Like I said, it's not that it's bad, it's just you should have other things with it, that's all. Yeah, and that's kind of what I was saying earlier too, with the Burp topic: your security shouldn't depend on blocking or detecting Burp users. If you wanted to do that, that's really annoying, and it's actually probably hindering the actual testers who might be using it. But technically speaking, it is adding a little bit, not very much, but it is adding a little bit of security. Similar here: by not disclosing error messages and giving a more human error message, perhaps you might actually be improving the user experience, because users don't care that it's bad padding, they're not going to know what that is if they see it, but it does help an attacker. So I'd say, like, yeah, your security shouldn't be dependent only on that obscurity, but there is benefit to some obscurity; I don't think obscurity is completely useless when it comes to security. All right, so jumping back onto hacking Google App Engine, that long post, a lot of details in there. I'm going to focus on the vulnerability, but first, the first vulnerability that ends up being covered, which I thought was kind of interesting to at least talk about, ended up taking advantage of the fact that, actually, let me get some context, I should get into that first. With App Engine, you could upload your Java class files and it would execute them, and it would rewrite the class files using this ASM library. So I'm going to be referring to ASM, referring to that library and not normal assembly. It would implement certain checks.
So in Java, you've got classes like the reflection class, or you've got class loaders to do custom class loading and assign permissions; with those classes, you can ultimately get code execution through them. So obviously they're letting you run some Java, but they're not wanting you to just be able to run arbitrary code within App Engine, outside of their own little sandboxing. So the way they went about doing that, without restricting the actual authors of the App Engine applications, was to implement kind of a runtime instrumentation. So they would add in some extra security checks: when you use the reflection class, it would add in checks to make sure you weren't trying to access any of the dangerous functions at runtime. And the way they did that was using this ASM library, which would parse the class file, parse the bytecode, and inject those checks where necessary. So the vulnerability that they end up finding, the first one that they end up finding, which I just thought was interesting: normally, in a normal class file, you would have your max stack, max locals, and code length fields as part of the Code attribute, and they would be two bytes, two bytes, and four bytes respectively. In very early Java class files, before version 1.0.2, which was the first stable Java, the sizes were actually just one byte, one byte, and two bytes respectively. So you could end up creating a class file which effectively would get treated inappropriately, because most things weren't really aware of that pre-45.3 format; for whatever reason, the bytecode version that Java 1.0.2 included in the class file was 45.3, no idea why that is, probably some historical reason, it's not really covered here and I don't know it offhand. But you could basically take advantage of the fact that the sizes were different to craft bytecode that the JVM would parse one way,
but ASM, when it was doing the rewriting, would parse another way, so you could have different bytecode. He actually linked to a crackme that took advantage of that issue, which you can check out; it's linked in the post itself, I won't add it to the description. But that's kind of the first issue. They didn't use it, because in doing that code injection, App Engine would automatically set the version to 49 as your minimum version, so you couldn't really take advantage of the difference in parsing. I still thought it was a really cool vulnerability. Whereas the actual vulnerability that they actually used
had to do with how strings were stored, and specifically the string length. When ASM reads a string, it reads it like a Java string: you get two bytes giving the length, and then that many bytes of actual data. Not really a crazy structure, prepending the size so you always have it and don't need a null terminator; so you can actually include nulls in the string. For including nulls in the string, they used Modified UTF-8 (MUTF-8) encoding, which I was unfamiliar with. Apparently it does a couple of things; most important here is the fact that nulls are actually rewritten as a two-byte sequence, so it doesn't actually include null bytes in the string itself, or in the file itself, and that just allows some other functions to work. So, getting back to the size: what was wrong with ASM is that it didn't actually check for any overflowing in the string lengths. So if you could make it think that you had a 65,536-byte string, because that just overflows into a third byte of size, it'll truncate and just write 0 as the actual size. Which of course means that when the JVM is parsing it, it sees 0 and goes, oh, the string is done immediately, and starts parsing the data of the string as more bytecode, allowing you to inject bytecode that ASM won't see, so it won't be able to add its instrumentation into it. The challenge ended up being how to actually provide a class file that ASM could read but the JVM couldn't, or sorry, I don't know why I said that, that ASM would read with that wrong size, because obviously that's too large for an actual class file's two-byte length field, so it couldn't actually exist directly. So it took advantage of the MUTF-8 handling that I mentioned, where ASM would actually accept strings that had raw null bytes in them and would just rewrite them into the two-byte replacement, whereas the JVM just won't accept that at all.
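The length wraparound is simple arithmetic once you see it. Here's a simplified sketch (it only models the NUL expansion, ignoring MUTF-8's other transformations, and is not ASM's actual code):

```python
import struct

# Modified UTF-8, as used in class files, encodes each NUL (0x00)
# as the two-byte sequence C0 80, so re-encoding a string with
# embedded nulls grows it by one byte per null.
def mutf8_len(raw: bytes) -> int:
    return len(raw) + raw.count(b"\x00")

# A string crafted so the re-encoded length is exactly 65536:
# 65535 raw bytes (fits the reader's 2-byte length field), with
# one embedded NUL pushing the rewritten form one byte over.
raw = b"\x00" + b"A" * 65534
assert len(raw) == 65535          # reader accepts it fine
assert mutf8_len(raw) == 65536    # rewritten form is one too big

# An unchecked writer emits the length modulo 2**16, big-endian:
written_len = struct.pack(">H", mutf8_len(raw) & 0xFFFF)
assert written_len == b"\x00\x00"  # length field now says: empty string
```

So the rewritten class file declares a zero-length string followed by 65,536 attacker-controlled bytes, which the JVM then parses as ordinary, uninstrumented data.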
So because of that difference, they were able to craft a string that has some of those null bytes in it, to get just over the size, to make the size wrap around, to overflow the size by including the null bytes. That allowed them to effectively inject, depending on the number of null bytes, as many bytes of bytecode as they wanted. From there, the rest of this is basically actually building up the exploit, building up the bytecode that would work, and like I said, a lot of difficulties with that, but on the whole, fairly interesting. Yeah, it's a really cool attack; in a way, I know it's not exactly like it, but the attack complexity kind of reminds me of the JIT-based bugs that we cover in browsers sometimes, in the way that there's a lot of background information, there's a lot of complexity in how it works and how you exploit it and what the vulnerability even is. But there's so much background information here that if you wanted to learn a little bit about that area, you've got a rich breadth of information right there. Unfortunately, I don't think I mentioned this, I should have at the start, but this is from 2013, so this isn't something that's still relevant; he kind of indicates that ASM checks the sizes now, and Java has obviously changed quite a bit since 2013. So the vulnerability is from 2013, but the write-up is new. Okay, I just wanted to clarify that, to make sure. Yeah, that's interesting; it's not too often you see people go back and write up stuff that's that old. Well, he mentions that he kind of sat on it for a while. Apparently he was an intern at Google, and just during the last week of his internship he was allowed to try and attack App Engine, which is where he ended up doing a lot of this. He mentions that he kind of recreated the code; this isn't the original code, the obvious thing being App
Engine is kind of Web app or is he, right? So he's like Main and stuff. He actually touches on the those differences kind of, get ahead. If I'm on the whole though. Like yeah, like I said, a lot of background in here. Likes it very long heart. Actually cover all effect. But like on a whole, the vulnerability is just mess up in. Calculating the well in accepting basically the wrong type of strings and rewriting, it leading to that size overflow. Like it, it's kind of a simple thing to understand. You just kind of need to understand all these background components in order to get why that matters all the contacts? Yeah, yeah, I mean, that's the only thing that that kind of sucks. But this blog post said, it's so long that it soon. As you see that, scroll bar, it is really intimidating. It's kind of like some of those longer P0 posts we've covered. Although this is even longer than those I think so. Yeah. I mean it's a very long read II, couldn't read it all in one sitting for sure but Yeah. I mean that there's a lot there for anybody who's interested in the links, we always keep the links in the description for anybody who wants to go into the background info even though we don't cover that here on the podcast. And I guess I will mention the remediation here was that the app engine basically just alter the system. So after I ran a SM, it would run it again on it. So no part to check for those discrepancies. So eliminating the entire class of issue. Yeah, up next we have a bug in Facebook product called workplace which is like Facebook but for Enterprise which I'd never heard of until this topic. I don't know if you had zi but yeah it was kind of a hundred times. Never use that. Yeah, so pretty quick and straightforward issue. Workplace has a feature called self invite. Where if it's enabled, it allows anyone with a company email or a whitelisted email to join. 
The problem is, though, the validation on whether or not the domain is allowed was not actually enforced on the server side, for whatever reason. So just by capturing and changing the request, you could substitute the email for a non-verified one and then join the workplace. Yeah, so one thing stood out to me here. The way he goes about actually doing this is he takes a normal request for the invite and then changes the community ID to another community. So it's not... like, I read this and I think he says it never bothered verifying that email at all, but my guess, because of how he went about this, needing to intercept the request from a good invite, from an actual Workplace, rather than just being able to apply directly to the other ones, I would guess there was another server-side check that just happened in another request. Rather than it just being the fact that it never does it, it just doesn't do it on the actual self-invite endpoint, or when it's actually creating the account. Yeah, so a bit strange. It is worth noting, given the wording of the article, it seems that this self-invite feature is not enabled by default. It's something that you have to turn on in your organization or whatever. Although, for most... the few companies I do know of that were using Workplace did have it on. That said, that's anecdotal; that's just my personal experience. So I think that's fair to add, because when I was reading that it wasn't enabled by default, I was thinking the bounty payout gets really surprising when you factor that in, because they paid out 27.5K for this bug. It was a 25 thousand dollar payout with a 2.5K bonus, which is awesome for the researcher. But that really caught me off guard, given the fact that it's not something that's enabled by default. So I'm guessing it's something that, even though it's not enabled by default, is used so much that it didn't really matter, I guess.
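As a toy model of the missing server-side check, here's roughly what the tampering amounts to. The field names and IDs are hypothetical, since the report doesn't give the exact request format; the point is just that the server trusted whatever community ID and email arrived in the request.

```python
def forge_self_invite(request: dict, target_community: str, email: str) -> dict:
    # Toy model: the allowed-domain check effectively ran before this
    # request was sent, so the self-invite endpoint accepted whatever
    # community ID and email it received without re-validating them.
    forged = dict(request)
    forged["community_id"] = target_community
    forged["email"] = email
    return forged

# A legitimate invite request, intercepted and modified in transit.
legit = {"community_id": "1111", "email": "user@allowed-corp.example"}
forged = forge_self_invite(legit, "2222", "attacker@evil.example")
```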
But yeah, that struck me as a little bit odd, but with that anecdote put in there it kind of makes sense, if I take it at face value. Yeah, like most people just use it, so it might as well be default, basically. Yeah. I mean, we don't really get information on how they decide the payout. Yeah, we're completely in speculation town there. But yeah, we visit there sometimes because we just can't get all the information. I guess part of it, though, they do mention... I'm actually just reading the Facebook thing to see if there's something new in it... it would have allowed a malicious user access to a company's Workplace environment via that invite. I guess the other thing was, you didn't need to know the company or community ID; he did find an endpoint that would allow you to find that, but at the time of the report he didn't have that, or at least the initial report. But yeah, I guess even reading the Facebook response here doesn't really include very much, besides the fact that he had a bonus on there for being in the Gold League of their Hacker Plus program, which was just twenty-five hundred dollars. So it doesn't explain the whole bounty. I assume that's just because you do ultimately get full access to the Workplace, which you should not have. But as I said, it is an awesome payout amount for the researcher, and that's one thing that's cool to see with covering the Facebook issues: their payouts are really good generally. I mean, Facebook, I think they might be the highest-paying company I've seen when it comes to web issues that we cover a lot on the show. So yeah, their payouts are usually so good. But yeah, the payout amount isn't too surprising on its own, it's just that aspect of it not being on by default, but whatever, that's by the by. All right, so we'll get into a post from a friend of the show on this week's episode.
And it is also our first Valve issue in Source engine that recently got disclosed. So those on our server who have the 0day Fans feed, or go to the website or whatever, might have noticed there were like five Source engine reports that were disclosed last week. Valve went on a rush to fix a bunch of issues, I guess because of public pressure, which honestly I didn't think they would respond to. In fact, there was really just one more disclosed like 15 minutes before we started the podcast. Awesome. But yeah, so there's been a lot of stuff that's been disclosed; researchers are able to talk about stuff they've been sitting on for a while, and yeah, our friend here was also able to write a blog post about it. So yeah, in this post GPS talks about some background with Source engine, as well as using Frida for testing on both the client and server side. I won't go into that here, but it's definitely a good read that you should check out if you're looking at Frida for other projects, or you're just interested in what Frida could be used for. But yeah, Frida is an insanely useful tool. I'm actually surprised that it's mostly associated with mobile. I don't think I've... I've used it a couple of times on mobile, but I've mostly used it actually for Windows apps. So yes, good background. It's funny how much Frida is used in the mobile space, though, compared to everywhere else. But yeah, so he goes into detail on two vulnerabilities he discovered and exploited in Source engine. Both vulnerabilities were out-of-bounds writes... the first out of bounds, or sorry, out-of-bounds reads, not writes. The first out-of-bounds read can allow you to get an arbitrary pointer to an IClientNetworkable object, and that's because they did not properly check a user-provided index into the global table of them, basically. I think they check... it's an integer truncation issue.
So it'll take the lower 32 bits and check those, but it won't check the upper 32 bits, and it's a full 64-bit index. So because of that bad check, which is apparently fairly common across multiple message handlers for Source engine, you can get around that check. Yeah, so I think the very first one, like getting that IClientNetworkable object... that one just had a couple of asserts that end up getting compiled out. So it did check the max number, and it did check to make sure it was greater than 0, but since they're in asserts, production builds don't include that. I think what you were getting into is the second one. Actually, yeah, you're right, sorry, I did get them confused between the first and second issue. This one was just straight-up exploitable, so it's a little bit more of a meme issue. So yeah, it basically allows you to craft a pointer, and you can fake an IClientNetworkable object anywhere that you can... like, if you can set up data that you control on the heap, or the stack, whatever, and you can get a pointer to there, you can craft a fake object, which means you can get code execution, because that object has a vtable inside of it, and you can set up fake objects with server-side convars. That on its own, though, couldn't be full-chained, because there is ASLR in play. So there was an info leak needed. And even if you could use a non-ASLR module, and I think CS:GO has one, xinput, you do still need an info leak, because there are multiple layers of indirection: you have to be able to fake the vtable too, and be able to find where that's located. So yeah, that's where the second bug comes into play, which is the info leak. It sounded like, actually, the convars were stored in a static global. He made it sound like the location of that wasn't too variable, so he could have used the xinput route if he wanted to, but didn't, because... he touches on it when he gets to the ROP a bit.
He could have done it using xinput.dll but didn't, because he wanted this to work on all Source games. But it does seem like he's got a fairly reasonable way to access where that variable is set up and know where it's located; it seems like it's fairly consistent. So he did state that he wanted it to work on both CS:GO and TF2 and whatnot. That said, when I talked to him a little bit, I think he mentioned that he still needed an info leak regardless for that bug, even if he just wanted to target CS:GO. Okay, then I misread that. Yeah, no worries. So yeah, that's where the info leak comes into play. It's a parsing bug when parsing the size of a zip file when it gets uploaded to a server. The variable used for tracking the size is a signed integer, and the size check compares that signed value. So you can kind of guess where that's going: by setting a negative size, you can get out-of-bounds stack content sent to the server. Now, it is a little bit weird when it comes to how that attack works, because it's not an intended behavior; zi and I were actually a little bit at odds before we started about exactly what's going on here. Because there's kind of zip-ception happening here: basically, when you connect to a server in CS:GO, like a surf server or whatever, you get a map file sent to you, and that map file can contain a bunch of different stuff, including its own zip files inside of it, which are pack files that contain game assets. Now, this info leak was vectored through the pack files that are inside of the map file. So there's a little bit of inception going on there, with levels of embedded files, I guess. But I mean, I don't know what the structure of the actual map files is like. I mean, zip makes sense, I don't say anything against it, but he makes mention that the pack files are used as basically an overlay, where you provide the pack file, and when it looks up, like, if it needs to grab some file, like a texture,
it'll look and see if it's in the pack file first. If it's there, it returns it from there rather than giving the original version of it. So I assume it's being used to, like, override assets that are provided by CS:GO, by the game itself, or that are part of the original map, and the pack file just overlays the original map. I'm not exactly sure how the map system works. That said, the actual attack is in that file transfer, where the server can ask the client to upload a file, and even though the client is getting that pack file from the server, if the server asks for a file that exists inside the pack file, it still respects that pack file and still sends it out from there, which is why it seems like a little bit of a weird practice. I mean, the server knows what it's sending out, or should know what it sent out. But regardless, when it's reading that, it's going to do the size compare to see, like, hey, is this file too big for me to even send? It doesn't want to send anything more than whatever size, as Specter just said, and it does that as a signed comparison. So you set a negative size in the pack file, and it'll think it's less than the expected size. So when it goes to read it, it'll read out, like, the four gigabytes or whatever. I believe this is 32-bit, so it'll read out the whole 4 gigabytes, off the stack or wherever, and that will give you the information leak that you need, and a very good one at that. So a lot of things are leaked there. Yeah, a very interesting exploit. Info leaking in games is the biggest challenge. I don't know how much we've really talked about it, because we haven't been able to talk about Source engine while Valve was sitting on the bugs for so long. But the biggest challenge of Source engine isn't really finding memory corruption bugs, because Source engine is very old.
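A quick Python model of that signed size comparison. The limit and field width here are made up for illustration (the real code is in the engine's C++), but the shape of the bug is the same: a size read into a signed 32-bit value sails under any maximum once it goes negative.

```python
import ctypes

MAX_UPLOAD = 0x10000  # hypothetical "too big to send" threshold

def passes_size_check(declared_size: int) -> bool:
    # The size from the pack file header lands in a signed 32-bit int
    # before the comparison, so 0xFFFFFFFF becomes -1 and is "smaller"
    # than any sane limit, even though the later read treats it as huge.
    signed = ctypes.c_int32(declared_size).value
    return signed <= MAX_UPLOAD

assert passes_size_check(0x1000)             # normal file: allowed
assert not passes_size_check(0x20000)        # honest oversize: rejected
assert passes_size_check(0xFFFFFFFF)         # "negative" size: slips through
```

Once the check passes, the code that actually performs the read uses the size as an unsigned length, which is where the multi-gigabyte out-of-bounds read comes from.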
It goes back to, like, the early 2000s. You can find a lot of memory issues in it, and even issues that you could take to code execution. The problem, though, is the fact that ASLR is in play, which means that you have to be able to bypass ASLR, and info leaking is just so difficult to do in games, because most of the bugs you're going to find are going to be parsing bugs, so your info leak vectors are very limited. So it's something that kind of kills Source attacks more often than not, I guess, unless you put in the effort to really try to find those info leaks. So it's cool to be able to talk about one when they're kind of rare in that regard. So yeah, the exploit is cool, but the Frida and setup background information here is what I found the most interesting personally, just thinking about how that could be applied to other targets and whatnot. I think that little background excerpt... it wasn't super long, but the bit that's there, I think, was kind of a good advertisement for Frida in a good way, you know what I mean? It was kind of pushing Frida and why it can be useful and why people should start taking advantage of it. So yeah, I think that was what I found the most interesting personally from the article, but overall just a really good read all the way through. For anybody who is interested in binary exploitation, and Source engine specifically, it's a great post for that, because most of the Valve issues that we've seen have just been the HackerOne reports, and sometimes we don't even get the actual report details, we get like a summary. So yeah, I haven't really seen many blog posts on Source engine; GPS is the only person I've seen who's done that. And I will quickly shout out as well, this is a part two; he has done a previous post on Source engine before as well. I think it was posted in 2019. So there's another thing you could read, too.
If you are really interested... and if you read that, you've read his entire blog; he has two posts. Yeah, not lots of posts, but they're very good posts, you know, he's kind of going for that quality over quantity. So yeah, with that said, we'll move into our next Valve issue, which is a HackerOne report. This is a submission from sliding back, which is another out-of-bounds read in Source engine. This is what I kind of got confused with on the last report. This is the report where they discovered an anti-pattern where, for the indexes, they check the lower 32 bits but not the upper 32 bits, and that's because of the integer truncation issue: they're just storing the unsigned long long into an integer. So when they do that check, it's kind of pointless. Well, it's not, because if the attacker only exceeds those lower 32 bits it can still block it, but it can be worked around. I don't know what you're talking about. Oh, so what I see here in this bug, and maybe you can go ahead and explain what I'm missing here: they're grabbing the message entity index, yeah, line five, and then they check the lower bound of it, that it's greater than 0, and then they access it. So you can access it if you provide any number larger than 0, even larger than the number of entries in the array. Like, I don't see any truncation happening here. It's just the fact that they don't check if it's less than 0, and they're using a signed integer. Okay, so let me just look at the code really quick, because I was under the impression that they were using a full 64-bit size, or index, sorry, and they were only checking the lower bits. So I need to reconfirm here; maybe I saw the same in some of the comments. I only read the report. Yeah, sorry, I'm just trying to look really quick. Okay. Hmm. Okay, maybe I was getting that confused with a different report that we dropped, maybe, or something. I don't remember which report, and it wasn't that either. So...
I'm legitimately not sure what you're talking about. Maybe I'm just crazy, that's probably what it is. Maybe you're talking about your own 0day. Maybe I found an 0day and I just dropped it on stream. Okay, so sorry, you're right, this is another, like, signedness issue. So yeah, I thought they had a problem with the... okay, whatever, I must just be wild. All right. So yeah, the issue is actually what zi pointed out: the fact that the index is signed, and they just kind of use it there, and they try to do the zero check. But obviously... well, so because they check that, you can't access a negative index into the array, but you can access beyond the end of the array. Like, it doesn't check the upper end of the bound. Yeah. So in the report, they abused the glow prop turn off message; they state that this problem exists in other messages too, though. So, very much like GPS's exploit, this can be used to craft a fake object in the entity cache, and again, this has a vtable that they can fake to get code execution when getting the base entity. The report does get a little bit weird when it comes to the info leak. So yeah, for those who don't know the specifics around Valve's bounty program: because of what I stated earlier, with memory corruption being so much more common than the info leaks, a couple of years ago Valve stopped taking memory corruption issues on their own. They would just stop accepting them; they would just be like, yeah, we don't care, closed. You had to submit a full chain with an ASLR defeat. So that's kind of what happened in this issue. They're like, we're not going to accept this unless you can full-chain it. So the researcher did eventually go and find an info leak and report it. I can't talk too much on that, though, because that info leak report is private. So yeah, can't talk too much on that.
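To illustrate the anti-pattern zi described, a signed index checked only against its lower bound, here's a minimal Python model. The table layout and names are invented; only the shape of the broken check matches the report.

```python
import ctypes

TABLE_SIZE = 32

# Simulated process memory: the real table is the first TABLE_SIZE
# entries; the tail stands in for adjacent attacker-influenced memory.
memory = [f"entity{i}" for i in range(TABLE_SIZE)] + ["attacker-pointer"]

def lookup(index: int):
    # Model of the pre-fix handler: the index lives in a signed 32-bit
    # int and is only compared against the lower bound, so any positive
    # value sails past the check.
    idx = ctypes.c_int32(index).value
    if idx <= 0:            # the only check the handler performed
        return None
    return memory[idx]      # no `idx < TABLE_SIZE` comparison

assert lookup(5) == "entity5"
# An index just past the table passes the check and reads adjacent memory.
assert lookup(TABLE_SIZE) == "attacker-pointer"
assert lookup(-1) is None
```

In the real exploit, that out-of-bounds slot would hold a pointer to attacker-controlled data, which is what makes the fake-object-with-vtable trick work.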
It is a bit weird, though, because the info leak required the client to enable the sv uploads parameter, which is quite a big stipulation, because that variable is not enabled by default. So the info leak wouldn't be able to work unless the client had set that themselves, or set some other variable that was kind of tied to it; I think sv_cheats might have been tied to it. But anyway, they did get paid out by Valve. It took over a year for this issue to be resolved. The issue was reported February 29th of 2020, by the way, and the info leak was found about 10 months ago. It was then triaged by the HackerOne triage team six months ago, and then it was fixed by Valve a few months ago in the beta branch. We've already criticized Valve on the response times there, so I'm not going to go too much into that. They did get the full seven thousand five hundred dollar payout, though, which is now the max for critical issues; it used to be 15K, but they've changed it down to seven and a half. Yeah. So it's funny, because a couple of weeks ago, and I kind of mentioned this at the start of the last topic, we were talking about Valve and we were kind of hard on them, we were hammering on them quite a bit. I personally said that there was some pressure from, like, Secret Club to get these issues fixed. They had an issue that was unfixed for over two years, and I was kind of thinking, like, Valve has never really cared that much about the PR angle. They've always kind of just kept it as a thing in the background and just tuned it out. So I really didn't think they would care about these vulnerabilities and getting them fixed. I thought it was just going to keep going on, being in the bad state that it was in. But honestly, I was wrong. We don't know what Valve is going to continue to do. They might have done this because they were getting heat for it, and we don't know if they're going to continue to do it.
You know, their history is that they're going to ignore them, they can be awful. I mean, it's a positive step, it's a good reaction, but we do need to see it continued. Yeah, I don't want to go too heavy on praising Valve, because like you said, we've got to see how it goes in the future. These did seem very rushed. I mean, like I said, we saw I think five or six different Valve reports that were all fixed in like a week. Yeah, I actually checked: we had eight hit 0day Fans, and there were actually even more than that just on Valve, since 0day Fans only reports, like, high and critical. Yeah, so Valve went crazy fixing issues. They went on a spree and are still going, it looks like. There was one just 15 minutes before the podcast, and even with that one, it's from 2019, so they've still got quite a backlog. Yeah. So I sent a tweet out saying, you know, we do have to give Valve some credit where it's due; at least they did fix these issues that have been sitting in the backlog. Like you said, though, we've got to see how that continues into the future. I don't hold out too much optimism, honestly, because the way that things work at Valve is people just work on what they want to work on, for the most part. So it's kind of like, if somebody doesn't feel like fixing issues, then they're just not going to get fixed. So yeah, I don't hold out much optimism, but that said, we'll only be able to tell, we'll see. I mean, looking at it on the whole, it's a positive step, I don't want to take that away from them. It's a good step; I just have questions over whether they're going to keep doing it. Yeah, Phil cat from chat says, maybe there's a new hire at Valve who's got that new-company excitement. And I mean, that could be true. Maybe they just hired a security person and they were enthusiastically going at it. That's fun to picture in my head. But yeah, obviously we're kind of in speculation land on what's going to happen with that.
All right, so we have a Check Point post this week on Qualcomm's Mobile Station Modem SoCs, specifically focusing on the Qualcomm MSM Interface (QMI), which is used for Android. A lot of the post is background on QMI and the services that are provided on the protocol stack; the Pixel 4 is what they use for the article. So things like the network access service, the authentication service, the voice service, and there's a bunch of other ones too. They also went into some detail about how they fuzzed that interface. Basically, they identified the handler routines for the various messages used for IPC/RPC, and then they reversed the related structures, stuff like that. Then they used QEMU hexagon to emulate the modem, and used AFL to fuzz the interface. I thought that setup was kind of neat and was definitely worth covering. In their blog post, near the end, they talk about one of the vulnerabilities they discovered with that fuzzer they set up, being a heap overflow in the voice service, the voice service's call config request handler, I believe; I just want to double-check to confirm that. Yeah, so two of the fields that are taken from that message are the number of calls to make, which is taken as one byte, and then an array of call contexts, which are 0x160 bytes per call. The problem is, you can just provide an arbitrary number of calls; there's no check to limit how many calls you can invoke. They don't seem to state exactly how many you should be able to invoke, just the fact that you can invoke up to 255 of them if you wanted to. So, because it goes to operate on the call context attached to each call, which is usually adjacent memory, it ends up accessing way out of bounds and can smash areas of the heap. Because of how you can control how far out of bounds you write, as well as the contents written, this is a pretty powerful heap corruption. Now, at first I was thinking it's very likely
you could take this to code execution. They don't really demonstrate that, they just stopped at the crash, by the way. But I don't know, it is a little bit hard to say, because you are smashing data in 0x160-byte chunks. I think you probably could take it to code execution, but I don't want to say for sure, because I don't know all the context around the issue, right? So it's hard to speculate. They do note that QMI is protected behind privileges, so it's not like you can just hit it from the Chrome sandbox or even untrusted users, but you can hit it from some of the more trusted services, like the media service, for example. So it's definitely still an issue and is viable for privilege escalation, but it is kind of mitigated by the fact that it is behind at least some privileges. So just wanted to mention that. In terms of timeline, the bug was reported October 8th and fixed sometime in February. Mostly, the bug here is pretty boring and we don't have too many details on it. Like I said, they just stopped at the crash and didn't really go any further. I think the takeaway from the blog post was probably the setup. Like I said, I thought that was neat, using QEMU to virtualize the modem and then setting up a fuzzing harness on that. So yeah. Yeah, I have kind of a soft spot for getting some background information on those types of setups, so that's what I found the most enjoyable. But yeah, the other thing that's cool about this is Qualcomm is used a lot. It's pretty much used by every Android device in North America, as far as I know, because basically you have Qualcomm and you have Exynos, right, which is used in, like, Europe or wherever. So yeah, the impact of this issue, even though it is behind privileges... this can hit a lot of devices, so the impact there is still pretty high. But yeah, overall lots of cool background information, not overburdening on the background info; it feels like it
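The arithmetic of the overflow is simple enough to sketch. The per-call context size is from the write-up; the buffer capacity is a made-up number, since the post doesn't say how many call contexts the heap allocation actually holds.

```python
CALL_CONTEXT_SIZE = 0x160   # bytes per call context, per the write-up
ALLOCATED_CALLS = 16        # hypothetical capacity of the heap buffer

def bytes_written(number_of_calls: int) -> int:
    # number_of_calls comes from a single byte in the QMI message and
    # is used as a loop bound with no upper-bound check.
    assert 0 <= number_of_calls <= 0xFF
    return number_of_calls * CALL_CONTEXT_SIZE

capacity = ALLOCATED_CALLS * CALL_CONTEXT_SIZE
overflow = bytes_written(0xFF) - capacity   # bytes smashed past the buffer
```

With 255 calls at 0x160 bytes each, the handler walks tens of kilobytes of adjacent heap, which is why the researchers call it a powerful corruption primitive even though they stopped at the crash.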
really hit the sweet spot of providing info without, you know, bombarding you with it. So yeah. All right, we'll move on to Talos. Talos appears again this week. This time the vulnerability report covers a use-after-free in Foxit Reader, which is a PDF reader, because PDFs are broken with how powerful they are. Readers commonly use a JavaScript engine, which is a really nice bug farm for exploiting them, and as you can guess, this vulnerability has to do with the JavaScript engine, particularly when it comes to how annotations are handled. So, one of the things you can do in PDFs is set up these page handlers, page close handlers. So when the page goes to get closed, it can execute your callback before exiting, and that's presumably for any cleanup or whatever. Well, with the file attachment annotation, which is another thing that PDF supports, it'll prompt the user to select a file, which effectively blocks the main thread. So by adding a close handler and combining it with the annotation and switching the page, you can essentially get the backing object for that annotation freed while it's still being used, when the dialog box gets closed. I don't believe you need to add a close handler at all; like, I don't believe that's a user-provided callback. You can see in the code they actually provide for the crash: it's four lines, one of which is the function main and another is a closing bracket, so it's really just two lines of actual code that matter. And what happens is they do this.pageNum = 1, which changes the page; they're on page 0, presumably, I think they actually mention that, but yeah, it changes the current page from 0 to 1, and then adds a file attachment. So on the change from 0 to 1, that's when it's going to queue up the page close handler. So, just in general, it's not some code that's in a user-provided callback thing.
It's just its own handler for, like, a page being closed or no longer visible. Changing the page queues up that handler on a separate thread. And then the file attachment in particular... this is similar to, if you've ever done some JavaScript development and called, like, alert or a dialog or prompt: you'll notice that any code you've written after that function call doesn't get run until you actually close the dialog. It's the same thing with the file attachment here; it ends up blocking the main thread it's called from. But that close handler that was set up was just queued up to be called elsewhere, on a background thread. So that's where the close handler gets called. But this page annotation's been added to the current this, so it's being added on page 0 initially; the actual annotation itself targets page 1, which they provide. But when page 0 is closing, it's going to see that annotation, try to close it, and destroy it out from underneath the blocked main thread. But yeah, you don't have to set up the close handler; that's just how it kind of works in general. It's just the fact that the file attachment blocks on main for a while, long enough to let the close handler run and delete the annotation under it, or free the annotation under it. Okay, sorry, the way I read it, for some reason I thought you were allowed to set that close handler and that's what they were abusing. That said, when I was reading it and I saw those code snippets, I kind of noticed the same thing, that they didn't explicitly have that in there where they set up that close handler. So I was a little bit confused on that, and that kind of clarifies my confusion. So yeah, my bad. And I guess the close handler just kind of runs on its own... it's an awesome thing to find. Okay. Yeah, to be clear also, they do get code execution, because there are vtables with this object.
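As a rough model of the sequencing (this is a toy Python illustration of the race, not Foxit's actual API): the page switch queues the close handler on another thread, and the file-attachment dialog blocks the main thread long enough for that handler to free the annotation the main thread is still holding.

```python
import threading

class Annotation:
    def __init__(self):
        self.alive = True   # stands in for the backing C++ object

def page_close_handler(annot):
    # Queued by the engine when the page changes; runs on a background
    # thread and destroys annotations belonging to the closed page.
    annot.alive = False

annot = Annotation()

# `this.pageNum = 1` -> the engine queues the close handler elsewhere.
handler = threading.Thread(target=page_close_handler, args=(annot,))
handler.start()

# The file-attachment dialog blocks the main thread; join() stands in
# for that blocking window, during which the queued handler runs.
handler.join()

# Back on the "main thread", the reference is now dangling: the
# use-after-free window, which in the real bug reaches a freed vtable.
```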
So with the use-after-free you do get control flow hijacking. We talked about vtables a lot last episode, huh? I mean, it's pretty fundamental to C++ code. Yeah, I feel like we're talking about more issues in C++ targets than normal anyway. Yeah, I mean, as is the case with Talos, we mostly get a little bit of background info on the issue, some snippets of the crash, but it doesn't go too deep into the exploit process; they focus more on the vuln research angle. All right, so up next we have a very cool target that we've never really seen on the show before. It's a software-based rasterizer for 3D graphics. So we've covered some graphics drivers before, but this article is around llvmpipe from Mesa. It's something you can use on Linux if you don't have a GPU. So it's cool to see, because I did some preliminary looking into using this driver for some PS4 work, actually, just because of the fact that the PS4 GPU is so annoying to interact with. It was like, can we do software-based rendering? So we did do some preliminary investigations into using llvmpipe. So yeah, it's cool to be able to cover an issue in it. Anyway, in this article they talk about a stack-based out-of-bounds write when doing shader translation. When parsing the shader, it will take any data structures that are used in that shader, and it just places them on the stack. It doesn't, like, allocate them on the heap or anything like that, which in normal cases is probably fine; a shader shouldn't be using, like, a ton of space. But what happens when it does, right? What happens when something is placed on there that's so large that it exceeds the size of the stack? Well, you end up with an issue where you could hit adjacent stack pages, or in the initial case here, an adjacent guard page. So the issue is fairly straightforward. The exploit, though... they had to take care to work around a few issues.
Like I said, the initial problem they had was that the shader that triggered the bug wouldn't work, because you would end up hitting a guard page when it tried to initialize the array, which it does automatically at the time of allocation. So that would trigger a crash when it went to write the zero there, which isn't really too useful at all. So they had to craft a shader file in a manner that would cause the array initialization to be skipped, which they do by gating it behind an if statement and carefully choosing offsets that will get control over the instruction pointer, or the program counter. They did hit some additional issues as well. So — I think it's really important to talk about why they're gating it. Okay. And that's why, at least I thought, this was a really interesting part of their attack: when is it happening, that zero initialization Specter mentioned? I should mention this shader code gets translated into more of a C code, and they inject the initialization, which is a for loop that just iterates over the entire array setting everything to zero. It only does that initialization, though, on the declaration of the array, which is what gets gated off. But it actually creates the stack space for it and sets up the stack frame for such a large array right at the start of the function. It doesn't change the stack frame size while the function runs — it's like old-school C in that way. So because of that, that's why, by gating it off, they're able to basically make that initialization never happen, but it's still taking up all the space on the stack, so another variable can exist after it on the stack and actually be kind of in this other frame. So I just thought that was kind of a cool trick to go about it, so I wanted to call that out. Yeah, sure. They did have some additional problems, though — like they had to work around AVX being used to perform the writes.
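To make the gating point concrete, here's a loose Python model of the shape of the translated C. Everything here is invented (Python has no real stack frames); the point is only that the "frame" is fully sized at function entry, so gating off the zero-init loop skips the guard-page-touching writes while a later variable still lands past the huge, untouched array:

```python
def translated_shader(run_init, array_len=8):
    # The C the translator emits reserves the whole frame at function
    # entry, old-school style: the big array's slots exist regardless
    # of whether the init loop below ever executes.
    frame = ["uninit"] * (array_len + 1)   # big array + one later variable

    if run_init:
        # The injected zero-initialization loop. In the real bug this
        # walk is what faulted on the guard page, so the exploit shader
        # gates it behind a condition that is never true.
        for i in range(array_len):
            frame[i] = 0

    # A variable declared after the array still gets a slot past it,
    # landing in whatever memory follows the (never written) array.
    frame[array_len] = "attacker-reachable"
    return frame

print(translated_shader(run_init=False))
```

With `run_init=False` the array slots are never touched — no guard-page fault — yet the trailing variable still sits at the far end of the oversized reservation, which is the layout the exploit relies on.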
Obviously, when you're doing software-based rasterization, you have to be very performant. So yeah, AVX is a way to quickly write a lot of data. That, I believe, they fixed by using a vertex shader instead of a fragment shader. Now, I'm not a game dev, so I'm a little bit out of my area here, but I believe fragment shaders are used for, like, the color information — like color bitmaps — and I think the vertex shaders are used for drawing the lines of the objects, the object frames, I guess. So because of the way those two different shader types were handled, by just switching the type of shader they were using, they were able to get around AVX being a problem. But yeah, like, I never really expected to find a blog post that talked about hitting something like LLVMpipe and exploiting through shaders. It's a very interesting attack surface that I just haven't really seen much about. But it's one that you would think about pretty easily, because, I mean, shaders are a language, right? It's not as powerful as languages like JavaScript or something, like when you hit browsers, but it is something that gets parsed and translated into code doing, like, memory accesses. And so it totally makes sense as an attack surface. It's just weird to me that I haven't seen anything, or any blog posts talking about it, until now. Maybe that's just because it's kind of an obscure area and it hasn't popped up on my radar. But yeah, I mean, at least personally, this was, like, a first for me to see. Yeah, I thought it was an interesting post kind of for that reason. I think we've talked about one — maybe not in the shader, but we have talked about another GPU vulnerability before, I think. I guess technically this isn't GPU, because of where it is, but they do call out that actually getting RCE — a useful RCE — would be difficult, because the process is running within a sandbox. And in this case, ASLR — and getting an info leak, there's no clear way to get that with this bug.
Yeah, when you're talking about the previous GPU issues we covered, the one that comes to mind is, like, the Adreno GPU on Android. I think there was a Project Zero post on that, so that's another, like, cool post people should check out if they find GPUs fascinating. Just to take one from the chat — they said: vertex is for the data related to vertices (color, position, normal); fragment is for "how do I draw a pixel at 10,10?" Yeah, so I guess the fragment shaders are more for drawing — like, if you had a cube, they'd be more for the surfaces, where the vertex ones are for, like, the edges and vertices. All right, makes sense. Thanks for that clarification. Yeah. With that said, we'll move on to our last topic of the episode, which is a privilege escalation in win32k. So zi, I'll let you take this one away, because this was one that I didn't have time to get a thorough reading on today. Yeah. So we've actually talked about DirectComposition before — I feel like it was only a couple episodes ago. I should have actually taken note of where we talked about it. Do you happen to recall? Actually, I did take note of it: episode 73, we have another DirectComposition one. So the basic idea with DirectComposition — it was an API introduced in Windows 8, the idea being you could send several commands over and basically buffer them, or batch them together, making just the one call to process a batch buffer, which would process several commands, rather than needing to make each one in its own call. They mentioned something about this being presented at CanSecWest 2017. So — oh no, I said Windows 8; for some reason I had 2018 in my head, so never mind that. Anyway, the vulnerability here — I could see some similarities with the last one we covered, but this one basically comes down to: in DirectComposition, you can create these tracker objects.
They don't go into the details of what tracker objects really are or what they're tracking — I don't actually know, because I haven't used this API — but you have tracker objects, and you have a tracker binding manager, or tracker binding manager marshaler, or something; a long class name here, which makes it a little bit annoying to read. The binding manager will track pairs of the trackers, so it'll just have, like, a list of, like, tracker one, tracker two, and some ID to reference that tracking pair. And you can have multiple trackers being tracked by the binding manager. So the first vulnerability — this is actually a 2020 vulnerability, and then the issue discovered more recently in 2021 is a bypass for the fix. So I'm going to talk about kind of the original vulnerability. And I should shout out — there's a lot of background on DirectComposition here too, just on how it works in general, a lot better I think than the last post we covered on DirectComposition. So if that is an area of interest — it does seem like it's a bit of a bug farm right now, and this is running with kernel privileges on Windows, so fairly useful. Er, there's a kernel and a user side; it's running privileged, though, so it is a worthwhile attack surface. So as I was saying, it will track, basically, tracker one, tracker two — it'll track these pairs of trackers; I'm confusing myself with the language. One of the things that happens is that if you free a tracker that's bound in one of these pairs, it'll free both of the trackers. So that's kind of what the binding does. So when you free the one tracker, it frees the other, and it will free its entry from the actual binding manager itself — it's freeing the entry in the binding manager. That's kind of interesting. So what ends up happening is, when you add a tracking pair, it will of course kind of create this list entry object with the two trackers in there, and give it a tracker ID — or give it an entry ID, sorry.
And then it will go into each of the trackers and set this single object — the binding-manager object field, some binder-underscore-something — and it will point that over to its binder. So you kind of have a reference back: when you free it, it'll use that reference to find its entry, find its partner, anything like that. So where the vulnerability happens is in the fact that if you were to assign a tracker to multiple different binding managers, it only stores the reference to the one binding manager, and it will overwrite kind of the old one. It doesn't actually do any check there to see what happened to the old one or if it's still valid — it'll just let you rebind it. So when you end up freeing the binding manager's entry, it only frees the one. In the graphic they've got on screen here — tracking binder A and tracking binder B — if you free it on the B side, it only ends up freeing the entry on B and leaves a dangling pointer in A, because it doesn't know about A. It doesn't even look at that binding manager to see what it should free. So that's basically what they take advantage of here, getting a use-after-free, because the first binder still stores a reference. In theory you could also do multiple managers, like three, four, five, and all of them will be left with dangling pointers when just the one gets freed. They fixed this effectively just by checking that binding object field that I mentioned — just check if it's null. If it is null, it'll go forward with actually adding it into the array; if it's not null, it'll fail and, you know, won't add it — it's already bound somewhere else. Makes sense as a fix. The issue is — I should have mentioned this earlier — when you free a tracker, it kind of does it in two stages.
When you free the tracker, it will find its entry, iterate over all the trackers in the tracker list, and it will set the entry ID to zero, and then it calls this other function, clean-up-list-items-pending-deletion. So basically, anything with an entry_id of zero, it's saying those need to be deleted, and then this other function actually goes and does that deletion. So the way they were able to get around the patch was that they could actually trigger an update of the entry ID, because that is user-provided. So they would just set the entry ID to zero, and that would end up triggering — you can see in the code here — it will end up triggering this check, this path, which effectively ends up calling not the entire delete that we just saw earlier, but it will end up calling the remove-binding-manager-reference function. So it'll end up doing half of the process: it'll still remove the binding manager reference — it'll free the binding object that I mentioned, the one that's actually getting checked by the patch. It'll free that, set it to null, but it won't free the binding entry yet. Under normal circumstances that's going to be done, or it'll be called, after this. But because we're kind of just setting the object to ID zero, which is the edge case, or like a special value internally, it'll get freed, and you'll have access to the original vulnerability again. And they don't really go into the full exploitation of this — it's just kind of that vulnerability. You do have — like, it calls a function on that, so again, you're able to get a function call. So you do have EIP or RIP control from it, but they don't go into the actual, like, chain afterwards. Yeah, it's funny — not really related to what you were talking about, but you were just scrolling through on the stream, and I saw the one function name was so long that it was actually overflowing the blog area that they have there. That's got to be another one for the long function names.
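The walkthrough above — original bug, null-check patch, entry-ID-0 bypass — can be sketched as a toy model. Everything here (class names, the `binding_obj` field, the `entry_id == 0` special case) is paraphrased from the description, not the actual win32k code:

```python
class Tracker:
    def __init__(self, name):
        self.name = name
        self.binding_obj = None    # back-reference to one manager's entry

class BindingManager:
    def __init__(self, name):
        self.name = name
        self.entries = []          # list of [entry_id, tracker_a, tracker_b]

    def add_pair(self, entry_id, a, b, patched=False):
        if patched and (a.binding_obj or b.binding_obj):
            return False           # the fix: refuse already-bound trackers
        entry = [entry_id, a, b]
        self.entries.append(entry)
        a.binding_obj = b.binding_obj = (self, entry)   # overwrites old ref!
        return True

    def set_entry_id(self, entry, new_id):
        entry[0] = new_id
        if new_id == 0:
            # Special value: partial teardown. Clears the back-references
            # (the very field the patch checks) while leaving every
            # manager's entry list alone.
            for t in entry[1:]:
                t.binding_obj = None

def free_tracker(t):
    # Teardown only consults the single stored back-reference.
    if t.binding_obj:
        mgr, entry = t.binding_obj
        mgr.entries.remove(entry)
        for x in entry[1:]:
            x.binding_obj = None

# Original bug: rebinding t1 into B overwrites its back-reference, so
# freeing via B leaves A holding a dangling entry to t1.
t1, t2, t3 = Tracker("t1"), Tracker("t2"), Tracker("t3")
A, B = BindingManager("A"), BindingManager("B")
A.add_pair(1, t1, t2)
B.add_pair(2, t1, t3)
free_tracker(t1)
dangling = any(t1 in e for e in A.entries)
print("dangling entry in A:", dangling)

# Bypass: under the patch, set the entry ID to 0 first -- that nulls
# binding_obj, so the patched add_pair happily rebinds the tracker and
# the stale-entry situation is reachable again.
t4, t5, t6 = Tracker("t4"), Tracker("t5"), Tracker("t6")
A2, B2 = BindingManager("A2"), BindingManager("B2")
A2.add_pair(1, t4, t5, patched=True)
A2.set_entry_id(A2.entries[0], 0)      # user-controlled ID update
rebound = B2.add_pair(2, t4, t6, patched=True)
print("patched rebind allowed:", rebound)
```

The model simplifies the "free" to nulling a field, but it preserves the two facts that matter: teardown only looks at the one stored back-reference, and the ID-0 path clears exactly the field the patch checks without doing the rest of the deletion.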
There are actually, I think, two that go over. Yeah, this one — fortunately people watching can't see the length of this one, but there is one that's DirectComposition CInteractionTrackerBindingManagerMarshaler RemoveBindingManagerReferenceFromTrackerIfNecessary. That's just so outrageous, how long it is. Yeah. Actually, on the scaling — when I was actually reading this, um, I had the website scaled, and it actually trimmed those off, so I didn't even see those extra characters. But when you're in this sort of browser view, it's fine. It is a bug with their site, though, if you have too long of a function or class name, I guess. win32k hacking, back to the blog. It feels like a lot of Windows things have these, like, large names. Yeah, it's kind of a Microsoft philosophy to keep everything verbose, and kind of, like, a little bit of Hungarian notation — I see that in C, and I know with a lot of .NET stuff, you know, you see the Hungarian notation. Yeah, yeah. It's always nice to see those bypasses, though, when they attempt an initial fix and it just doesn't land. Yeah. They actually mention — I thought Project Zero had a statement, maybe it's a different post, well, I guess I can't find it in here — that mentioned, like, 25% of patches don't fully patch the issue. Sorry — 25% of 0-days having incomplete patches. That's interesting. I don't remember — we mustn't have covered that. I thought I saw it when I was taking notes for this episode, I thought it was here in this post, but I obviously don't see it. Fair enough. That seems higher than I would have expected. That's why I was thinking, like, yeah, I feel like I'd remember that figure. Since I'm not seeing it here in this post, it very well might not have existed, so let's not treat it as an actual quote. I might be crazy, just like my 32-bit and 64-bit mix-up.
All right. But, uh, plenty of craziness this episode. Yeah, something in the air. With that said, we'll move into our shout-out section. So zi, I know you have a shout-out — I'll let you go into that first, and then I'll get into mine. Yeah, so I had one shout-out here. It's somewhat of a big one: 21Nails, multiple vulnerabilities in Exim. We've talked about vulnerabilities in Exim before — I don't remember, I don't think it was Qualys that found them last time. Or no, it is Qualys. Yeah — anyway, this time, 21 vulnerabilities is a lot of vulnerabilities, and some of them are kind of interesting, actually. Like, there's some symlink abuse, using newline injection to get remote code execution, a stack-based buffer overflow just by having a long file name. There are a lot of vulnerabilities in here, a lot of really quick write-ups. Because there are so many, we weren't going to cover all of them; I figured I'd just leave it as a
zi
Out