Episode 36 - Zoom-ers, VM Escapes, and Pegasus Resurfaces
This transcript is automatically generated, there will be mistakes
Hello everyone. Welcome to another episode of the Day Zero podcast. You may notice we've picked up a fossil on our way to the podcast today: anti is back with us. So I'm Specter, with me is zi, and anti as well, which is nice to be able to say after a long while, though he is only with us for about 30 minutes or so; he's going to have to bail out and do some stuff. So we have him for a little bit, but not for the full episode. But we've got quite a content-packed episode today in terms of topics, so let's just get right into the first one. There was an article about Facebook trying to buy the Pegasus exploit — it was on AppleInsider, yeah, just to clarify
that. Yeah, so obviously this group, NSO, has been known to sell spyware in the past to nation-states and stuff like that. But the CEO of the NSO Group basically said that Facebook asked to purchase the rights to capabilities from Pegasus. So according to the CEO, Facebook was interested in Pegasus because they had their own app called Onavo Protect, which I think we covered before, back when we
covered Facebook Research, which was kind of related. So Onavo Protect was banned from Apple's App Store somewhere in 2019. So the timeline: this event, with Facebook reaching out to buy some functionality of Pegasus, happened in 2017; Onavo Protect was banned sometime last year, I think during the summer. It was also removed from the Google Play Store, which then led into Facebook Research, which was that VPN that they kind of targeted towards — or apparently, or allegedly, targeted towards — teens, and they would pay them like 20 bucks a month or whatever. Actually, they might have targeted 13-plus, not just like 13 to 18 — I'm not sure of the details on that — but it is slightly different in terms of what type of information it was tracking, things like that.
Yeah, so according to the CEO they had gone to buy capabilities of Pegasus, and apparently it was because they were worried that their app was less effective on iOS compared to Android. Which makes sense, because iOS is a lot more locked down in terms of privacy barriers and stuff like that than Android, typically. Facebook kind of contradicts this; they're saying the statement is meant to distract people from the fact that Facebook is suing NSO, which they started for NSO exploiting vulnerabilities in WhatsApp. But what I found interesting was they didn't outright deny the allegation. They just kind of said that, oh, this is just to distract you from the fact that we're suing them. But it's weird; they don't outright deny it.
Yeah. Well, I mean, it probably did happen. But the thing is, it's unrelated to the fact that they're suing NSO Group over the exploits in WhatsApp. Yeah, they're suing over one thing, and the fact that Facebook also wanted to buy it is unrelated. So, I mean, the fact that Facebook isn't denying it, I mean —
I mean, it's hard to give Facebook the benefit of the doubt after all the stuff they've been involved with.
Well, I don't think they need the benefit of the doubt. They very well could have tried to buy it. Yeah, and there's not necessarily anything wrong with that. People are immediately jumping to, "well, Facebook just tried to buy spyware." I think anti had some thoughts on this; that's why we brought him on.
Yeah. No, I mean, I guess, I don't know. I looked over the article, and I thought the two interesting parts were the timeline — you know, like 2017, when they first, it sounds like, approached them to buy the specific stuff for Onavo, or however you pronounce it, Protect. So I think what's kind of weird is the timing. I'd probably agree with Facebook in that NSO is probably doing this just because they're being sued, so it's perfect cannon fodder. But I think the weird part is, now that they're suing them — my weird twitchy response is that maybe, if I had to guess, this sounds like something Facebook was investigating. They probably heard about the exploits that are available and so on and so forth, and were like, well, how do we go and approach this group without seeming like we're investigating them? You know, so what better way than to kind of push for something that you're doing and say, hey,
We'd like to use this to see if
we can further our capability. You know, some part of me just believes that it was kind of a way to still leverage what they'd find from the NSO Group, but also be able to fill those kinds of holes in their capabilities. Does that kind of make sense?
Yeah. Well, and I kind of agree with you. It does seem like, you know, Facebook's a company that's already spending a lot of time trying to track you and trying to get information. Can you mute yourself, anti? We're getting some background noise. So, Facebook's already spending a lot of time trying to gather a lot of information, and no doubt they have a lot of very smart people trying to do so. So it does seem a little bit suspicious that they're coming to NSO Group to try and track users. It just feels like this is something that Facebook probably already knows how to do, and, as you were saying, there's probably another reason that led to them actually approaching NSO Group. I think it very well could have been related to them trying to get threat intelligence on what NSO Group was doing, rather than it being related to them actually wanting to utilize the techniques out of the malware.
So you think it's less malicious in nature than it
seems. I'm willing to give Facebook that benefit of the doubt, in the sense that I don't think Facebook was looking to just start spreading spyware. To me, that just doesn't really fit. And I'm not saying Facebook is this completely trustworthy company or anything; it's just that, as a legitimate company, it doesn't seem likely to me that they were looking to actually just start using spyware. And to be clear, they only asked to use parts of Pegasus, not the entire thing.
I mean, it's worth noting that they're also worth however many billions of dollars, you know, and their whole business model is built on — you know, not spying on users, but collecting data on their users. So I'd be surprised if they felt like they needed to go to a group like NSO Group to figure out ways to do that. It just seems a little silly. Not to say that it isn't something they could have done, but the more we look at it, the more it just seems like, why would they even need to go that far? It'd be a perfect guise, though, to be able to say, hey, for the privacy aspect, we can't get through some of these controls. But it is totally possible, to be fair as well, that they maybe didn't have that capability.
Yeah, I know but
I kind of feel like a lot of people are leaving Facebook with all the PR battles that they've had over the past couple of years, with all the stuff they've tried to pull — like with the VPN app, for example, that we were talking about earlier. I mean, I think they are having a harder time now collecting the data that they used to be able to collect, so it does seem possible that they would want to go that extra step to try to get more data, where they can't get as much now as they could
before. Yeah, but this contact was in 2017, though. This wasn't recent — they approached them back then. This was still when Facebook had a lot of access, because they still had their own Onavo Protect on the App Store, which gave them a ton of access. It was in 2019 that Apple changed their terms and basically stopped allowing Facebook to actually know information about what apps are running and all that detailed information that they would have wanted to collect. So it'd be different if this were more recent, but in 2017, I mean, they would have had a lot of this information just from the people already installing their application. And, I mean, could you imagine what would happen if it were found out that Facebook was legitimately just installing malware?
They've already done that.
It'd be big in terms of the outlook from pretty much everybody. Because if you find out, oh, Facebook's
just, you know, Facebook's
collecting a lot of information when you're using their application — there's, I want to say, implied consent when you start using Facebook. Obviously, there's a ton of websites that tend to include Facebook trackers, so there's a lot of tracking and information that they're able to get without that. But basically, as a legitimate company, I'm not expecting Facebook to go about breaking the law and actually jailbreaking some random person's devices, which is what Pegasus does: being sent a link, through the methods mentioned in the document, using Pegasus, results in the target device being jailbroken and the malware being loaded to monitor and steal data. It's one thing for Facebook to be collecting information; it's a completely different thing for Facebook to actually do so in a criminal manner. But I guess my question is: if they were doing this for threat intel, like you're saying, how does this really help them in any way? Like, what are they going to do with that?
Being aware of exploits being used against them would be one case.
but I guess this was already known like
Pegasus was known — not necessarily all the details — in 2017. I'd have to assume that they continued to update Pegasus. I can't imagine that they use the same exploits now that they used in 2017, or that they'd have been using the same exploits in 2017 that they were using in 2015 or 2016. I mean, this is something that NSO Group's continuing to sell; I have to assume that part of this process includes, you know, updates.
But those exploits don't affect Facebook. Like, that's what I was trying to say.
NSO Group was using attacks on WhatsApp, though, which does
involve them. Okay, yeah, that's true. Fair enough. I'm sorry, I thought you were going to say something. No, it's good. I think I was just about to say what he said, basically: that, you know, while it doesn't directly go after them, didn't they have the capability to scrape users' information from at least WhatsApp? And they may not have told the public their capabilities right away, so they may have had functionality to steal Facebook stuff, you know, other products belonging to Facebook. So, I mean, the simple part of it, though, is that I find it funny, weirdly enough, that I ever agree with Facebook on the fact that, you know, it's weird that NSO would throw this out there when they knew years ago that it was such an issue. I do find it weird, also, that if Facebook was trying to go and find more private information on their users — it's like going to protect your house with security cameras and then being like, actually, I'm going to go to the Colombian drug cartels and buy AK-47s to protect myself. Like, it's such a weird step to take. It doesn't make sense.
Yeah, I don't know. So, obviously, I don't do threat intelligence, so perhaps you'll correct me on this, but it feels like it would be kind of up their alley if, say, Facebook's internal threat team finds out that NSO Group may be doing something against Facebook. Now, I'm actually going to walk back my prior statement: it wouldn't make a lot of sense, if NSO Group actually had exploits targeting Facebook, for them to then sell those exploits to Facebook. So I'm going to walk back my statement on that — odds are they weren't necessarily looking for that, because if they were, they would probably approach NSO Group under the cover of some other
front. Sorry — no, they could have gone about it a billion ways. I mean, the idea of threat intelligence, right, is to stop threats, and some companies like Facebook are probably a little fast and loose with it. So they come up with some coy idea of, like, let's go and tell them it's just for this. I mean, if I had to guess how they wanted it to pan out, they were probably going to get in there first off. They may have taken the capabilities and may very well have used them to spy on users more and stuff like that. But I think it also could have been one of those things where, you know, if NSO Group signs up Facebook as a client, it's kind of awkward for them to then be offering functionality to steal their client's crap. Like, you can't be like, oh, well, now our latest version of this malware that we've created steals Facebook user accounts, or WhatsApp. It's probably a little more sketchy on that end. So it could have been just a weird roundabout way to get them to stop targeting Facebook and its other companies, if you think about it.
Could have been. I mean, the thing is, Facebook does have a lot of money to throw around, so that kind of ties in with, as you were saying, being a little bit fast and loose on that — we'll just come up with some idea and give it a try. I don't know how much money actually goes towards threat intel with Facebook, but given the size of Facebook, I wouldn't be surprised to find out there's a pretty substantial amount there. But honestly, on the whole idea, I do agree with you: the timing feels weird. NSO Group releasing that information — obviously it was released as part of the court documents, so it's not like NSO Group just leaked it out of nowhere. But I don't know; to me, looking at it from the outside — I'm not a lawyer, I don't know the intimate details here — it does feel more or less unrelated, only showing that maybe they had a client relationship with Facebook.
It seems like they just want to get a shot in at Facebook.
Yeah. Yeah, definitely.
Yeah, I mean, obviously a lot of the stuff I've been talking about is speculation; we don't know all the hard details. Maybe we will see more come out through that lawsuit in the future. I don't know, but yeah, it is kind of a weird story. Maybe we'll find out more later. That being said, I think we can move on to some Zoomer stuff. So Zoom has gotten some pretty bad PR over the last week, and this is the first of many courses related to our feast on Zoom in this episode. Part of the reason Zoom has gotten so much attention lately is due to self-isolation with, you know, current world events: it's being used a lot by companies and governments and lots of other places for meetings. Zoom has had security incidents in the past, so researchers have started looking at it more closely due to it being used so much lately, and these researchers discovered some issues. And that's because Zoom seems to have some disingenuous claims related to crypto that either don't seem to be true at all, or seem to be misleading.
Disingenuous, or possibly just a misunderstanding — like, written by somebody that doesn't understand that spot. Like, marketing may be doing a lot of that. And I come at it that way just because one of the things brought up in this is end-to-end encryption. Zoom has claims about their end-to-end encryption, and immediately following those claims, Zoom ends up talking about TLS being used. TLS is definitely crypto, but generally speaking, end-to-end is going to be apart from your standard TLS. I'm trying to find the picture I'm thinking of in the document, but —
But I think the other issue is only the key exchange happens over TLS. I don't even think the actual packet exchange happens over TLS. Am I wrong on that, or —
I'd have to look into it, but I have to imagine that Zoom does have a web interface, so that part is going to have to be over HTTPS.
So here's the diagram I was looking at. They have "observing a test Zoom call" — I don't know if it's scrolled down for you — but they have, like, "get key over TLS," but then for the meeting it says it's encrypted with AES over UDP, but —
Like, okay, so that's separate
from it. Yes, getting the key that way is not like end-to-end encryption.
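To make that distinction concrete — this is my own toy sketch, not Zoom's actual protocol and not anything from the article — the difference comes down to who can derive the media key. In the observed design, the server generates the AES key and hands it to each client over TLS, so the server can decrypt the meeting. In a genuine end-to-end design, the endpoints derive a key the relay never learns, for example with a Diffie-Hellman-style agreement:

```python
# Toy sketch (illustration only, not Zoom's real protocol): the
# difference between "server hands out the key over TLS" and a
# real end-to-end key agreement. Demo-sized parameters; never use
# this for real crypto.
import secrets

P = 0xFFFFFFFFFFFFFFC5  # largest prime below 2**64, demo only
G = 5

# --- Transport-encryption model (what the diagram shows) ---
# The server generates the meeting key and sends it to each client
# over TLS, so the server itself can decrypt the media.
server_key = secrets.token_bytes(16)
alice_key = server_key  # delivered over TLS
bob_key = server_key    # delivered over TLS; server still knows it

# --- End-to-end model: Diffie-Hellman-style agreement ---
# The relay only ever sees the public values A and B, never the
# private exponents, so it cannot feasibly derive the shared key.
a = secrets.randbelow(P - 2) + 1  # Alice's private value
b = secrets.randbelow(P - 2) + 1  # Bob's private value
A = pow(G, a, P)                  # public: relay may observe
B = pow(G, b, P)                  # public: relay may observe

alice_shared = pow(B, a, P)
bob_shared = pow(A, b, P)
assert alice_shared == bob_shared  # endpoints agree without the relay
```

The point of the sketch is just the trust model: in the first half, any party holding `server_key` (including the server) can decrypt; in the second half, the relay sees only `A` and `B` and would have to solve a discrete-log problem to recover the key.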
Yeah. No, their whole end-to-end encryption claim is definitely wrong, but it seems like it's just somebody that doesn't understand what end-to-end encryption actually is. And that could be intentionally misleading, or could not be. Other things that are mentioned in here: they claim they're using AES-256; in reality, it looks like they're using AES-128, and they're using ECB mode. If you're not familiar with ECB mode — and they actually have the classic picture in this article of what happens with ECB mode — it takes a key and basically just applies it to each block. So AES is a block cipher — I was going to say cipher block, but it's a block cipher. So what happens is, whatever your message is, whatever you want encoded, it gets split up into equal-sized blocks, and then you could apply the key in several different ways. You could, for example, apply the key to the first block and then feed the result into the following block and kind of chain them along. ECB mode basically takes the key and applies it to each block independently, so there's no connection between any of the blocks. So what ends up happening is, if you have patterns in your data — if two blocks are the same — the resulting ciphertext will be the same. That's only really an issue if you have data that's going to be repeating. The classic example is this very much uncompressed bitmap of the Tux penguin, and you'll notice, even when it's encrypted — if you're able to see our video — that you can kind of make out the outline of Tux. So you can see the original image, or parts of the original image, even though it's technically encrypted. That's why ECB mode is a problem. But the reality is, when we're talking about audio, odds are you're not going to have a lot of those sorts of completely repeating blocks. People are going to speak; the audio data is definitely going to be continually changing.
So there are things that could be pulled out. You can get information such as the cadence of packets, kind of the cadence of what's being spoken. There are definitely things that you can leak because of ECB mode, but the risk isn't huge in a lot of cases with this. Again, the problem with ECB is less ECB itself and more the fact that, if you're using it, odds are ECB is not the most significant problem in your crypto.
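The repeating-block leak described above can be shown in a few lines. This is a minimal sketch of my own, using a hash-based stand-in for a block cipher rather than real AES, just to show why ECB leaks patterns while an IV-chained mode does not:

```python
# Toy demonstration of the ECB pattern leak (my own sketch; the
# "cipher" below is a stand-in keyed function, NOT a secure cipher
# and not what Zoom uses).
import hashlib

BLOCK = 16

def toy_encrypt_block(key: bytes, block: bytes) -> bytes:
    # Deterministic keyed transform standing in for AES.
    return hashlib.sha256(key + block).digest()[:BLOCK]

def ecb_encrypt(key, blocks):
    # ECB: every block is encrypted independently, so equal
    # plaintext blocks always produce equal ciphertext blocks.
    return [toy_encrypt_block(key, b) for b in blocks]

def cbc_encrypt(key, iv, blocks):
    # CBC: each block is XORed with the previous ciphertext block
    # (starting from the IV) before encryption, hiding repetition.
    out, prev = [], iv
    for b in blocks:
        mixed = bytes(x ^ y for x, y in zip(b, prev))
        prev = toy_encrypt_block(key, mixed)
        out.append(prev)
    return out

key = b"0123456789abcdef"
iv = b"\x00" * BLOCK
blocks = [b"A" * BLOCK, b"A" * BLOCK]  # two identical plaintext blocks

ecb = ecb_encrypt(key, blocks)
cbc = cbc_encrypt(key, iv, blocks)
print(ecb[0] == ecb[1])  # identical ciphertext: the pattern leaks
print(cbc[0] == cbc[1])  # chaining breaks the repetition
```

Running it, the two ECB ciphertext blocks compare equal, while the CBC ones will essentially always differ — the Tux-penguin effect at block granularity.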
It's a code smell, versus the crypto
quibble. It's definitely an issue. It's generally an issue because it's the easiest block mode to use. Pretty much all the other options require some other settings, usually an initialization vector, which is just some starting value so that every encrypted instance ends up starting different — so they'll always kind of be different, even when the data is the same. So you'll use an IV for that; they need some further data. Whereas doing ECB mode, you need a key and you need the data. It's kind of that classic idea of what you think crypto is if you don't really understand how crypto works: you think all you need is the data and a key. So people who end up using it tend to be the type of people who don't really know the difference. They just know what they want — they want this crypto — therefore they use the easiest thing
available. Yeah, so there was also another issue that they said they discovered in the waiting room feature, but they're not disclosing it yet, because they want to wait until it's fixed due to the risk it could pose to users. But they said they are going to disclose it once it's been fixed and everything, so we may see that on a future episode, where we can go a little bit more into the specifics of that issue. I think the most worrying part of this article, though, was the shady findings where the keys were actually being sent through Chinese
servers, even when none of the meeting participants nor the company hosting the meeting were based in China. So they give an example of a test between a user from the US and one from Canada, and the key ended up making its way to one of the participants through a Beijing server. And what makes this really suspect is that Chinese law can make it so that Zoom is legally obligated to disclose those keys to Chinese authorities.
Which is the beauty of it, when you hear about the key going through there, and we know it's not end-to-end encrypted. And to be fair, I'll mention: I've used Zoom a fair bit, well prior to this, and I never even realized they made the claim of being end-to-end encrypted. Basically, they can't be end-to-end encrypted, and I will say that outright: they can't do E2E simply because you can call into it — like, literally phone-call into it. Because of that, there's no way you could do end-to-end encryption; Zoom needs to be able to decrypt it in order to let people call
in. Yeah, but considering Zoom is being used by governments and stuff like that, the fact that those keys are being routed through Chinese servers when nobody in those meetings is in China is, I think, probably the most worrisome part of the article.
Yeah. It does seem a little bit like just a poor routing choice, rather than necessarily a design to go through China. That said, it's absolutely sketchy. I mean, there could be a benign reason, and apparently Zoom has updated this such that it no longer happens.
Is there no way they could, like — I don't know — do a few more test cases, just to show, like, was it only one time? And then they showed, okay, it's going through China, or, you know —
That's the thing: I think they had to make multiple attempts until one actually went through China; it wasn't every key going through China. So they don't make their claim that strong.
Okay. All I'm saying is, like, you know, if it was every single key, then — big wow. That's —
Oh, I'd be way more concerned about it then. It'd be like, oops, did we send every key to ourselves? Damn it, we do that every time. You know, then I would have been like, all right.
Well, it seems like all the keys originated from Zoom, so it's not even them sending keys to themselves, because they generate the key.
Yeah. So, I mean, I feel like the most creepy part — I mean, if anyone wants a good chuckle to themselves, the threat intel twittersphere has fought itself over and over again the past couple of weeks about whether these things are truly bad, or whether it's a bunch of people just being a-holes, you know.
Because there's absolutely clout chasing when it comes to Zoom right
now. Yeah, we'll get into that a bit later. I'm sad that I can't be there for it, but I will say I've seen it all over the place. I do feel bad that — while I don't trust Zoom — my company even pushed out that we all had to uninstall it, and we migrated to something else. You know, it's kind of fascinating to see people pile on it. But at the end of the day, what else can you use at this point that has any consistency? Like, I know people mention Jitsi and others — I think that's the only one I've really heard of.
Curiously, what is it, Apache Open
Meetings? Oh, Apache — I mean, yeah, I'm sure that's a real crowd favorite nowadays. There's Google Hangouts, I mean, you know —
Well, I mean, if you're a consumer — exactly.
That's fine, you know. But I mean, which one's really worse? I did like the photo of Tux changed over to ECB mode — that's probably the only good example, but it's weak. But I did want to point out that it's a lot harder to reconstruct audio than it is a photo or something like that, which you kind of mentioned, zi.
Yeah, that's what I was getting at: with audio, this sort of attack isn't huge against the output. There are other pieces of metadata information you can pull out, but the key issue with ECB is really when you have this sort of repeating
data. Yeah, so I just thought that was a good point. And the one anecdotal thing that I thought was the best of all of it was Boris Johnson: him and his crew were having a nice call over Zoom, but he leaked the Zoom meeting ID to everyone. And, you know, besides all the Zoom-bombing nonsense, I thought that if they really are scraping all these keys and decrypting conversations, it must be fantastic just to have something so easy to target. That must have been a real joy.
Well, I believe the meeting ID was tweeted after the meeting actually happened. That said, obviously meeting IDs can be —
But I believe it was tweeted out by somebody else. Obviously, it wasn't actually tweeted out by Boris Johnson himself; it was put out by, yeah, whoever does social media, where it
doesn't have to be him. It's always, you know, somebody part of your crew — the one that tweets, you
know. I just mean, like, with that — it happened after the fact, after the meeting, right? I don't believe it happened during. Maybe I'm mistaken on that, but I remember it kind of coming up on its own.
Well, before I jump off here, I'll give my final resting statements on it so that you guys can crap on them later. Well — I did want to ask you one more thing, anti: would you be able to stay on for a few more minutes? Yeah, yeah. Okay, the people who kidnapped me will take the gun away.
Fine, okay. But yeah, you can give your finishing thoughts on that; I did have one other thing I wanted to bring up with a different topic as well. But yeah, you go ahead. Yeah, sure — no, I'll let you; if you have any other questions on any other topics before I go, I'm happy to answer them. Um, no, I was just going to say, you know, since people crap on them so much — Zoom, for a lot of reasons, sucks, and like you, I don't trust it. But at the same time, I do feel bad. I guess in New York the governor said schools can't use Zoom anymore because of the concerns. But I mean, of all the campaigns that I currently watch, I don't see anyone targeting Zoom the way that people are blowing it up. So I feel bad, genuinely. Even with the Chinese company ties, they probably are a little bit sketchy, but I don't know if they definitely deserve the widespread hate of, like, you know, "these guys are stealing your keys and decrypting your conversations." Like, I doubt they care, really, and I think for the most part they just didn't expect so many people to start using it. But I don't know, I just feel bad.
Yeah, as mentioned, they actually have a blog post with just their insane growth recently. I forget what the numbers are, but it was very significant usage now. That said, I will mention they own some Chinese companies, but I believe Zoom itself is a registered company in the U.S., though.
Yeah, they're a U.S. company and they own some Chinese companies. And so, you know, as with all analyses, if there's anything China in it, it's almost like it must be a Chinese APT group or something. Well, China hasn't — the Chinese government hasn't — given anybody any reason to give them the benefit of the doubt with anything they're connected with. I've never heard them do anything bad to anyone, okay? You can take that to the bank. Please don't kill me, no matter how much they get paid. But anyway, I'm going to do a last-minute topic, just because you do have to go here in a few minutes: I wanted to bring up the Mozilla advisory. So there's not too much to talk about here, because the technical details are actually locked off, but there were some security vulnerabilities in Firefox: one was a use-after-free through a race condition in the nsDocShell destructor, and the other one was another race condition yielding a use-after-free when handling a ReadableStream. So, you know, these are obviously both Firefox issues. We don't have the technical details, and I think those bug references are probably going to be locked off forever; I don't think Mozilla ever opens them up to the public. But what made this notable was they noted that there are targeted attacks in the wild abusing these vulnerabilities, and I thought that was interesting to see on Firefox, because typically when you see that notice on browsers, it's Safari or Chrome, since those are the default browsers on mobile operating systems. So I think it's interesting to see targeted attacks hitting Firefox, where Firefox doesn't necessarily have the greatest market share when you're talking about browsers. So I was wondering, anti: do you think this could possibly be targeted attacks on Tor users? Because I know Tor uses Firefox under the hood, right?
Yes, it does. Tor Browser does; technically you can use other browsers.
Yes, by default the Tor Browser that everyone gets is using Firefox as its base right now. So are you asking specifically if it's likely that this is towards Tor users, or just in general targeting people? Yeah, I was just wondering if you think that these attacks that are using these vulnerabilities could be going after Tor users — like, you know, let's say journalists potentially using Tor for anonymity, or something like that. Hell yeah, dude, are you kidding me? I mean — now, let me clarify first and foremost: I can't 100% say right now that that's happening, because obviously they're being very hush-hush about it. This happens every time with some of these: "well, there's a campaign out there." Like, what does that mean? Who is it? You know, so I don't know a hundred percent what that means in terms of this specific one. But it would be silly, in my eyes, for any nation-state — especially some of the more advanced ones targeting various individuals within the dark web who use Tor Browser and stuff like that — it would be almost nonsensical for them not to leverage that right now, assuming that's what this lends its capability towards, right: targeting users and exploiting them, which it sounds like it does. You know, if you look at most of — at least from my understanding — the targeting of Tor users, a lot of the exploits haven't even been sophisticated or relatively new, right? It's always been people using outdated browsers and stuff. I don't think, for the most part, you're going to see a lot of relatively new exploits getting dropped on Tor users; maybe attacking the service itself. But for this specifically, to answer the question: yeah, I mean, if it's something that targets Firefox, which is what Tor Browser is based off of, I would be surprised if they decided no — unless they've got so many exploits available.
But even then, again, it's just something to add to your arsenal, you know. So, 100%, it would be an easy addition to anything, assuming they can get the information out of it. We don't know enough about the campaign and what details are out there, so it may not even be applicable just yet, if it's something that was only partially observed,
you know. Yeah, which is, I think, a fair point. I mean, we can't know for sure who was being targeted with it. I think it's reasonable to say that the Tor Browser is one of the juicier targets if you are writing the 0day and using it in the wild. That said, Firefox users generally are also going to be a little bit more technically oriented, so it can also just be a positioning thing, where maybe the Firefox user happens to be a sysadmin with access to more stuff, or there are other reasons why you might target them. So it could be unrelated to
Tor. Yeah, okay, fair enough. I was just curious on anti's take, because I didn't know how common those types of attacks were. So yeah, I just wanted to get that in before you left, because I imagine you're probably going to jump out now, anti — or are you good for a little while? I'll probably jump out now, but yeah, to summarize: if it doesn't get released, I don't think you'll see it used against a lot of average users, because most botnet owners and people, you know, targeting regular people often need the actual PoC code itself. So yeah, we'll know if it stays in the targeted space. That's it from me. I'll hop off and listen later to you guys covering the other topics.
All right, thanks for joining
us. Thanks, y'all have a good one. All right, so we'll jump into the bug bounty platforms topic. We have an article by CSO, and it's basically saying "bug bounty platforms buy researcher silence, violate labor laws, critics say." So, you know, one topic we've talked about a lot on this show is bug bounties and whether or not disclosure should be allowed, and generally I think we agree it's reasonable, to some degree, that the vendors have the final say on whether a report goes public — or if it does at all — after it's been reported through a bounty program. Well, it seems that's being challenged. So —
I at least want to say — and I want to be really clear — in an ideal world: full transparency, everyone has bounties. Hold that thought for a sec. So in an ideal world we'll have full transparency, all vulnerability reports become public, and everybody has bounties — that's kind of your ideal case, I believe. But we get there step by step; we're not at that point yet, obviously. And actually, I will step that back a little bit: maybe not every company should have bounties. I do believe bounties are something that should be introduced once your security is fairly mature. You don't want somebody going, "oh, I just wrote this over the weekend — well hey, here's a hundred bucks for every bug you find," and, you know, put them out of
business. Yeah, you end up throwing money away.
Yeah, like, it's part of the maturity process to end up adding a bounty. But in the ideal world — generally speaking, when I'm speaking about mature companies, companies that have a mature security process — vulnerabilities should be disclosed publicly, and they should have a bounty. And I do also want to draw a really quick distinction as we get started on this, because we're going to be talking about, like, NDAs and agreements before you're even able to start testing itself. There are two kinds of applications you could be testing — this is a very wide description — either an application that is hosted by somebody else, so if you're testing Facebook, you're not self-hosting Facebook, or PayPal or something like that; you're not self-hosting those things, somebody else is hosting it, so in order for you to test you need their permission. Otherwise it's unauthorized access, and the Computer Fraud and Abuse Act in the U.S. is insanely vague when it comes to actually defining what unauthorized access is. Although I think actually some of that may have changed recently — there was something about the terms of service not being as binding, although I know there have been cases where it's, well, you violated the terms of service, therefore all your access was unauthorized and it was Computer Fraud and Abuse all of a sudden. I think something about that changed; I don't follow a lot of the legal cases, so maybe I'm wrong, but generally speaking I'm just going to be speaking under that assumption: it's unauthorized access if you don't have permission to be hitting that application that somebody else is hosting. The other type of application you can be testing is self-hosted things — things that you could run locally. If you want to test Android, if you want to, like, work on your Android phone, you could do that, or Linux, your operating system, running a web server locally.
You can test all that kind of self-hosted stuff; you can do whatever you want on those services. It doesn't matter what the NDA says — you're breaking no laws by doing that local testing. Well, if you violate the DMCA, that can come in, there are some things, but generally speaking you're not violating any laws by doing that sort of local testing. I just wanted to bring that up, because when we talk about bounties, most of them are these hosted applications; some of them are self-hosted, and I think a lot of what's going to come up really has to do more with the hosted applications, which is most bounties that you see on a platform like HackerOne or Bugcrowd.
Okay, so to bring up the point you were saying earlier, with "in an ideal world everyone would have bounties" — it actually seems like some researchers are challenging that a little bit. They're pushing for vulnerability discovery programs instead of bug bounties, which I think have a little bit of a different set of rules, like you can't outsource it. Are you
sure about that? Because I thought it was vulnerability disclosure programs, not discovery, which is a different part of the process — because the article talks about VDP, which is vulnerability disclosure
policy. Sorry, that's what I meant — VDP.
Okay. So VDP — that's slightly different. A VDP — and this article actually, I think, makes a mistake, kind of conflating VDP with coordinated disclosure — a VDP is just the disclosure program: having, like, a security@ email address or the security.txt file, having a way for researchers to reach out and disclose vulnerabilities to you in a safe way, in a way that they're not going to be charged with a crime for doing so. That's what the VDP is. Okay, so that's a little bit different; bounties are separate from that. And yes, every company, not just mature companies, should have a VDP. Some people do have questions about the bounties — we'll be getting to that later on in this topic. But I guess, just going through in order here, one of the first issues raised comes down to Safe Harbor and NDAs. So, like I just mentioned, reporting a vulnerability — you can end up in a case where, if you report the issue, they can then say, well, that was unauthorized access, we're going to charge you, you're going to be prosecuted for that crime, even though you're trying to do the right thing by reporting the vulnerability. And this has definitely happened. And I think I've talked about my views on disclosure over the last few episodes, and I think a big part of my wariness, and always deferring to what the company wants, comes down to having had my start in the days where people definitely were being reckless. They weren't operating in the most responsible manner, you know — testing for SQL injection on every website you visit type thing. But, you know, they'd report that and then get charged with a crime for it. Their testing definitely wasn't in line with what would be considered responsible, and I'll be fair and admit that, but at the time there was basically no option. Now you can run VMs, you can do a lot without needing to actually test on somebody running a real server.
There's a lot you can do, but, you know, still, for teenagers in the late 90s, early 2000s, it was definitely a lot harder to do that, and at that time people would test and report and end up charged with crimes. So I think that's kind of where some of my views on just letting the company do whatever come in. So now jumping back to Safe Harbor — Safe Harbor is just that you won't be charged with a crime. What's happening with Bugcrowd and HackerOne with private bounties is that you'll basically be told: sign this NDA, you'll be added to the private bounty, and then you can report your vulnerability. So — we've talked about this on the show before — they basically set a gag order where you can't talk about the issue if you want to disclose it to them, or if you want to do the responsible thing and —
yeah, basically trying to punish people for going about it the right way. Well, so —
here's the other thing, though, and it is worth noting: most vulnerability assessments have an NDA. You know, real-world assessments — when you're working for a company that's been hired to do an assessment, you have an NDA. I can't, you know, talk to you about all the vulnerabilities I found in whatever clients I've had; there's an NDA there. That is a pretty standard process — it's just like, you don't want to effectively humiliate your client. So it is different with these. Well, again, that kind of comes down to — I'm just trying to think here if I can come up with a good example where perhaps it would be a good idea not to publicly disclose. And when we talk about publicly disclosing, generally I'm assuming things have been patched and people have had time to update. And in that case, as I did say earlier, I do think all vulnerabilities should eventually become public, so there's no case where they shouldn't. At the same time, when a company is hiring, you know, a pen-testing firm, any finding is going to be damaging to their reputation no matter what, so I can understand the company not wanting to expose that — "yeah, we had these vulnerabilities, and that's why we're hiring researchers." At the same time, I mean, yeah, I would like to see those public, but I get why a company wouldn't want that out there, especially if it's pre-release products. I know for me, a lot of times we'd be brought on during the standard release cycle, so it's like: we are about to release this, get the security testing done on it just before it actually gets released. So then those vulnerabilities never even would have made it to the public.
And I think I can at least understand, if it's part of your kind of secure development lifecycle, not publicly disclosing those
vulnerabilities. I think part of the reason that security researchers have taken issue with the NDA is they'll basically use it on researchers who submit issues so that they can't talk about the issue publicly and, like you said, can't damage the company's reputation.
Well, so that's a little bit different — an external researcher versus being hired by the company to do it.
Okay, but the point I'm making is about external
researchers. And I was making the point that most assessments do happen under an NDA. Most — okay, assessments aren't bad. Well, I don't actually know the numbers to compare, like, are most reports generated by bounty hunters or by hired consultants or whatever, but generally speaking there are a lot of assessments that do happen under an NDA where they're just hired to do it; it's not under some sort of public bug bounty. That said, there's a trade-off when it comes to public bug bounties. And I'm going to say: I don't agree with having an NDA to report a vulnerability, in general. I mean, I'm more okay with, like, a timed NDA, where it's like you can't talk about this until it's patched, or until we reject the issue, or something. I think that's fair — like, you can't report this bug to us and then just start talking about it immediately; give them time to fix it, you know, basically forced coordinated disclosure. Do you disagree with that, or kind of
agree? I think that's fair, but I think what the researchers are arguing is they're basically saying, you know, once you report an issue to them, you can never talk about it, and what —
yeah, and I wouldn't agree with that. I'm saying, like, as long as there is a timeline for
disclosure. Oh, yeah, exactly. But that's what they're arguing —
so, oh yeah, they're arguing you can never talk about it, period. Which kind of
harkens back to issues that were reported last year by Jonathan Leitschuh — I hope I'm saying that right — and it actually relates a little bit to Zoom; it was issues found in the Zoom client. So he tried contacting them to get the bugs fixed, and he offered them, you know, the standard 90-day disclosure deadline to ship a patch, and then after that 90-day deadline he would disclose it to the public. And they didn't want that. They wanted him to accept the bounty and sign an NDA that would have forbidden him from publicly disclosing the issue. And I think the reason that researchers are wary of that is they're worried that companies are basically going to buy the silence of the researcher and then not fix the issue, or not take the issue seriously and fix it way down the line, after it could have been abused by, you know, malicious actors during that time period. So I think that's why there's more of a stigma around bug bounties, why some of these researchers see it as: you're buying my silence and not actually fixing the
issue. Yeah, and I'm actually going to say, this is something that I think HackerOne is in a really good place to change. So again, you get there in steps — getting to that full, complete transparency all the time, you get there in steps. So what I could see is, like, having a plan: a company signs up with HackerOne because HackerOne does offer a lot of benefits to companies — having the bug triage, having some of that so they're not getting all the garbage reports, and they get a little bit of attention because they offer a bit of money, and HackerOne just makes it easy for the companies. So HackerOne is in a good place to also start pushing those companies towards a better model where they're doing disclosures. And HackerOne doesn't do this, but I think something HackerOne could do is: a company signs up with them — okay, you don't need to disclose anything in your first year with us, but in your second year you need to be disclosing, you know, all of your critical vulnerabilities. Or maybe they'll go the other way, but having some sort of step-by-step process where, over time, they're going to increase the amount of disclosure they have, working them up towards that. I think that's something HackerOne could do. They don't do it, but given the position that they're in, they could easily leverage their position to do something like that, which I think would push security forward in general. I don't just want to say, well, bounties are all bad because some companies abuse the NDA aspect.
No, you don't want to, like, tar all with the same brush, but I think, you know, they're arguing that perhaps a better system should come along. But that being said, I don't know what that system would look like, you know what I mean?
Oh, no, I think what I just suggested would be at least a fair way to go about it: requiring companies to slowly start disclosing more and more, leading them up in that maturity
process. Now, one thing I did want to touch on is one of the more interesting points, I guess, of the article — and keep this in mind, anybody listening: neither of us are lawyers, so if there are any lawyers listening, they may be like, these guys have no clue what they're talking about. But anyway, they talk about the legal implications of bug bounties in California and in the EU with GDPR. Now, I think the California one's kind of silly — it's basically hinging on minimum wage, and I think the argument is just dumb. Yeah —
so here's the thing: they kind of relate bug bounties to being, like, an Uber driver or something like that, that gig economy. And bug bounties are less like that and more like a contest — a skill-based contest, yeah, where you're finding bugs, something that requires skill, and you're getting rewarded for doing so. It's a contest, I would argue, and I think that makes a
difference. You're not an employee; you shouldn't be getting paid a minimum wage for bug hunting like they're kind of suggesting there. I think the more interesting one, though, was the GDPR point, and I think what they were trying to argue there was, like, users have the right to know if there's a potential breach due to issues that are found, and that should therefore mean that those issues should always be publicly disclosed after they're fixed. So I was kind of curious what your thoughts were on that.
Yeah. So a lot of bounties will include terms about — and I mean, with all that we're saying, you know, we're not saying that companies should just allow all testing to happen; there are still limits on what you can do with your testing. So one of those terms that generally gets set has to do with not trying to access, or being careful about, the type of data that you do access. You're not allowed to, you know, get on a bug bounty program and just try and dump the entire
database. Yeah, and when you do
that testing, you'll usually create — or at least when I do testing, I'll create — two accounts for every privilege level at a minimum, perhaps more; it depends on the application. So generally, when you're trying to access another user's account, you access your own account — like, another account that you own — rather than just random client data. And I believe that at least kind of helps when it comes to GDPR, because you still shouldn't be exposing actual client information; you shouldn't be pulling down actual, you know, credit card information or something like that. That said — as Specter was saying, I'm not a lawyer — perhaps with GDPR that doesn't matter, and just having a vulnerability that could have been used to breach this data is all that's needed. That said, I would agree with the statement that if a bounty hunter does pull out personally identifying information, does pull any sort of that information out, that should count as a data breach. Bounty hunters are not part of the company and shouldn't count as part of the company in that sense. So I largely agree with what they stated there — regardless of the legal sense, I largely agree that it should be counted as a breach if a bounty hunter actually pulls down that sensitive information, intentionally or
not. Yeah, okay. I mean, I kind of agree. I wouldn't be surprised if we see GDPR come into bounties a little bit more, just because I think GDPR was intentionally left very broad, so it could absolutely be brought into this area and be applied.
And I mean, it's responsible testing: as a researcher, you should not be trying to pull down the entire database; you should not be trying to just pull all the information you can. I think we kind of talked about, you know, researchers behaving badly some time ago. I don't remember what the report was, but I'm sure it's come up before.
Yeah, I know we talked about it when there was a guy who was trying to report an issue through Twitter, and then he basically just, like, traversed the network and kept digging deeper. Yeah, I forget what episode exactly, but it was fairly recent — it was in the last couple of weeks, I want to
say. Yeah — well, no, I think it was longer than that. What was that one? It was, like, clothing or something.
It was actually directly in the title, so you can find it pretty easily — the title was "A Dark White Hat Hacker" or something like that. So, episode — episode 18? Oh, wow. Whoa, that long ago?
Well, let me check my notes to double-check.
No, it's episode 30. Okay, so —
there we go — episode 18 was just the first one that came up. But yeah, this is one of those things where, as a researcher, you still need to be respectful, you know, because it is live client data. When you're testing on a lot of these bounties, you're testing on the live websites, live applications. Sometimes you are testing on test environments — they do provide test environments, so that's a little bit different too. But you do have to be careful with your issues. You know, they generally don't allow you to test denial-of-service issues; you're not allowed to bring down production just because you're a white hat, so you
claim. So I think we can move into some exploits, continuing our Zoom feast into the exploit section of the show. We have one here related to UNC paths and how those links are rendered to be clickable, which can allow people to steal NTLM hashes. So for those who don't know what UNC paths are: it stands for Universal Naming Convention, and it can be used to access network resources and whatnot. It's used by Windows.
Yeah. Basically, you've seen these when you see the double backslash followed by, like, a server hostname or IP or something. Yeah, those are UNC paths.
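To make that shape concrete, here's a minimal sketch of what a chat client's linkifier would have to recognize before deciding whether to turn a message into a clickable link. The pattern and function name here are made up for illustration — this isn't Zoom's actual code:

```python
import re

# Hypothetical sketch: a pattern a chat client could use to spot UNC paths
# (\\server\share\...) in a message before linkifying it. Illustrative only.
UNC_PATTERN = re.compile(r'\\\\[\w.-]+(\\[\w$.-]+)+')

def contains_unc_path(message: str) -> bool:
    """Return True if the message contains something shaped like a UNC path."""
    return UNC_PATTERN.search(message) is not None
```

The point being: anything matching that shape is a Windows network path rather than a web URL, and clicking it is what triggers Windows' automatic NTLM authentication discussed below.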
So generally speaking, for example —
yeah, so when you try and access one of those paths on Windows, it'll send your credentials to that endpoint. So in Zoom, when you send such a UNC path, it'll turn that into a link you can just click, and then Windows will do its default thing, try and open it, and it'll send your NTLM hash over to the target server. So actually, it's kind of an issue with Windows just sending your authentication by default — it is kind of an insane move to send credentials by default. You can turn it off; I'd recommend doing that, and then you just have to manually basically get it to send, which I think would be a more sane default, but I kind of understand the history here. So Zoom makes them clickable, and they're calling this a vulnerability. Actually, I saw this
talked about quite a
bit. I mean, this was blown way out of proportion. That's kind of what I was going to say — the gist of this is: don't click random links, especially not random links that are UNC paths. I was actually on a Discord, in the staff chat, and they actually asked, like, do we want to alert everybody about this because it's such a huge issue? And it's just like: one, they don't normally alert people about issues at all; two, don't click links like this — it isn't that big of an issue. Don't get me wrong — and actually I will say, for Zoom doing it: Zoom's used in a lot of business contexts where you're going to have these sorts of links flowing around, so it makes sense to make them clickable. I would put this more on Windows — I think Windows should maybe do a more sane default by not sending credentials by default and just, you know, prompting for it or something. I would put more fault on Windows just having this as the default rather than on Zoom making it a link. When I saw this issue, I pegged it as basically a glorified phishing attack. While you could argue Zoom could try to mitigate the issue by not making UNC paths clickable, it's ultimately a user issue that people need to be made aware of, where, like you said, you shouldn't just be clicking random links — especially since UNC path links are going to look more suspicious than your typical link as well.
To be fair, I just don't trust users — just random non-technical users — to get that. And in all fairness, I agree with you: it's such a simple issue to avoid. I just don't have enough faith in random non-technical users to learn to recognize a UNC path and go, oh, okay, don't click that. I mean, no doubt they could, but I just don't have a lot of faith in that one.
What I think would be reasonable is, for UNC path links, perhaps they could add, like, a warning dialog on clicking it, saying what this does — and do you still want to continue with it? You know what I mean?
I mean, a lot of users are just going to click through that. A lot of users are just going to click through those sorts of warnings — it's the same thing as those external-link warnings, like, "oh, you're going to an external website," you know, like Discord gives you the "external links are spooky" or whatever that warning is. People just click through those alerts if they get them too often. That's why I feel like it would be better off if Windows had some sort of prompt by default, like: hey, you're sending this over the internet — or, well, I guess it's going to be over the network, but perhaps even trusting the local network, or, you know, if you're on a VPN it's going to appear local at least — versus just sending it to these external hosts. It's just, as I said before, an insane default. But yeah, this issue feels like what we kind of mentioned earlier about the clout chasing. Yeah, that's what this feels like, just kind of putting it out there. Though, to be fair, NTLM hashes —
So basically, they're getting your Windows password — to be fair, that is the gist of what's happening, and that is an issue. Like, the root of the issue is definitely serious.
I just don't think it's really Zoom's fault here.
Yeah, I don't think they bear a lot of the blame. I think they could try to help; they could
do more and I believe they've just
disabled UNC paths from being links now. Yeah, fair enough.
Which, I mean, is a fair response to it. And I agree with you that it's been
overplayed. Yeah, so I think that is our last Zoom
topic. No, we've got one
more. Oh, no, you're right — our next one is actually a Zoom topic
too. Actually, two vulnerabilities in this one. Yeah, this is two things: one is a local privilege escalation to root. I think both of these — yeah, both of these are on OS
X, so they're both macOS-based issues.
Yeah, so both of these are an LPE to root and a code injection that gives you access to the microphone and camera. Basically — and this was another one that I saw getting mentioned a lot, which is at least, I'd argue, a bit more legit: one, it's more Zoom's fault, and it's maybe a little bit more useful in terms of actually doing something with it. So the gist of the first one, the local privilege escalation, is that the Zoom installer uses this lovely API, AuthorizationExecuteWithPrivileges — basically just a way to execute with enhanced privileges, or execute as root. The API itself is documented as being insecure and shouldn't be used — I believe it's even deprecated on OS X — because it doesn't validate the binary; it doesn't make sure it's signed or anything like that. If the binary is read-only, or can't be modified, or is otherwise protected, it's still kind of okay to use it, but it doesn't validate, so somebody could change the binary. In the case of Zoom, the binary it runs is basically a bash script run as root, and during installation that run-with-root file has a period of time where it isn't protected, as the installer unpacks the package file into a temporary, user-writable location. So somebody could exploit that by waiting for that file to be created and replacing it with their own content, and then the installer will eventually end up running, as root, whatever they put into that shell file. Yeah, in terms of — go ahead.
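To sketch the race being described — purely illustrative, with a made-up path and function, not the actual exploit code — the attacker just polls the user-writable staging directory until the installer's helper script appears, then overwrites it before the privileged step runs it:

```python
import os
import time

# Hypothetical payload for the demo; in the real attack this would be the
# attacker's shell commands, run as root by the installer.
PAYLOAD = "#!/bin/bash\n# attacker-controlled commands would go here\n"

def wait_and_replace(path: str, payload: str, timeout: float = 5.0) -> bool:
    """Poll until `path` exists, then overwrite it. Returns True on success.

    This is the classic TOCTOU (time-of-check-to-time-of-use) pattern: the
    window between the file being unpacked and being executed as root.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        if os.path.exists(path):
            with open(path, "w") as f:
                f.write(payload)
            return True
        time.sleep(0.01)
    return False
```

The fix for this class of bug is to validate (or protect) the binary before executing it with privileges, which is exactly what AuthorizationExecuteWithPrivileges doesn't do for you.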
So I'm just going to say, you were cutting out a bit for me there, so I apologize if I reiterate over an issue you already touched on. But part of what makes these issues a little bit more impactful, too, is it was actually discovered that the macOS installer would perform the install without you ever even clicking install, because it abused pre-installation scripts. So on its own that's shady, but, you know, not necessarily malicious. But how it also gets that root privilege — on the script that you're talking about that could potentially be hijacked — is, if you're in the admin group it doesn't prompt, and if you're not, it'll prompt the user for root privileges, but not as itself: it says something like "System needs your privilege to change." It's almost intentionally misleading, to make you think that the Mac operating system needs your permission, not Zoom itself. So there are some shady decisions on top of it that kind of make these even worse than they'd have been otherwise. But yeah, sorry. Yeah.
Well, I was just going to add like actually trying to exploit this though is basically
You need local access
Right, you need local access, and you need to wait for that pre-installation to happen. Yep, so you're not getting remote access with this — it takes a lot of patience for malware to eventually get this, and that definitely limits the usefulness of it. The second one here, the code injection that gives you mic and camera access, is effectively dylib injection on OS X. OS X does provide a hardened runtime with library validation, which only allows you to load Apple libraries or those that are signed by the same team ID as the application itself. You can disable that, which is what Zoom goes ahead and does, and therefore anybody can inject their own library into the Zoom application and then get access to the camera and get around some of the validation checks. So again, that one's definitely, you know, on Zoom. Yeah,
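As a rough model of the decision library validation makes — these aren't real macOS APIs, just a Python illustration of the policy Zoom opted out of:

```python
# Toy model (not real macOS APIs) of hardened-runtime library validation:
# a process may load a dylib only if it's Apple's or signed by the same
# team ID -- unless validation has been disabled for the app.
def may_load_library(app_team_id: str, lib_team_id: str,
                     lib_is_apple: bool, library_validation_enabled: bool) -> bool:
    if not library_validation_enabled:
        # Zoom shipped with validation disabled, so any dylib could be injected.
        return True
    return lib_is_apple or lib_team_id == app_team_id
```

With validation on, an attacker's unsigned or differently-signed dylib is refused; with it off, the injected library inherits all of the app's entitlements, including camera and microphone access.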
So I definitely wouldn't call this one clout chasing like the last issue that we covered.
No, this feels a lot more legitimate. The impact is a bit limited, yeah, but these are legitimate issues in Zoom that really just shouldn't have been there to begin with. The UNC one — I could understand why they made that a link, especially if they didn't understand what clicking it does; it makes sense making that a link. This, doing all this weird stuff — you can't just explain it away as, well, they didn't know any better. They should have known
better, even playing devil's advocate for it. Yeah. Yeah, so —
so yeah. And it's a local privilege escalation — yeah, it requires local access and requires kind of waiting for the right event to happen, and I think that's completely
fair. It does get you root. Yeah, like, it's not like it's, you know —
yeah, I mean, malware often exists on systems for quite a while. So if this was just, like — oh, we don't have anything else; it, like, scans the system looking for some vulnerabilities, doesn't have anything else, so it just kind of sits there waiting on this — I think that's completely fair. So there's definitely an attack scenario with this that wouldn't just be prevented by the user being smart about what
they do. Yeah,
so that is our last Zoom issue, though. We can end that feast,
and yeah, we'll move away from Zoom. We have a ZDI blog post about a use-after-free in the VMware Workstation DHCP server, which was submitted to them by an anonymous researcher last month. So unlike some of the ZDI stuff we've covered in the past, this isn't, like, working backwards from a patch or anything like that; this is an issue that was submitted to them. So the article first does a bit of a dive into how the DHCP release message handler works, and the gist of it, for understanding the issue, is that it will end up copying the data from one lease structure to another — so, like, a source to a destination — and the lease structure contains various information like the client IP address, the hardware address, and a unique
identifier. Yeah, and that's the supersede lease function — as you're saying, it just copies from, basically, the old one to the new one, but I think the name kind of matters in terms of understanding what's happening and why it's copying the data. In theory, these are generally going to be close to the same lease with some different information.
Yeah, so when they go to take this new lease and write it to an internal table, the supersede lease function does a string comparison between the uid fields of the source and destination lease objects, and if they match it'll free one, so that the same identifier isn't duplicated. However, if you send DHCP discover and DHCP release messages repeatedly, the uid fields of the source and destination leases will point to the same
memory. Yeah, which feels like a weird issue to have. This feels like some sort of race condition that
results. That's what I was thinking
too. So they're not clear. They literally just say that when those two messages, DHCP discover followed by DHCP release, are repeatedly received, this happens. It doesn't really go into why that happens, but eventually it does. Well, yeah, I mean, I'd love to know, but I guess if you're writing the exploit it doesn't really matter. You just know that if you do this, you get it to happen; you don't need to dig into it. So I get why it's not there. I'm just definitely curious how that in particular works,
because that just feels like a really weird thing to have happen, a weird coincidence here. Yeah, so probably some sort of race condition in there. Yeah.
So of course, when both the source and destination uid fields point to the same memory location, the string compare will succeed and the buffer will be freed. But because the destination lease still points to that location, you have a dangling pointer, and that leads to the use-after-free. The patch for it was very simple: VMware just added a check to make sure the source and destination uid fields didn't point to the same location before calling the free. The article didn't go into detail on how to exploit this to perform the VM escape; they mostly left that as an exercise to the reader, which I think gives a neat opportunity to anyone interested in VM exploitation. If you're interested in that, I'd recommend giving this write-up a read and perhaps trying to exploit the bug yourself. I think it's a really good candidate to get your feet wet with, because often there's not a lot of information out there about VM bugs, and this post gives you detail and sort of a root cause analysis
but leaves enough for you to work through the rest yourself. And yeah, it's a starting point for you to do an actual exploit, which ultimately... you can go and learn all the different techniques you want, but it's doing things like this, actually writing a real exploit, even if you start off with this information, that doesn't take away from turning it into a weaponized exploit. That's how you learn it, by doing it. You can learn all the techniques, but once you've done that, you just need to get the practice of actually doing it.
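The lease-aliasing bug and the style of VMware's fix can be sketched schematically. This is a Python simulation with entirely hypothetical names (`Lease`, `Buffer`, the `supersede_lease_*` functions); the real code is C inside the VMware DHCP server, and the simulated "free" only stands in for a real heap free so the dangling reference is observable.

```python
# Schematic simulation of the lease uid use-after-free (all names are
# hypothetical stand-ins for the structures the ZDI post describes).

class Buffer:
    """Stands in for a heap allocation so a free is observable."""
    def __init__(self, data):
        self.data = data
        self.freed = False

def free(buf):
    buf.freed = True

class Lease:
    def __init__(self, uid):
        self.uid = uid  # pointer-like reference to a Buffer

def supersede_lease_buggy(dst, src):
    # String-compare the uids; free on a match so the identifier
    # isn't duplicated in the lease table.
    if dst.uid.data == src.uid.data:
        free(src.uid)
    # ...copy remaining lease fields from src to dst (omitted)...

def supersede_lease_patched(dst, src):
    # VMware's fix: never free when both uids are the SAME allocation.
    if dst.uid is not src.uid and dst.uid.data == src.uid.data:
        free(src.uid)

# Repeated DISCOVER/RELEASE messages leave both leases aliasing one buffer:
shared = Buffer(b"\x01\xaa\xbb\xcc")
dst, src = Lease(shared), Lease(shared)
supersede_lease_buggy(dst, src)
print(dst.uid.freed)   # True: dst.uid now dangles -> use-after-free

shared2 = Buffer(b"\x01\xaa\xbb\xcc")
dst2, src2 = Lease(shared2), Lease(shared2)
supersede_lease_patched(dst2, src2)
print(dst2.uid.freed)  # False: the aliasing case no longer frees
```

The `is not` identity check is the whole fix: the string compare alone can't distinguish "two leases with equal uids" from "two leases sharing one uid allocation."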
Yeah, seeing a question in chat from cat3009: could it turn out to be unexploitable? Honestly, I doubt it. In something as complex as a hypervisor, or, you know, a virtual machine... a use-after-free is an extremely powerful bug type, and especially since the VM is not using CFI or anything like that, I think a use-after-free is almost certainly going to be exploitable. The only thing that could end up being an issue is the object you're freeing: if you can't get a good spray on where that is on the heap, like if it's in a heap cache that you can't get a good degree of control over, that might make it unexploitable. But for the most part, use-after-frees in complex software are almost always exploitable. So it probably is.
Yeah, well, on that note, what do we have? The object being freed is that uid, so that's just the string that's going to be used after free. So it is going to kind of come down to how that is being used once you are able to obtain some control over its
value. Yeah, you probably need to use it to propagate some bugs later on down the line that are more powerful to exploit. But
yeah, I'm just trying to imagine what your bug would be from controlling the uid. I guess it's probably going to use that to track leases, probably doing some lookup based on it at some point.
That's true, actually, that's a good point. Maybe it's not as exploitable as I initially thought.
It's not a complex object here. So I think it's a good question whether it could turn out to be unexploitable, because obviously when you have a really complicated object, there are a ton of places you can abuse it once you're able to control that object. But I don't know enough to say how this is getting used, because it seems like it's just a simple string, so you'd have to look at how that string itself is being used, or perhaps whether that string is being modified somewhere, and then whatever it gets reallocated as could be what you target, if you're able to control that value at some point. I just don't know where you'd be able to modify the uid, because really that's going to come up early and stay static. So I don't see that as being a big possibility. It really comes down to where the uid is being
used. Yeah, it's a fair point. It might not be as exploitable as I originally thought. One avenue you might be able to take is if there's a strlen call on it, though I think that's unlikely because they do have that uid length field too, so they might not ever do strlen on that string. So you'd have to look more at the subsystem, I guess, to see how exploitable it is. And that's a fair point: maybe the reason they didn't go into further detail is that maybe it isn't exploitable. Although actually,
wouldn't ZDI mostly be buying bugs? Yes.
Yes, and actually I was just thinking, they have the ZDI advisory linked in the actual blog post. Yeah, there was a CVSS score there as well.
So looking at that
actually. Yeah, it does say privilege escalation. Oh, there are multiple CVEs in there as well. So
well, two of them they said were lesser severity. What did they say here?
Oh, they actually gave it a nine point three. They said successful exploitation may lead to code execution. So yeah, it is possible according to the CVE. So
yeah, the CVE though, they don't necessarily actually prove that. It's just, okay, this is the type of issue that can, so we're going to rate it as
such. I mean, obviously, "may allow a denial of service condition." Which...
but I think CVEs... CVEs are often accurate with how impactful they are, at least going from the
past. Well, that's separate; the ZDI advisories are like a separate ZDI-numbered thing that we could have expected for this issue.
Yeah, that's what I was looking at. So I was,
like, I was looking at the VMware one. I do see the other one here. I see it's seven point eight... on the VMware reporting.
Oh, yes, the CVE has a different CVSS than the ZDI one. That's kind of interesting. But they do say an attacker can leverage this vulnerability to escalate privileges and execute code in the context of the hypervisor. So
yeah, my understanding of what ZDI would be doing is they would generally be buying it if it were an actual issue, like if it were exploitable. At least to do that... whether you would need to provide an ASLR bypass or anything like that, my understanding was you would at least need to provide enough to demonstrate a proof of concept. Yeah, like EIP or RIP
control? Yeah, something like that.
So I'd argue that there probably is some way to abuse it. Oh yeah, good question; that got a longer discussion than expected.
Yeah. Yeah, we'll leave that as an exercise, if anybody wants to try to exploit it.
Yeah, if you want an answer to that question, we can't do it... we can't answer it for you, but spend all the time you want on it, all those hours, and let us know.
Yeah, so we'll move on. Unlike the last article, our next one does focus on exploiting a known bug. This next post focuses on exploiting the SMBv3 vulnerability that was patched by Microsoft last month, the named vulnerability SMBGhost. So there were some POCs out there for this bug before, but all of them were basically just denial-of-service POCs, nothing for getting code execution, though the advisory said code execution was possible because they labeled it as an RCE. So this post explores how they actually exploited it to get code execution and not just a DoS. First they detail the specifics of the bug, or rather I should say bugs, because there are multiple cascading integer overflows in this code. For those who are watching, we can bring up the code on the screen here, and you can cause overflows on two different fields that can propagate issues further down the line. One of them is the offset field and the other is the original compressed segment size field. After these guys played with different combinations of overflows for these two fields, they found the most promising combination was to use a reasonable offset value but a very large original compressed segment size value, and what would happen here is, due to the integer overflow in the addition statement, the allocation would end up being
smaller than expected. Yeah, so on those fields, I do want to interrupt you really quickly and just mention what's going on here. You get the SMB packet that comes in, and it contains the raw data, and obviously, as most protocols do, it includes a header. After that header, your actual data, the raw data, may be compressed, or it may not be... or maybe it is always compressed, because this function in particular deals with it as always compressed. It's decompressing the data. So it'll take that raw data, it will allocate a new buffer, and then it will take the compressed data and decompress it into that buffer, and it will prepend the raw data, any data that was in there that wasn't compressed. That's where the header offset comes in: it's basically providing the offset into the data block to where the compressed data starts. All the information before that offset just gets memcpy'd right into the start of the new buffer, and the compressed data is decompressed into the new buffer starting at that offset. So, just to explain some of the values there: the offset is how far into the data block the compressed data starts, and the original compressed segment size value, or field, is of course the original size of the segment before compression, which I think is self-explanatory. And obviously you can control both of those in the header. The existing proofs of concept just gave it 0xFFFFFFFF, so minus one, which of course is the biggest value possible, or minus one if it's signed, effectively overflowing the integer and causing a crash when it would eventually try to access or write to that memory. You can go on though with your explanation.
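The overflow arithmetic being described can be sketched with 32-bit wraparound simulated by masking; the field names here follow the discussion above rather than Microsoft's exact symbols.

```python
# 32-bit wraparound behind SMBGhost (CVE-2020-0796), simulated with a
# mask since Python ints don't overflow. Names follow the transcript's
# description, not Microsoft's actual code.
U32_MASK = 0xFFFFFFFF

def alloc_size(offset, original_compressed_segment_size):
    # The decompress path sizes the target buffer roughly as
    # offset + OriginalCompressedSegmentSize; in 32-bit C this wraps.
    return (offset + original_compressed_segment_size) & U32_MASK

# The promising combo from the post: a reasonable offset plus a huge
# claimed segment size wraps the sum to a tiny allocation.
offset = 0x10
size = alloc_size(offset, 0xFFFFFFFF)
print(hex(size))        # 0xf -> a 15-byte buffer

# Copying just the "offset" bytes of uncompressed header data already
# writes past the end of that buffer, before decompression even runs.
print(offset > size)    # True: out-of-bounds write
```

With a sane segment size the sum stays in range, which is why only the crafted header triggers the undersized allocation.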
Yeah. So like you were saying, the POCs that were out there were just a DoS because they used a very large value that would write way out of bounds of the buffer. But these guys ended up controlling it more, because you do have quite a degree of control over that value, and they found that they could actually smash the header you were talking about, of the packet on the heap. They were originally going to try to trigger a use-after-free, because there's a piece of code in the snippet that was on screen that would trigger a free on a certain condition, and they were going to try to force that condition with the memory corruption. But when they tried it, they accidentally ended up finding a far better route to exploit this. They found that the decompress function that gets called updates the final size value to the compressed buffer size, so the memcpy down below is hit, and they can actually control which buffer is used as the destination of that memcpy. So they can basically get an arbitrary write through the memcpy. So, you know,
the gist of it. So I do want to add in a little bit that I think you kind of skipped over. Okay, so when they have the user buffer, obviously there's that target buffer that's been allocated. They kind of control the ultimate size of that, as we've mentioned, because they control the size of the data they're dealing with, so they know what the size is going to be. What they're able to do is control that in such a way that it can be allocated in several different locations, and they talk about this a little bit: I believe there's a buffer pool and there's a lookaside list, and it all comes down to the different sizes. Large allocations, anything larger than 16 megabytes, will just fail; medium allocations, larger than 1 megabyte and less than 16, will use the buffer pool; and smaller ones, less than a meg, will use the lookaside list. The lookaside list that they implement here includes metadata inline with the heap data. They show it on screen here: you end up having the user buffer, and following that buffer, at the end of it, is an allocation header structure, which is the object that gets returned. So it returns that allocation header, and that includes a pointer over to the user buffer, rather than just returning the user buffer itself. So it's not working quite like malloc, where you're just expecting to get back a block of memory; it returns an object. Sorry, I was distracted by the comment in chat, which I definitely want to deal with after I finish talking about this. But anyway, the allocation header struct, if I come back up here, is where they get the user buffer address from. So you control the raw data in the packet that's going to be copied to that location.
And because of the overflow that you can get with the decompression, when it's decompressing the data, you control what that pointer is pointing to. So you control the data that's going to be written, and, because of overflowing that metadata in the heap that they create, you control the address. I just wanted to cover that little detail: they're overwriting an address in kind of like the heap metadata, not in the protocol header.
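The metadata-smash-to-arbitrary-write chain can be sketched schematically; the layout, names, and addresses below are illustrative stand-ins based on the description above, not the actual srvnet structures.

```python
# Schematic sketch of why smashing the lookaside allocation header yields
# an arbitrary write (names and addresses are illustrative stand-ins).

heap = {}  # address -> byte value, standing in for kernel pool memory

def memcpy(dst_addr, data):
    for i, b in enumerate(data):
        heap[dst_addr + i] = b

class AllocHeader:
    """Inline metadata stored adjacent to the data. The allocator returns
    this object, which points at the real user buffer."""
    def __init__(self, user_buffer):
        self.user_buffer = user_buffer

hdr = AllocHeader(user_buffer=0x1000)

# The decompression overflow runs past the buffer into this adjacent
# header, replacing the pointer with an attacker-chosen address...
hdr.user_buffer = 0xFFFF800012345678

# ...so the later "copy the uncompressed bytes to the user buffer" memcpy
# becomes a write of controlled data to a controlled address:
memcpy(hdr.user_buffer, b"\x41\x41\x41\x41")
print(hex(min(heap)))  # controlled bytes landed at the controlled address
```

The key point from the discussion survives even in miniature: the corrupted pointer lives in allocator metadata next to the data, so a plain overflow of the data reaches it.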
Yeah, so when you have an arbitrary write in the kernel, it's basically game over; that's a stupidly powerful primitive. And they ended up using a trick published in 2012, which is basically leaking the address of the current process token with NtQuerySystemInformation, and then you just write your own token in there to privilege escalate. So I know we do want to address the comment from chat: "but they only got LPE, not RCE." I think that might be a misnaming on their part, because I think this should be able to be RCE
as well. So this DHCP server that's being used here, this is the VMs'... yes, the DHCP server. It's not necessarily something you're hitting out on the network; it's what all the VMs are going to be using. So it's, I guess, a host escape: it's what the VMs are able to communicate with to get their internal IP address on the internal VM network. So I think that's why it's being listed more as a privilege escalation rather than RCE. This isn't something that you're hitting from some random network, at least not necessarily. You probably can expose it to do so, but I don't imagine that's the default, and thus they listed it more as LPE rather than
RCE. The VM... like, the DHCP... are you conflating this with the last topic? Because I didn't see any mention of DHCP or anything in
this one. Oh, sorry. I thought he was making that response to our ZDI one, not to this bug.
No, I think it was about this one, because we originally said at the top that the advisory said this was an RCE, or it could be RCE, but their title is "exploiting it for local privilege escalation." So I think there was some mixup there.
I'm sorry, I thought that was still a follow-up to the last topic.
Actually, I think it's just a misnaming on their part. Like I was saying, this issue should absolutely be able to be hit remotely, so I think it should be a remote privilege escalation, not a local
one. Or maybe they're only mentioning it as local privilege escalation because... well, I was going to say, because they're just doing this LPE trick. Yeah, I'm not sure actually, that's a good point.
You know what, it may actually be because of the route they took. The arbitrary write, getting to that point, is probably hittable remotely. But I think the trick they used that I mentioned, getting the current process token with NtQuerySystemInformation, is something you need local access to be able to do. So I think, well...
but you're getting... well, no, I guess... yeah, I see what you mean. So you wouldn't be able to use the arbitrary kernel write to... yeah, it's not coming from a user process. I was initially thinking, okay, RCE and then they use that to escalate, but yeah.
Yeah, so I think that's what it is: it could be hit remotely, but they hit it locally because of the trick they used. An arbitrary kernel write is honestly such a powerful primitive that there is no doubt a way you could do it remotely; it's probably just a bit harder, but it's probably doable. So their specific exploit was LPE, and it could be turned into an RCE, I think. Yeah, I think that's what's happening there. That being said, I don't think I have much more to add on that, so I think we can move on to exploiting parser differentials in GitLab. Our next issue is directly in GitLab, in their file upload functionality it seems, and it's kind of a cool attack because it's similar to the HTTP desync attacks we've covered in the past: the issue is a discrepancy between how two of their servers, GitLab Workhorse and GitLab Rails, interpret HTTP requests.
Yeah, I mean, both definitely follow from that same basic idea of parser differentials, where one parses something differently than the other. The attacks are pretty different, but conceptually it comes from that same sort of desync. Yeah.
Yeah, so it seems when you upload a file, it first goes through Workhorse, which has routes defined for different HTTP
requests. Workhorse, of course, is their reverse
proxy. Yeah, so when this request is received, Workhorse uploads the file onto the disk and tells GitLab Rails, which is their back end, where the file is placed. This is all managed internally, so it's not like a user can define the path where that file is written on disk; it's managed by GitLab Workhorse. So Workhorse takes that request, uploads the file, then rewrites the request and passes it off to GitLab Rails with the proper path. Now, normally it wouldn't be possible to control any of the PUT request from Workhorse to Rails, as that's all handled internally, but there's an issue due to the method override functionality.
Well, something I should also mention here is that Workhorse, as a reverse proxy, handles some requests itself. Some of them do get modified by Workhorse; the example here is all the PUT requests to /api/v4/packages/conan and whatever path follows that. All of those end up getting modified by Workhorse and kind of turned into another request that hits the main Rails application, or the main Rails API. But any unmatched routes end up just being passed on, unmodified, to Rails. Yeah, and that's kind of an important detail. Where the desync happens is that Workhorse has no knowledge of this method override, whereas Rails by default has this Rack method override middleware that looks for _method as a parameter or X-HTTP-Method-Override as a header. And we've talked about issues with method overrides before. Yep, we have. It's definitely a common trick for dealing with web application firewalls that might not be aware of it; it's kind of a similar deal here with the reverse proxy, it's not aware of it. So it sees, you know, a POST request or something like that coming through and goes, okay, I don't know how to handle this POST request, so I'm not going to worry about it, whereas the web server goes, oh hey, I got a PUT request, because the method gets overridden. So you can maybe start to see where this is going: if you're able to override the method on a request that doesn't get modified by Workhorse but is still treated as a PUT request by the Rails application, you might be able to start working with another file. Yeah, any file. Well, not quite any file. The Rails application doesn't completely trust what it gets; it actually has a whitelist of acceptable paths, and they also had /tmp in that whitelist. So even with that, they still kind of protect the endpoint a bit.
They don't just trust Workhorse to give it exactly what it needs. Nonetheless, you can basically still get it to work with any file that exists in /tmp or any other acceptable directory. You would have to know some of those file names; you can make some guesses about what might end up in /tmp, but you would have to know them to actually abuse this issue. But that's kind of the gist of it: you're able to make that direct request because it passes right through.
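The desync being described can be modeled in a few lines. `proxy_modifies` and `rails_effective_method` are illustrative stand-ins, not GitLab's actual routing code, though the `_method` / `X-HTTP-Method-Override` behavior mirrors what a Rack-style override middleware does.

```python
# Minimal model of the desync: the proxy routes on the outer verb, while
# an override middleware lets the backend act on a different one.
# Both functions are hypothetical sketches for illustration.

def proxy_modifies(method, path):
    # Workhorse only rewrites routes it matches (e.g. PUT package
    # uploads); everything else passes through untouched.
    return method == "PUT" and path.startswith("/api/v4/packages/conan")

def rails_effective_method(method, params, headers):
    # Rack-style override: a POST may override its verb via the _method
    # parameter or the X-HTTP-Method-Override header.
    override = params.get("_method") or headers.get("X-HTTP-Method-Override")
    if method == "POST" and override:
        return override.upper()
    return method

# A POST with _method=put slips past the proxy unmodified...
print(proxy_modifies("POST", "/api/v4/packages/conan/pkg"))    # False
# ...but the backend still treats it as the sensitive PUT:
print(rails_effective_method("POST", {"_method": "put"}, {}))  # PUT
```

That gap, one component routing on the raw verb while the other routes on the overridden verb, is the whole parser differential.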
Do they say if it was possible to do path traversal on that?
Well, that's what I was saying, they validate the path. Yeah, in the Rails application they validate what the path is, so if you escape out of that path, it's no longer going to match the expected allowed
paths. Okay, fair enough.
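The validation style being described, resolve first, then prefix-check, looks roughly like this. Rails uses Ruby's `File.realpath`; `os.path.realpath` is the Python analogue. The whitelist entries here are made up for illustration.

```python
# Resolve-then-prefix-check path validation, in the style described for
# GitLab Rails. ALLOWED entries are illustrative, not GitLab's config.
import os

ALLOWED = ["/srv/gitlab-uploads/", "/tmp/"]  # hypothetical whitelist

def path_allowed(user_path):
    # realpath resolves symlinks and any ../ components first, so the
    # prefix check can't be defeated by traversal tricks.
    resolved = os.path.realpath(user_path)
    return any(resolved.startswith(prefix) for prefix in ALLOWED)

print(path_allowed("/srv/gitlab-uploads/pkg.tar"))           # True
print(path_allowed("/srv/gitlab-uploads/../../etc/passwd"))  # False
```

This is the pattern the hosts endorse a moment later: don't hunt for `../` substrings, just canonicalize and compare the front of the absolute path.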
Basically, I guess they call realpath on it, and I'm not sure
exactly how that resolves, but I'd imagine it should resolve any relative links or anything like
that. Yes, it returns the absolute path name. And that's basically exactly how you should be validating file paths: resolve the path to its absolute file name and then check the start of it to make sure it matches what you expect. Don't try to look for "../" or dots or try to remove them; don't do anything fancy and weird. Just get the absolute path and see if it matches where you want it to be. So anyway, yeah, they do that. They take the realpath, so they get the absolute path, and then they compare it against the upload paths they allow, plus /tmp, and if it's not in there, it gets rejected. Yeah, so you can't get out of it; it's only files within /tmp or within the allowed
paths. Okay, fair enough. So they obviously fixed the issue, and the way they fixed it is that they now sign the requests from Workhorse to Rails. So even if you try to do these kinds of sneaky requests using the method override, since you can't sign that request, it won't pass validation, it won't be respected, and it just won't run.
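A sketch of that style of fix: the proxy signs the requests it rewrites, and the backend verifies before trusting them. The key, message format, and function names below are invented for illustration; this is not GitLab's actual scheme.

```python
# HMAC-signed proxy-to-backend requests, in the spirit of the fix
# described. All names and formats are hypothetical.
import hmac
import hashlib

SHARED_SECRET = b"workhorse-rails-shared-secret"  # illustrative key

def sign_request(method, path, body):
    message = b"\x00".join([method.encode(), path.encode(), body])
    return hmac.new(SHARED_SECRET, message, hashlib.sha256).hexdigest()

def rails_accepts(method, path, body, signature):
    # compare_digest avoids timing side channels on the comparison.
    return hmac.compare_digest(sign_request(method, path, body), signature)

sig = sign_request("PUT", "/api/v4/packages/conan", b"file_path=/tmp/upload1")
print(rails_accepts("PUT", "/api/v4/packages/conan",
                    b"file_path=/tmp/upload1", sig))  # True

# A smuggled method-override request was never signed by the proxy, so
# any tampered field fails verification:
print(rails_accepts("PUT", "/api/v4/packages/conan",
                    b"file_path=/tmp/secret", sig))   # False
```

Note the caveat the hosts raise next still applies: the backend has to know which routes require a signature, or an unsigned pass-through route can slip by.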
Yeah, I mean, that feels like a bad solution to me. It's definitely a working solution; there's no immediate way around it. But in order to implement it... obviously some requests do need to pass through. I'm assuming the reason they even have that pass-through to begin with is that some requests don't really need to be modified by Workhorse, they just need to be passed through, so presumably it's not going to sign those. It's only going to sign the ones that Workhorse actually modifies, or the things that actually need to pass through Workhorse, which means on the Rails side, or somewhere, there needs to be either a whitelist of routes that can be processed without a signature, or all the handlers need to check the signature. There needs to be some way for it to know which methods are okay or not.
I assume you're saying it's possible they could miss
one. Yeah, well, that's exactly what I'm thinking: eventually somebody's just going to mistakenly miss some endpoint or something, if they've implemented it the way I think. Maybe they've done something else that just hasn't come to mind while I was reading it, but my initial thought was that there needs to be some list that gets maintained here. Now, perhaps they could include some meta information in the signature that lets Rails know the URL that it thinks Workhorse signed; then it could be done at a universal layer where it's just checking, does the signature match this URL, basically signing the URL. Then if it ends up being a different type of request, it would get caught there. So I could see some ways of doing it; it just feels like a weird solution to me. I guess I could probably go find out what it's doing, I hadn't thought about this. GitLab, at least some of it, is open source. So, yep.
Yeah, I think they even linked to like a GitHub page or something that had some of their code on it.
Really? I'm saying "really" at the fact that GitLab linked to GitHub.
Oh, sorry. Okay, no, I think they linked to a source page on GitLab. Yeah, I shouldn't have said GitHub.
Yeah, fair enough, but that's what I was saying "really" to: that they exposed that over on GitHub, or GitLab. It's probably somewhere... I'm gonna guess GitLab.
Yeah, GitLab linking to GitHub would be pretty
stupid. Yeah, actually all these routes and stuff are clickable, so I'm going to assume... and I could dig into it, though I'm probably not going to do that here on stream. But yeah, it just felt like a weird solution to the issue. That said, it's better than trying to just fix the differential alone, because that becomes this big cat-and-mouse game where, oh, you find this differential, you fix it; oh well, there's this other thing that's different, and this other thing, and you just keep adding on things that keep being found that are different
across them. So I kind of wanted to show it off on the stream because it's a unique bug, and it's also in a system that a lot of people use for projects; it's not small by any
means. And I mean, the whole _method override is definitely a common enough thing that it's always worth being aware of, and seeing it used in this way... yeah, it's definitely an interesting attack. So our next article
is one about getting access to the camera on iOS and macOS devices, without getting permission from the user, through a WebRTC bug. So I'll let zi cover this one, as I didn't get a chance to look
into it much. So, it's less WebRTC and more Safari in particular, dealing with how Safari tracks permissions. One thing of note about what the bug is: it's a very long and drawn-out explanation. It's the complete thought process of how he went from noticing one thing to noticing this fairly, I want to say, convoluted attack method. It's a fair read, and I'm definitely not hating on it for that, but it's definitely a long read. As an issue, though, it is interesting, like why he looked at this... if the write-up just came out with "well, you do these things," it would be surprising anybody ever discovered it, but he explains how he got there, through the entire process. It was still interesting. It's not my favorite read; I did find it a bit dry, but that's probably partly because it's more JavaScript and application-layer versus the binary stuff, which I prefer.
It's not really a low-level issue.
That's a lot of work for a
Sorry... get camera access with these 15,000 easy steps.
Only nine steps, and one of them is "profit," according to this. But that's not my point. It is a really complicated thing, and that's why I feel like it was worth bringing up here. Again, I'll apologize for not being able to do it justice in the summary, but
it's one of those write-ups where it's better to read it yourself.
Yeah, he walks through every part of the process, how every step came about, because if we just talked about it, it would just be like,
you know, how do you even realize that?
The write-up goes through how you realize it. It's complicated, very roundabout; it's not something that you're just going to
decide to try one day, like, what if I do this and this and this. But yeah, it's a good write-up for that. I did feel it was a little bit on the drier side, but if you're actually interested, it's worth the
read. It's got a lot there for you if you're interested in it. Yeah, we'll turn to our... sorry, we're gonna have
something... well, speaking of one that doesn't have a lot there for you, we've got a HackerOne report from James Hancock. This is impacting Slack, I believe it was, yes: "Slack relative path vulnerability results in arbitrary command execution / privilege escalation." A $750 bounty for this. It's not in the Slack client; this is in Nebula, which is this networking tool that Slack has, a, quote, "mutually authenticated peer-to-peer software defined network." That's the language out of their GitHub repo for it: basically created to provide a mechanism for various groups of hosts to communicate securely across the internet. So Nebula in particular... the gist of this issue is really simple. It uses a relative path when it executes a command: it just uses ifconfig, or route, and on Windows it uses netsh. It doesn't give the full or absolute path, like /sbin/ifconfig. What that means is, if you can control the binaries in the path before it finds the original one, you're able to get execution, as a privileged user, of whatever binary you want it to run. So essentially, when you call just "ifconfig," for example, your system is going to look at your PATH environment variable, get all the folders out of it, and go in order: first folder, is ifconfig in here? No? Okay, second folder, is ifconfig in there? No? Okay, third folder... here's ifconfig, let's execute that. So an attacker who can place a binary of the same name somewhere in that path before the actual binary will get theirs run. Straightforward issue: just use absolute paths when you're going to execute something, don't assume the setup. It does obviously require somebody who's in a position to place a binary, but
they can modify PATH as well, yeah.
Well, or place the binary within the path. You could modify PATH, that's the easiest way to do it: just put /tmp in the path or something and then execute, with your binary in /tmp. But it just needs to be in the path somewhere, so the user doesn't strictly need to modify it; they just need to be able to get a file there. Though odds are, if you're able to place a file wherever you want with whatever name you want, you probably can also mess at least somewhat with the environment variables. Yeah, although that actually ends up being a point of issue they had in the report thread with reproducing it. You can go ahead and read that: the triage team had a little bit of trouble actually proving this was an issue using sudo, because sudo would use its own PATH information, not the executing user's. The difference being here that Nebula is executing as privileged already. Either way, you can read about that discussion; I didn't think it was too interesting, I just thought it was worth bringing up. This is definitely one of those easy mistakes to make: an absolute path is a little bit less portable, but it's better. This basically happens in a lot of applications, and it's
straightforward. Yeah, I mean, I was a bit surprised, considering the impact is limited because you do need that level of access to pull it off. It gets escalation because Nebula runs as root, but I was a bit surprised that the bounty was as high as it was, $750. Obviously that's cool for the researcher, but it's a bit higher than I was expecting, I guess.
I don't know, I mean, that seems pretty fair for privilege escalation. I mean, you are getting root, in effect, on all the Slack clients that are actually using
Nebula. Yeah, that being said, the severity was taken down. I think initially it was high, but before disclosure they knocked it down a bit because of the level of access required to pull it off. So, we'll get into some research. It's been a few episodes since we've had a paper, so we're going to cover a paper about one of our favorite types of topics, which is adversarial attacks against lidar, or light detection and ranging, detectors for self-driving vehicles. We love self-driving vehicles on Day Zero.
So yeah. Also, I like talking about these issues that can really happen, basically. Yeah,
it's definitely more of a real-world type of topic,
and just to be fair, we did have a paper, I want to say, like, two episodes ago maybe, so it hasn't been that long since we've had a paper. But anyway, with this one, it's called "Physically Realizable Adversarial Examples for LiDAR Object Detection." They don't actually realize it physically; they do all of this digitally to see how it would get detected. They don't actually pull this off, which is unfortunate, but I do like talking about the ones that could be done in the real world. It doesn't require access to the imaging system
or require access to the CAN bus or anything like that.
Well, I just meant, like, yeah, sometimes the attacks will assume, when they're working against the camera in particular, you know, that they can just modify the pictures cleanly, like make minor pixel-perfect modifications. This doesn't; this assumes, effectively, that you could put something on your roof and it will no longer detect you as a car. Yeah, which is the gist of this thing. What they noticed was that a lot of training data doesn't include objects on the roofs of cars, which, you know, absolutely happens in the real world, just not so much; it's a little bit uncommon, so I can understand that being uncommon in the training data, where it's just not really there. So what they do is they digitally manipulate what the point cloud would look like and see how it gets detected. And I will jump pretty much right to the results, unless you have something to say about it, Specter.
I did want to say, this idea of investigating this route of attacking object detection doesn't seem to be new; they do point out some previous work that was done on it. They point out that a lot of that previous work was theoretical. The attack was using adversarial point clouds like you were talking about, but that was only being considered for, like, a few specific frames, not, like, universally for any scene, and the data set was also, like, very small. So what they're doing isn't entirely new, it has been looked into before, but it seems to be more universally applicable than some of the other papers that have covered it. But yeah, you know,
yeah, I have to imagine a lot of these systems, like, they don't have the model that Tesla uses, for example. They're basing this just off of a more generically trained model, and so it is worth pointing out: just because they find these issues doesn't mean, like, oh, well, the next Tesla you buy is necessarily going to do the same thing. That's, yeah, worth mentioning. But getting just right down to it, basically they mocked up different objects that could be on your roof. Well, they started by just mocking them up to see what it would get classified as, and it actually did a reasonable job at detecting the vehicle despite the fact that there is also a couch on the roof, or a canoe on the roof, which, you could reasonably expect that
I don't know about a couch, but a canoe, yeah.
Well, sorry, I was going to say, you could reasonably expect that it should be able to handle it. Not that you could necessarily expect to see it, just that it should be able to kind of handle that. So then, with a little bit of adversarial changes on it, still things that you could do, but it looks like, you know, the canoe was a little bit malformed or whatever, they found a much greater degree of success, like with their adversarial couch: the car would be detected as a couch 68% of the time. So there you are, just going down the freeway in your couch. And the cabinet was 54.1% of the time. So yeah, I mean, it's a straightforward attack here, similar to other things we've talked about. I just love the idea of, you know, your self-driving car thinking the thing next to you was a cabinet and not a car.
I think it's kind of neat, because a lot of the adversarial attacks we cover are done intentionally, but this is kind of showing you could do it intentionally, but it's also possible that just very benign things that people do, like putting a canoe on top of their car, which I see all the time in the summer, could end up fooling these systems. Something, you know, that's not necessarily intended to fool those detectors, but it could end up
doing it. Yeah, it could. Obviously there were results that were lower, like the canoe in particular was 26.9%, but the adversarial one was definitely a lot higher. Yeah, it's still there.
Yeah, and the adversarial ones, they're kind of, like, they may be possible. It probably wouldn't physically work out if you tried to do it, and you'd probably get pulled over for an unsafe load with some of these images they have. But, you know, it's kind of interesting, because, like you said, a lot of the ones we've covered have dealt with, like, direct pixel modifications, or putting something over stop signs or something like that. It doesn't really deal with... I do like the real-world ones,
well, like, I
mean, I think this one's different from those.
Yeah, no, well, I just like the fact that it is something that could, in theory, I have to say in theory, be done for real, be done in practice. It is different from a lot of those attacks. Like, I'm not a big fan of those attacks that do require direct access to the data itself being processed, so that's kind of where I think this one's kind of different. Yes, in this paper they do directly manipulate that data; I just think they do it in a fair way, you know, they're trying to simulate what you could really do, so I think that's fair. Like I said, you might get pulled over for an unsafe load, but at the same time, you know, you're probably not doing this and just driving down the freeway for hours. And I mean, I've never heard about any sort of attack like this being done in the wild, and I just mean in general against AI self-driving cars, this style. That isn't to say in the future we won't see it, but at this point I don't know of it actually happening.
You've got a regiment, like a full team of people who just drive around with cars with tilted couches on the top, trying to cause accidents on the freeway. Yep, there you go, new malware campaign targeting cars. But yeah, no, it's definitely more of a fun paper. It does have some potential real-world implications, but it's more just, you know, fun in concept. The actual attack, like the math behind it and stuff, isn't too interesting or too important, I don't think. I think it's more just what you were saying, with, you know, the potential of real-world objects causing that. Yeah.
Like you said, it's a fun attack, fun issue. Yeah,
so we'll segue into some shout-outs. We have a shout-out for another Microsoft breakdown, and it lays out an attack matrix for hitting Kubernetes container systems.
Yeah, and I mean, we're not going to cover it. They do, like, a MITRE ATT&CK-style breakdown of it. It's just a good reference,
for the table of like a bunch of different attacker goals that you would go for. So yeah.
Well, so that's the attack matrix, the MITRE ATT&CK matrix.
Yeah. It's kind of your goals and
different attacks there. Yeah, it's just, we talked about another breakdown, I think, when we were talking about CFI, or was it the ROP one? Am I supposed to
know? It was episode 32, and we were talking about memory tagging.
Yeah, that's what it was. Well, yeah, and they just kind of broke down some of the attacks there. So I saw this and I was like, there it is, another little breakdown. I'll say this one is specifically a breakdown, whereas the other one, the breakdown was kind of just a tangent to the actual topic of the memory tagging.
Yeah, the other was more of an
analysis. Yeah, so, I don't know, I saw this though, and figured some of you might find it somewhat interesting to give a read over, or to use it as a reference on a
pen test. Yeah. Yeah, I like Microsoft's breakdowns. I like how they have, like, the general overview that's really easy to understand up front, and then they have those breakdowns further on if you want more information on any specific issue. I like that style of breakdown; Microsoft does a good job with those.
So yeah, I think they did a good job here. They don't go, like, too far into detail with anything. It is very much an overview of different attack methods, different things you can be looking for. Not super specific most of the time,
not a guide.
Yeah, but yeah, I agree. I like it. That's it, it's a
shout-out. You also had another shout-out with Project Zero, a Project Zero post.
Yeah. I mean, we often end up talking about them. This one I didn't feel was quite worth actually discussing on the episode, but it's another post from Maddie Stone, whose work we've talked about before; I want to say we talked about it with the WhatsApp exploits. But, um, yeah, she put out another post there worth a read. It's an older vulnerability, that's why we're not going to cover it in depth, but if you don't already follow Project Zero, they have a new post
out. Yeah, there you go. I have a shout-out. I saw this being mentioned: there was a Phrack paper for the first time in a little while. I think the last Phrack paper was in February, maybe.
Yeah. Yes, I mean, so it's part of their paper feed. Although, I did notice, actually, when I went to the Phrack homepage to go pull this up, they do have a call for papers that, as far as I know, is somewhat recent. And I don't think Phrack has dropped an actual zine since 2016, I want to say, was the
last one. Yeah, it's been a while.
So yeah, they are looking for papers. I'm not sure how long ago they put up the CFP, or, maybe I should have looked at that,
but we'll go to the Wayback Machine, get a timestamp on it. But, um, yeah, this Phrack paper specifically, it talks about bugs that this researcher found in the FreeBSD bhyve hypervisor, which, you know, I don't think I've seen much about. I don't think it's, like, very popular, but, you know, anything with hypervisors and stuff is always fun to look at because of how low-level it is. And one of the bugs is in the VGA emulation subsystem, and it's an out-of-bounds access on palette memory via integer overflow. That memory is used for, like, pixel entries for the DAC address read and write mode registers. And the problem is, they end up using 32-bit indexes when they should have been 8-bit indexes, because only values up to 256 are expected, and the wraparound is expected on that too. But because they use ints instead of chars or uint8s, you could actually end up reading or writing out of bounds. And because you get those two, both read and write, those two primitives together are very powerful, because you can use that to both leak and corrupt heap data. The other bug seemed to be unrelated; that was a bug in the firmware configuration device, which allows the guest to retrieve host-provided information, like the virtual CPU count, for example. And it's kind of similar to the first issue: it's a signedness issue in one of the request structures. So something that should have been unsigned is treated as signed, and because of that there's a discrepancy between the allocation and the copy length, and you end up getting an overflow. This article goes through all the technical details of both vulnerabilities, and how the first one can get code execution. The second one, they don't go as far as code execution, they get arbitrary read/write, but, you know, like I was saying earlier, arbitrary read/write is super powerful; you can get code execution from that if you take it further. Yeah,
one thing I did find interesting with this one is they do also talk about needing to bypass or deal with CFI when doing ROP there. Well, obviously CFI isn't blocking all ROP, but they do talk about CFI, you know, ending up pushing them toward how they had to do it. It was just an interesting little reminder there that, yeah, CFI is coming; that said, yeah, there are ways around it. We've had a little bit, I won't say doom and gloom, because we've talked about bypassing it a few times, but, you know, it'll be like ASLR and DEP: it'll stop some exploits, but naturally people will find ways around it. But yeah, I thought it was just interesting, real-world, looking at getting around SafeStack, getting around CFI.
Yeah, so, you know, I really wanted to shout this out just because, when you're talking about exploiting at the hypervisor level, resources are a lot more scarce for that compared to other things, you know, like web exploitation, or just, you know, higher-level binary exploitation. Oh, and the Phrack papers are generally pretty good; their vetting process is awesome. So,
yeah, and I will say the Phrack papers are worth reading. I will mention on that, actually, that CFP has actually been up since, like, 2016. So definitely not
new. Oh, okay, fair enough. And then zi, you had another shout-out with "So You Want to Be a Web Security
Researcher?" Yeah, and this is an older thing, actually; apparently it was updated just last month, but it's from, like, 2018. I just first came across it recently, and I know we have some people who do, like, the bug bounties and more web stuff, and since I just came across it, I thought it was a really good write-up on going about actually getting into the research: not just repeating what everybody else has done, but getting into actually doing some new research. And I think it actually applies more generally. While it talks specifically about web stuff, it has advice like, if you come across a good quality blog post, read the entire archive, it might help you find some forgotten tidbits, and use that older information to build off of, that hunt for forgotten knowledge; no idea is too stupid, things like that. It's a good write-up. Even if you're not necessarily going to do web stuff, I think it's still relevant, or you could still apply a lot of this to just security research in general. Yeah, it's older, but since I just came across it, I figured I'd give it a mention, that's for sure. Well,
so that pretty much, you know, concludes all of our topics, so we'll wrap up the show here. Thanks to everyone who tuned in. It was cool to get anti on for, like, that, you know, 30 minutes that he was on. Hopefully we'll have him on
again. Yeah, it's been, it's