Episode 64 - Industrial Control Fails and a Package disguised in your own supply
This transcript is automatically generated, there will be mistakes
Hello everyone, welcome to another episode of the Day Zero podcast. I'm Specter, with me is zi. In this episode we have some industrial control system fails, some kernel bugs, and a new class of supply chain attacks. Well, kind of a new class, I guess we'll get to that when we get to that topic. I will quickly shout out before we get started that we have a discussion video coming this week where we talk about the future of exploit development and where we think it's going to go, so keep a lookout for that. Also, next week for the podcast there will be a VOD going out, but it will not be streamed live. That's just going to be that week, it's like a special circumstance. So it's not like we're not going to be doing podcasts live anymore, it's just for next week, but I wanted to quickly mention that off the top. With that said though, we can jump into some news topics. So we'll open up with one of the bigger stories, probably the biggest if you're purely going off of coverage in mainstream media, which was the Florida water treatment facility hack. So this is one of two, or maybe three, I can't even remember, we might have three industrial control system topics that we have today. So yeah, someone managed to breach an internal ICS panel through TeamViewer, which is just, wow, amazing. And they then proceeded to increase the amount of sodium hydroxide, or lye, being distributed in the water supply, which would have effectively poisoned the water supply. It's usually pumped in there to control the acidity of the water, but it's a basic solution, and it's a very strong base, so adding more than necessary starts to increase the pH of that water. So yeah, this seemed to be a failure on multiple levels. Luckily, it did have a happy ending: the plant employee was vigilant, they sounded the alarm, and they were able to get everything reversed pretty quickly, but
I'm not sure if I'd necessarily say it's a failure on multiple levels. It does sound like it would not have been physically possible for the plant to actually do this. The setting was changed, but it was going to take time for it to take effect even had nobody noticed it, and of course we already know that one of the people at the plant did notice the change in the chemical level and warned about that. It sounds like things kind of worked how they should. Obviously the TeamViewer aspect is a failure, I'm not going to argue against that, but in terms of there having been a breach, they detected it, they responded to it. I don't really think it's as much of an issue on that level. I just think the failure is kind of the TeamViewer, which I guess can also come down to the fact that a lot of these are small water plants, and I don't know how big this one is, actually. I should have looked into that. What city was this? Nonetheless, I'm just going to kind of give them the benefit of the doubt. So
I did see them say that even if nobody noticed, there would have been systems in place to prevent the water from actually being, you know, poisoned effectively, which I do kind of question, because a lot of other sources that I saw that covered this story did not mention that. So I didn't know if it was like damage control or something. It's totally possible that they do have those automated systems in place, but it's funny that the only place that said that, I think it was the mayor that said that, but yeah, I didn't see that being relayed anywhere else. So I don't know if that was just a fact that got missed or what, but um,
well, yeah, I think it's pretty easy for people to look at this one and go for the obvious aspect, which is a water treatment facility was compromised and a very real attempt was made to harm or kill people. I mean, that is the newsworthy thing here. The side note, the fact that the plant may or may not have been able to actually do it, is not as newsworthy as just talking about the fact that there was that attempt to do it, even if it was impossible. Yeah, I mean, it is possible that that is something they've just said to kind of downplay the incident. But it does seem fairly reasonable to me that things like the chemical levels are being monitored already. You would expect that to be there. So I would more or less expect the water treatment plants to already be watching for dangerous levels of anything to be
present. Yeah, I mean there's definitely tight regulations when it comes to water plants. There was actually somebody in Discord, and this is kind of a throwaway thing, I'm not totally sure if it's actually the case, but apparently if you work at a water treatment plant and you intentionally do something malicious, it's like 20 years in prison minimum or something like that, which makes sense. You're talking about a water supply that's going to be used by millions of people. Yeah, I mean, these treatment facilities are usually taken pretty seriously, which is why it's such a big deal that one's been hit. As far as I'm aware, this is the first publicly known attack on a civilian facility on US soil, and it's kind of one of those things where people have been worried about this kind of attack for a long time. You know, Stuxnet was kind of that wake-up call that industrial control systems could totally be attacked; even though they're in the real world, they can be attacked from cyberspace. But that was a nuclear plant in Iran. This is the first attack I can think of on something like a water treatment facility. Maybe it's not the first attack, or attempted attack, that's ever happened on one, but it's the first that has really seen major coverage and has happened in the US, at least. So yeah, this had a lot of the big media machines buzzing, and for good reason, so I figured we'd bring it up, even though the story itself isn't too interesting. It was mainly just a TeamViewer instance that was sitting open for some reason, which seems really strange to me. I don't know why a system that was controlling the chemical levels of water distribution was open to the internet or something via TeamViewer. That's weird.
So it does sound like a lot of these treatment facilities, again in the smaller areas, and I'm making an assumption here because I didn't look at the size of this area, but it does seem like a lot of them are just kind of a one-man operation in terms of the IT stuff. So I can understand somebody being, I need remote access to this, I want to see the desktop, oh, I know about TeamViewer, and just immediately going to use TeamViewer for that.
Even though it's not something you should be doing. Yeah.
Yeah, like it's not ideal, but I could see just a standalone IT guy being like, that's the tool they know, so that's what they go and use, not really thinking much of it, because in theory, you know, TeamViewer should be reasonably secure. In practice, that's definitely a questionable assumption, but in theory you shouldn't have that much of an issue. I mean, I think TeamViewer does have fairly weak passwords by default. I'd hope that was changed, but I have no idea on that front. The fact that it's internet connectable, I'd be more interested in what was on the system, like exactly what happened there. If it was that this system then had access to another, like had access to an app that was then used to do it, I could understand, if it's kind of a step removed, why somebody might not be thinking about that
aspect. Yeah, and I didn't really see anything in the article that suggested there was a lot of lateral movement there, but you're right, it totally could be something like that. There's not a ton of information around how the attack actually happened, or at least not that I could find. So
no, I mean, all they really know, well, I don't know what they know at this point, but at least at that point it was just that TeamViewer was used to open the application and go from there. Yeah,
so, pretty big story. I think there might be more stuff to come out of it because there is an active investigation around it. So it's possible we see more, possibly we don't and it's just kept internal. But yeah, definitely wanted to bring it up. So we can move into a more fun topic, I guess, which would be footfall, or actually, we'll save that for the next one, we'll go to beg bounties first. So this post talks about beg bounties, which are described as people who are hoping for a payout by reporting a bug, whether that bug be a real bug or a bogus issue, to a company
in the end. What's happening is something we've kind of talked about before, I just hadn't heard an actual term being put on it. I actually kind of like that term, beg bounty.
I kind of like the term.
Yeah, I think it hits the point where people are basically going around with some very easily discovered vulnerabilities. In the example here they give, or at least the first one, the researcher emails them, you know, "Hello team, I'm a security researcher and I found this vulnerability in your website. The vulnerability is that there's no DMARC record found." Which is not a vulnerability in the website, and doesn't really impact the website itself. It's, you know,
email. Yeah, so basically the focus here was on companies that weren't even running a bug bounty program at all, so small businesses kind of thing, and the types of samples that were posted in the post ranged from hinting at how a bug bounty would be nice, to full-out trying to extort a payment out of them. I don't really know if I agree with "extort" entirely in some of the contexts it's used, but I imagine it definitely happens,
like, the extortion thing. Look, I don't know. This is a big pet peeve of mine, with security researchers who go and find vulnerabilities in software, go find them in applications that don't have a bug bounty, and then either demand to get paid or make a big fuss on social media when they realize they're not going to get paid. Now, I'm all for companies having a bug bounty. I think that's part of the maturing of the security process and just kind of stepping up your security in general; a bug bounty is part of that, but it is kind of a later stage. It's once you've already dealt with all of the low-hanging fruit and all that, you've got your regular assessments being done, then you hand it over to crowdsourcing for that extra testing in addition to everything else that you
are doing. So it's just a pet peeve of mine when researchers go and expect to get paid for things because, you know, all these other sites are paying for it. It's like, let's not even look at what this company cares about or wants you to look at, and just do whatever and demand
payment. Yeah, I mean, the unfortunate reality is a lot of these smaller businesses do not have the budget to facilitate those types of programs.
Yes, and they also just don't have the maturity in their security team to support it
either. Exactly. Yeah, it's resources; you're basically expecting them to have resources that they don't have in many of these cases. So yeah, this blog post author decided to share some of the things that he's been sent from smaller businesses trying to reach out, as well as samples they received from Cobalt, which is a cybersecurity firm that markets to some of those smaller businesses. They found each message that was sent to the site owners from the person trying to get a payout was sent to an openly available email address on the site, which is probably a given, and it contained a combination of three things: low-hanging fruit bugs that were found through automated scanning, misconfigurations or non-bugs, and, like the issue you pointed out earlier zi with the emails, copying and pasting of that report into an email template and then asking where to send the information to receive a reward. He posted three examples in increasing order of potential validity of the actual issue. The first one was a vague report about "an error about login and other negligible issues in your site", asking "if you will pay me as an appreciation reward". This was kind of funny because it's completely useless as a report, it doesn't tell you anything, and it's very direct about wanting a payout. So it's like, please pay me for this very vague information that is useful to nobody.
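For context on the DMARC "finding" mentioned earlier: DMARC is just a DNS TXT record published at `_dmarc.<domain>` that tells receiving mail servers what to do with mail failing SPF/DKIM checks, which is why its absence isn't a website vulnerability. A rough sketch of what's inside such a record (parsing only; a real check would do the DNS query, and the domain and record below are invented for illustration):

```python
# Minimal sketch of what a DMARC "finding" actually is: a DNS TXT record
# published at _dmarc.<domain>. A real check would query DNS (for example
# with dnspython); here we only parse a record string to show its contents.
# The example record and domain are made up.

def parse_dmarc(record: str) -> dict:
    """Split a DMARC TXT record like 'v=DMARC1; p=reject' into tag/value pairs."""
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if "=" in part:
            key, _, value = part.partition("=")
            tags[key.strip()] = value.strip()
    return tags

record = "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com"
policy = parse_dmarc(record)
print(policy["p"])  # the policy applied to mail failing SPF/DKIM -> reject
```

The `p` tag here is the published policy (`none`, `quarantine`, or `reject`); scanners flag domains where the record, or a strict policy, is missing.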
So that is asking for a payout before you've even given them enough information. Yes, I mean, some people are kind of strapped for money, I get that at times, but it's just the wrong way to go about it. I mean, security is one of those areas where you need to establish some trust, and it's a lot easier to have that trust there if you're working as part of a bug bounty. But if you're just approaching as an individual researcher without any bounty being there, you need to give them some reason to actually trust you in the first place, and that's just kind of lacking with a lot of these. In addition to the fact that some of the vulnerabilities being reported are just extremely trivial issues that they are likely already aware of and have made the decision not to deal with, which is kind of another discussion to be had over, you know, what do you report to a bug bounty. A lot of times, I've definitely talked on here about some issues I've reported that maybe wouldn't be bug bounty worthy, but when I'm working, when I'm being hired by another company to assess their security, you know, I want to give them as much information as possible to make sure they're aware, and then they can decide how they go from there. I'm just providing them the information for them to make an informed decision, whereas with bug bounties there isn't as much trust and
expectation in it. Yeah, I mean, if they have an official bounty set up on HackerOne or whatever, there is the what's-in-scope page and what they're looking for, if they set that up. But that's the problem, right? If you're not running a bug bounty and people are just contacting you through email, it's a lot easier for that spam to come up, because there are no official guidelines for what you're supposed to even be looking at. So yeah, it's just another thing that makes running a bug bounty difficult: that triaging part of
it. And I guess I should maybe clarify what I'm talking about, kind of the reporting of the issues and asking for money: if you're doing research on something that doesn't have a bug bounty, I'm not saying you shouldn't report any of the issues, it's just you shouldn't expect payment for it. I think that's kind of a given, but I want to be explicit about that, that you should still report vulnerabilities, just don't expect a payment. That said, we've been getting a lot better, a lot more companies run these bounties, but I still would advise not expecting it if there isn't an actual bounty in place. Don't invest your time somewhere that isn't going to be paying for it.
Yeah, that's fair. I think where I come down on it, I may be a little bit less against it than you are. I think it's okay if you hint that you'd like a reward, like asking for it at the end, after you've provided information that's actually useful, and not "I have an error about your login
page". But that's the other thing, like, providing information that's actually useful also means kind of understanding the actual impact of a vulnerability. Like the DMARC record, I wouldn't really report and expect payment for. In fairness, it doesn't look like they asked for payment on that one, they just say they found this issue. But I mean, when it's somebody who's literally copying and pasting, you know, the vulnerability report out of Nessus or something, or, you know, a Burp report or whatever, at that level I don't feel like you should be going around to any program that doesn't have a bounty, because the issues that you're likely reporting are not actually impactful. They could be false positives, or things that they're already aware of. I don't know, if you're an actual researcher doing it, it makes a little bit more sense, but not when they're just copying and pasting reports from, you know, whatever other
software. A generic Python vulnerability scanner from GitHub.
Well, that might be finding something
unique, you never know. But yeah, if you want a laugh, you can go ahead and click on the link to go to the post for all the examples. I don't think we're going to go through all of them, but there are some funny ones in there too. But basically, the author ends off the post with some advice for recipients of these messages, saying you just shouldn't engage with them, period. If they cause a social media stink, like you were saying earlier, some of them might do that, I mean, whatever, you know,
like, that's not really a "whatever" though, that can be huge damage to a company.
I mean, if it's a real issue, then I'd agree. If it's one of these bullshit issues, then I don't know if I necessarily agree, because when that information goes out there, the right people will see it. Maybe this is too optimistic a take, but I think the right people will see it and point out that, hey, this is not actually an issue, and it'll just kind of fizzle out. I mean, I have seen that happen on Twitter before.
Yeah, I've seen things that don't gain traction on Twitter, whether or not it's actually because the issue just wasn't legitimate. I do think there are a lot of people who don't know what an issue is, like how to actually judge an issue, they don't know security, but they recognize the application or something. But that's a whole other discussion, I
suppose. Yeah, I mean, when you get to that extortion-type aspect it gets a lot more muddy. But yeah, I think in general, when one of those nonsense issues comes out, I think the community is kind of decent when it comes to responding to those and quashing those people. Yeah. So let's move into something that has a little bit more drama to it, off the back of that. We have a story about FootfallCam, which, I'm not going to lie, for a couple of days when I was reading notes and looking into this story, I always read it as "football cam". When I saw it I was like, oh, that's an F, not a B. So yeah, FootfallCam, which is basically a person counter. It seems like it's a camera that you put up in a doorway or something and it counts how many people pass by, is how I understood it. It uses a Raspberry Pi internally, and it has some terrible security, and it had an even more terrible response.
So that's part of, you know, why bug bounties have become a thing, because this sort of response, the response with legal threats and stuff, is uncommon now, but was common if we go back a decade. So I think that's one of the positive changes in the industry, that it has become a lot more acceptable to do this sort of testing. But yeah, they have several pretty serious vulnerabilities, mostly just things like exposed secrets, just bad code, lots of issues that aren't entirely relevant here. And I will actually include a secondary link, because the same company that makes this product also makes a nursery cam product, and the nursery cam product, as you might guess, also has plenty of vulnerabilities
in it. Yeah, so the issues are not really interesting at all. Like you said, it was kind of those secrets being exposed. There's a bunch of compiled and uncompiled Python files in the web server directory behind no authentication, it sets up a network with a default password, the admin password can't be changed, just really silly
issues. And setting up the network, it's broadcasting its own Wi-Fi network with a password that you can't change. Yeah, and people might run this inside their corporate network, and you'll be exposing their entire network; you know, anybody can just log in on this device. Yeah.
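For illustration, finding unauthenticated files on a device like this mostly boils down to requesting candidate paths and seeing what comes back. This is a hedged sketch, not the researchers' actual tooling; the IP address and file names are hypothetical:

```python
# Hedged sketch of checking a device for unauthenticated files on its
# embedded web server. The base URL and file names below are hypothetical;
# the FootfallCam write-up lists the actual paths that were exposed.
from urllib.parse import urljoin
import urllib.request

def candidate_urls(base: str, names: list) -> list:
    """Build full URLs to probe from a base URL and a list of file names."""
    return [urljoin(base, name) for name in names]

def probe(url: str) -> bool:
    """Return True if the server hands the file back without any auth."""
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status == 200
    except OSError:
        return False

# Hypothetical device address and the kinds of files mentioned in the story.
urls = candidate_urls("http://192.168.1.50/", ["app.py", "app.pyc", "config.py"])
# for url in urls:
#     print(url, probe(url))  # not run here: needs a real device on the network
```

Seeing source and bytecode files served this way is how the exposed secrets come out, since credentials baked into the code are then readable by anyone on the network.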
Yeah, so pretty low-hanging fruit issues, but where the story gets more interesting is when it comes to that response angle, which was: OverSoft reported these issues to FootfallCam, and FootfallCam had reached back out requesting security consulting, which, you'd probably think, well, I thought you said there was a terrible response. Once OverSoft quoted their rate, which was 125 pounds an hour, about 3,025 pounds with that, er, yeah, sorry, euros. So yeah, after they'd responded with those rates, FootfallCam basically then used sockpuppet accounts to accuse that researcher of extortion because of the rates, saying that 125 euros an hour is actually super unreasonable, and that's like however much a year, like 325 thousand a year, whatever,
250, er, 250,000 a year. And yeah, I mean, just to be clear, that's pretty fair game, especially when we're talking about a contracting rate, right? He mentions your annual salary would be, you know, 250,000 euros, but as a contractor you usually charge about 3x what the salary would be. So that's actually a pretty low rate, at least if we're comparing with rates within the US,
which, I was gonna say, has pretty high rates. Those are all very reasonable rates for a consultancy like that.
Yeah, actually, I'd say it's probably on the lower end. Yeah, nonetheless, they then followed with a fraud report to the police in response to it, and obviously all these sockpuppet accounts, oh, "say no to cyber
extortion." Yeah, so this story prompted the other story that you mentioned earlier, which Cybergibbons had: another researcher noticed they had another product called the NurseryCam, for allowing camera monitoring of children in nurseries, and it had some of the same issues, like no encryption, no TLS used on this
one. So with the NurseryCam, one of the things that they did is, you log into the application and it gives a JSON response containing all of the IPs and passwords to all of their DVRs, for all the nursery cams they're running, and then your device can directly log into them all. So you get access to, well, the UI might hide some things, but you could just select all the nursery cams, although all the passwords are the same anyhow, so it doesn't matter that they publish the passwords over it anyhow. It's just completely the wrong architecture to be doing any of this with.
Yeah, to say that this was designed without security in mind would be an understatement, I think. Yeah, basically a lot of meme issues here, and somebody in chat pointed out the Bruno Mars MP3. Yeah, I mean, the user, or the home directory on the Raspberry Pi, contained things that they assume came from the developer's machine that they just uploaded to it, in a production environment, which is really weird. Like, it had the Bruno Mars song MP3, it had a bunch of unused Python scripts, there's a lot of weirdness, and things you wouldn't expect to see in a production system. In
addition to the Bruno Mars thing, they actually, so The Register tried to get in contact with them, and I'll bring that link up. When that was happening, they actually mentioned that they contacted, or they informed, the copyright holder about that Bruno Mars MP3. Are you really telling me that they paid the licensing fee to distribute the Bruno Mars song with their product? It's just stupid. I mean, this is clearly somebody who's trying to lie to cover themselves. There's no way that they would have contacted the copyright holder and had a license for it, because I'm not even sure you can get a license to just distribute the MP3. Usually you've got a license, like you've got a sync license to, you know, play the song in addition to something else, or for covers you've got other, you've got a ton of different licenses. I don't know what standard license would be offered by, erm, I forget who has the rights for Bruno Mars, but, so I was actually trying to look to see what the cost of a license would be. I couldn't find a license that they were offering that would even match just distributing the MP3, so clearly I don't think that's an accurate statement, but that is what they claimed.
Yeah, but overall, just "awesome" security all around, and an even more "awesome" response. This is funny because it's the flip side of the last issue we were talking about, where we were kind of saying, if you find an issue and the company isn't running a bug bounty program or any vulnerability reward program, still submit the issues, just don't expect the payout. And this is kind of looking at the other side of the fence: this is a model example of how you should not handle reports that are sent to you, because these were absolutely fair issues to point out. And yeah, I mean, you've always said it on the podcast, how you respond is what matters more than anything else, and this is not the way to do that. But yeah, with that said, we'll move into some of our exploits of the day. We have two info-leak based issues in Telegram, which lead to a compromise of privacy in Telegram for macOS when sending a recorded audio or video message in a normal chat. So for anyone who's used Telegram, or for anyone who hasn't used Telegram rather, you can send text messages, but it also has some rudimentary support for recording audio messages and video messages, stuff like that. It's not quite as full-featured as something like Discord, for example, but it's there. So when you send those recorded audio or video messages, the sandbox path gets leaked for where that recording is stored. Now, that doesn't happen in secret chats, just in normal chats. That's another thing Telegram supports, this concept of secret chats, where it's encrypted and you can have burn messages set up and stuff like that, if you go through those
chats. Now, am I mistaken? I understood this vulnerability to also be impacting the self-destructing messages, so those messages that, after like 15 seconds or whatever after you've viewed them, should be deleted, and they'll be removed from the UI, but the file remains. That was my understanding of this one. That's where I thought it was the more serious issue, because just leaking the, not the URL, the file path for an MP4 that a user sent isn't really an issue in my
opinion. Yeah, so like you said, that is the main issue, I just hadn't gotten that far yet, sorry. Basically, that sandbox path doesn't get leaked in secret chats, but the sandbox path is the same between normal chats and secret chats. So any of the data that gets received in a secret chat is stored there, and like zi said, it's not deleted after that self-destruct fires. For those self-destruct messages, their intent is pretty obvious: after you've read them, or after a certain amount of time, they just delete, but they only delete from the UI side. On top of that, it also stores the local passcode for unlocking Telegram in plain text as well. So those are the two issues: the local passcode being stored in the state as plain text, and any files received in secret chats can be retrieved from the file system even after the message is deleted. Which, I'm curious how they handled this issue, because even if they now fix it by deleting the message from the file system once the message is deleted, what's to stop you from just racing the expire timer to copy the file to a secure location before the burn timer expires?
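The race being described is simple to picture: as long as the received file sits on disk before the burn timer fires, a local script can mirror the directory and keep a copy. A minimal sketch, assuming a hypothetical sandbox path rather than Telegram's real layout:

```python
# Sketch of the race discussed above: if "burn" only deletes from the UI, or
# deletes the file on disk only when the timer fires, a local script can copy
# everything out of the sandbox directory first. The directory names here are
# hypothetical, not Telegram's actual sandbox layout.
import shutil
from pathlib import Path

def mirror_once(src_dir: Path, dst_dir: Path) -> list:
    """Copy every file currently in src_dir into dst_dir; return the copies."""
    dst_dir.mkdir(parents=True, exist_ok=True)
    copies = []
    for f in sorted(src_dir.iterdir()):
        if f.is_file():
            target = dst_dir / f.name
            shutil.copy2(f, target)
            copies.append(target)
    return copies

# An attacker would run this in a loop, winning the race against any burn
# timer, since the file exists on disk the moment the message is received.
```

This is why UI-only deletion is a fundamental failure: the copy happens before any timer, so only never writing the plaintext file, or deleting it the instant it is viewed, narrows the window at all.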
I'm not sure how much of an issue being able to copy the file would be. As Specter was just talking about, the race, I'm not sure if that's an issue that Telegram is necessarily going to worry about. Let's say you're able to do that; you're also able to do screen recordings. There are a lot of things that you can do that leak that file. It's not going to be perfect privacy, because somebody has to view it at some point. I feel like deleting it at the UI and deleting the file would be kind of a sufficient level; anything you can do before they delete it is kind of fair game. But I'd like it if they took more steps towards making sure people can't just copy the file out, and I think on mobile applications this is going to be in that /data/data folder, and when it's there you need root to read it anyhow. So I just get the feeling, if they're deleting it, that feels somewhat sufficient. Maybe some crypto on it, so you need the passcode, but that's just adding layers of defense on top of it, I guess, at that point. There's no ultimate protection here; you could always just have a separate camera and take a video of whatever message you get
through. Yeah, it's kind of interesting, because when you get to mobile applications that support similar features, like Snapchat, they have some mitigations in place for that kind of thing. Like, it'll let the other party know if you took a screenshot, for example, and while you can bypass that if you have a jailbreak or a rooted device or whatever, it also tries to detect and prevent you from using the app when you have a jailbreak. Whereas on desktop, you don't really have that level of, I guess, DRM, for lack of a better word. There's not as many protections against people performing those types of self attacks, I guess, where they can run a script that can just pull everything from the secret chats. So you have to have that level of trust that the other party isn't going to do that, or that if they are going to do that, you don't really care if they do, in which case you probably wouldn't be using a secret chat anyway, but, um,
yeah, I mean, I kind of feel like the whole secret chat thing is a bit of a false impression of security, because any sort of determined attacker is going to be able to get that information, or is going to be able to keep that information. I do believe that deleting the original file is kind of a minimum, for sure. Like, that is a fail, absolutely. How far you go in terms of defending comes down to more of a personal opinion, because you can always take steps; there are always more layers you can add to make it more and more difficult. I wasn't aware of what Snapchat was doing, as you were just mentioning, so I mean, that's something, I'm not sure if Telegram does that on mobile or not,
but I do believe so
but I mean it just kind of comes down to like there's always more that you can do.
So that is true. What I will say with Telegram, though, is it is kind of the laziest one out there. Telegram is one of those applications where they like to try to pretend they're all about privacy, that that is basically the entire motivation behind their app, and security, but they don't take as many of those steps as most other platforms like Signal and whatnot do, which is why I think a lot of people try to move people away from using Telegram for privacy. Like, it's fine if you use Telegram, but just don't use it expecting the best privacy, because really, the features are kind of hacked in there with Telegram, like, half-assed. The secret chats are somewhat secure, but the burn wasn't implemented properly, like it wouldn't even burn the files, like we're talking about in this issue. Those secret chats aren't even supported on all clients, it's only one-on-one, you can't have group encrypted communication, your phone number's easily exposed to contacts. So if you're planning on using a privacy-focused messenger, I would just say Telegram probably isn't what you should be using, generally speaking. I did kind of cut you off there, so did you ever finish your
thought? No, not really. I guess I will mention, as cogs in chat said, that you can save a file or video only in memory, which I'd kind of argue against, because you can dump the memory and get it. As they responded in chat, it is harder to get the file that way, but at the same time that's kind of my point: you keep adding these layers of defense on something like this, you can keep adding layers, but ultimately there is a way around them. Exactly where you should stop is kind of up to the application. I'm not saying they shouldn't go any further; like, if they delete it, that's fine. But the fact they weren't even deleting it, when deleting it is the minimum thing, means this is a very fundamental failure. Yeah.
So our next post is about supply chain attacks and how someone used confusion with dependency package names to install "malicious" packages. I'm using air quotes there because they weren't really malicious; they were just used to notify him when the code was run, but they could have been malicious from someone that was really trying to attack these companies. He was able to use this type of attack to hit PayPal initially, which prompted him to check other companies, and he found it also affected Shopify, Apple, Microsoft Azure, Netflix, and a bunch of other companies, and probably a bunch of companies that weren't even tested would be vulnerable to the same type of thing. It's funny because while this attack is very impactful, obviously, being able to hit those types of companies, it's also very trivial in how it works. In essence, it's just about abusing name collisions between private and public package repositories to get malicious packages uploaded onto public repositories and installed over top of the private ones. This attack works on npm, PyPI, and RubyGems. I'm not entirely sure how it works on npm, but on RubyGems and PyPI at least, if a package of the same name exists on a private and a public repository, it'll install whichever one has the higher version number, which really isn't a barrier at all because the attacker can control the version number of their malicious version of the package on the public repository.
One key thing to point out, though, with PyPI: it did require that they were using a specific argument, --extra-index-url, to add their, well, extra index to the search. If they were using just --index-url, it would entirely replace the public half, which would basically prevent the vulnerability by not having the public half of it at all. It's because they did use --extra-index-url that it would search all of the available index URLs.
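The resolution behavior zi just described can be sketched in a few lines. This is a toy model of the logic, not pip's actual implementation, and the package name and version numbers here are invented for illustration:

```python
# Toy model of dependency confusion via pip's --extra-index-url behavior.
# Not pip's real code; the package name and versions are made up.

def resolve(name, indexes):
    """Collect every candidate version for `name` across all configured
    indexes and pick the highest -- roughly what happens when
    --extra-index-url puts a public index alongside the private one."""
    candidates = []
    for index in indexes:
        candidates.extend(index.get(name, []))
    return max(candidates)

private_index = {"acme-internal-utils": [(1, 4, 0)]}   # the real package
public_index = {"acme-internal-utils": [(99, 0, 0)]}   # attacker upload

# With both indexes in play, the attacker's inflated version wins:
winner = resolve("acme-internal-utils", [private_index, public_index])
print(winner)  # (99, 0, 0)

# With only the private index (i.e. --index-url), the private package
# is the only candidate and the confusion never happens:
print(resolve("acme-internal-utils", [private_index]))  # (1, 4, 0)
```

The design point is that the attacker never needs to beat the private index on anything but the version number, which they fully control.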
Yeah, so there were those issues on RubyGems and PyPI. For npm he didn't really go through how it works, but it seems the public package takes priority over the private ones. I wasn't quite sure on that point. I didn't know if you maybe had better clarification on that, zi.
I don't have better clarification. What I can say, though, is that npm, I believe, is a bit more difficult to actually run a private repository on. There are a few software-as-a-service offerings or just software packages that do provide a private npm, I believe, but I don't think there's anything as easy as just the --extra-index-url you have with PyPI. Okay, I know that doesn't really answer your question. I just think that's part of why we didn't get too much information about it: it can be done in several different ways, basically, and it depends on how somebody decides to implement it. At least, that's my understanding; I've not tried to run a private
npm. Fair enough. In npm's case it actually gets a little bit more impactful too, because with npm, not only can you get an arbitrary package used if a company has a setup like that, you can also run code just on the install phase of the package through the preinstall scripts. I'm not sure if RubyGems has something like that, but I know npm does, because they're used a ton for things like printing out "please send me money" messages and whatnot. So he actually used that to collect information about the machines his packages were installed on. So yeah, using this very trivial attack he was able to supply-chain attack companies like Apple, Microsoft, Tesla, Shopify, Netflix, and others.
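The same install-time hook exists on the Python side, where setup.py runs arbitrary code on install. Here's a hedged sketch of the kind of non-destructive call-home the research packages performed; the field names are my own assumptions, the real payloads reportedly exfiltrated over DNS, and this version just collects the data and prints it:

```python
# Sketch of an install-time "beacon" like the ones in the research
# packages. Nothing is sent anywhere; field names are illustrative.
import platform
import socket
import sys

def collect_beacon():
    # Just enough to identify which organization installed the package,
    # and nothing more -- the researcher's point was proof of execution.
    return {
        "hostname": socket.gethostname(),
        "os": platform.platform(),
        "python": sys.version.split()[0],
    }

info = collect_beacon()
print(sorted(info))  # ['hostname', 'os', 'python']
```

Dropped into a setup.py (or an npm preinstall script), code like this runs the moment the confused dependency is installed, which is why the attack needs no one to ever import the package.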
Shopify, Apple, and PayPal each gave a $30,000 bounty, and Microsoft, for Azure Artifacts, gave a $40,000 bounty. So he made quite a bit from it.
Yeah. Yeah, I mean, that's a hundred and thirty K total payout for these issues, so if that doesn't tell you how impactful this type of attack is. This was just one of those nasty attacks that was lurking beneath the surface, and it's one of those things that makes these massive dependency chains so dangerous. We talked about it fairly recently, I think in the last episode, when we covered the post by Google that went into this: not only are you relying on packages that you might not fully trust, or might not even know the existence of due to these chained dependencies, but you also have the issue that if any of those packages are compromised, you're basically screwed. And that's what this attack facilitates: it facilitates attacking those packages by just having that collision between public and private repositories. So yeah, this is a new attack, but it's also one that people have thought would be an issue for

a while. Yeah, so it's just one of those things where people suspected it would probably happen, or that something like this attack was possible, and then it happened and hit some of the big companies. What's interesting about this is it's probably affecting, well, I don't know if I'd say millions, because I don't know how often private repositories are used, but it could be affecting a lot of companies, definitely more than what's mentioned here, and they just might not know it. So yeah, this is just something that's been lurking beneath the surface that could be taken advantage of. Just an excellent blog post and one of the most impactful attacks I think we've seen in a while.
Up next we have an issue in LibreNMS, which is a Laravel PHP-based solution for network monitoring. According to a Shodan search there are about 3,000 instances of this running publicly. The issue is a second-order SQLi through the passing of raw queries to the Eloquent object-relational mapper, or ORM. That "raw" keyword with raw queries is important, because usually the ORM will handle sanitization for you, but by constructing raw queries you're basically saying, let me have the flexibility to specify what I want, since I want to make a complex query. So in their web panel they have a top devices controller, and an attacker-controlled sort parameter ends up getting passed into it through two methods, which are getTrafficData and getProcessorData.
I mean, the interest here really just comes down to the fact that in the application, every user is able to add these widgets to their own dashboard. One of these widgets, as Specter was just mentioning, is top devices. On that widget you can add your own sort, or you can have your own settings on it. So this isn't a highly privileged settings area; you're just saying, hey, for my widget on my dashboard I want to sort using this field or this direction, which actually is what it's expecting you to do. As you can see with the orderByRaw, where they build that part of the query, they just inject that sort value, which they get directly from the settings, right in there. Because it doesn't validate the settings anywhere, you can enter whatever you
want. Yeah, that settings JSON blob is the pinnacle of what's going on here. The reason it's a second-order attack is that the attacker sends one request to cause the injection, being the creation of that widget, and then when the widget gets rendered it sends a second request which triggers the injection itself.
Yeah, so second order in this case is kind of like a stored SQL injection in a sense, where you store the data and then something else pulls that data out. You're not directly making the call that's vulnerable; something else is using it. That's where the "second order"
comes from. A first order would be where
you have direct control of the value as it's being injected; this is a step removed from that, which is kind of interesting. I will mention that the sort parameter in general is actually a decent place to look for second-order issues like this. Well, not just second-order, sorry; it's a good place to look for these sorts of SQL injections, because often, if you're using parameterised queries directly, it's a little bit more difficult to bind a variable to the sort order or even to the sort column. Usually it's possible with whatever you're using for the parameterised queries, but it's usually done in some separate way. Now, this is less of an issue if you're using something like an ORM, as is the case here, or an Active Record setup; usually those will provide a direct way to bind a variable on the sort, that'll just be part of the framework or library or whatever. But because of the fact that it's always handled in a slightly special way, or done manually, it's a really common place to run into SQL injections, even as parameterised queries are being widely used. So
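The sort-parameter pattern zi describes can be shown with a short sketch. LibreNMS itself is PHP/Eloquent, so this Python version is only illustrative; the table and column names are invented:

```python
# Sketch of the ORDER BY injection pattern and an allow-list fix.
# Illustrative only -- not LibreNMS's actual code or schema.
ALLOWED_COLUMNS = {"hostname", "total_traffic", "cpu_load"}
ALLOWED_DIRECTIONS = {"ASC", "DESC"}

def top_devices_unsafe(sort_setting):
    # Vulnerable: the stored widget setting is spliced straight into the
    # ORDER BY clause -- the one spot bound parameters rarely cover.
    return f"SELECT * FROM devices ORDER BY {sort_setting}"

def top_devices_safe(column, direction):
    # Fix: a sort is only ever a known column plus a direction, so
    # validate against an allow-list instead of trusting the settings blob.
    if column not in ALLOWED_COLUMNS or direction not in ALLOWED_DIRECTIONS:
        raise ValueError("invalid sort setting")
    return f"SELECT * FROM devices ORDER BY {column} {direction}"

payload = "hostname ASC, (SELECT SLEEP(10)) ASC"
print(top_devices_unsafe(payload))          # subquery lands inside the SQL
print(top_devices_safe("cpu_load", "DESC"))
```

The second-order part is just that `payload` sits in the saved widget settings between the two requests; the query construction itself is the same either way.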
yeah. I mean, I was going to say, this gets a bit interesting when it comes to taking advantage of the SQLi, because usually with sort you're just going to be using an ascending or descending value, but because the ORDER BY clause accepts more than one column, you can inject through the extra columns. So to demonstrate that working, they made the payload an ascending sort, then added `(SELECT SLEEP(10) FROM DUAL) ASC` to the query. Now, that payload didn't work if you didn't have permission for any of the devices, due to how devices get filtered into the query, but they used another known trick to get around that: they used the PROCEDURE ANALYSE statement to get it to run. I actually didn't know about that trick; I didn't know if you did, zi, from work you've done in the past, but that one was kind of new to me. But then again, I'm more of a noob when it comes to the web space, so maybe that's just me and it's used more often than I realize. Oh
man. I see, are you talking about like where they do the benchmark? Okay. Yeah, no, I know that one; I think that's even on a number of cheat sheets for SQL injections.
Yeah, fair enough, but I thought it was cool that they pointed out that trick for exploitation. They just used sqlmap, though, which found the injection points mentioned and exploited them to dump the user table. As a bonus, they point out this could be used as a DoS by running an insane sleep or benchmark query, but as pointed out by somebody from chat, you need authentication to be able to exploit this issue. So despite the Shodan searches showing like 3K results, you need another bug or some poorly configured instance to be able to exploit the issue,
and that was one of the things that made us include this one on the podcast this week, just because we don't see these second-order issues too
often. Yeah, it's been a little while since we've covered one, I think. But yeah, this issue was fixed. It was reported on January 6th and fixed on January 6th as well, so a very quick turnaround time there. The fix was shipped on February 2nd, which is why this was able to be disclosed last week. But yeah, like you were saying, you don't always get to see the second-order stuff, so it's nice to cover it when it pops up. Keeping on the topic of hitting web panels, we have another four vulnerabilities, this time in Palo Alto Networks' next-gen firewall, which is used in enterprise environments. This is from PT SWARM. So yeah, like I said, they found four issues. Starting with the most impactful issues: we have a blind OS command injection from an authorized user through the external dynamic list functionality for importing objects from an external web server. In the source field you can just break out of the command using backticks and get it executed, which will replace the contents of that command in the GET request. It's blind because you can't see that GET request after it's been interpreted by the server; you'd need to be able to view logs or something to exfiltrate that data, but you can get command injection, and it's pretty straightforward to do. The second issue is another command injection through a different injection point, this time through the REST API. Again, it requires authentication. The issue was in an internal handler for building operation requests: it took a cmd argument and built it into an XML request to send to the management service, which then took that data and inserted it into a /bin/sh -c command for execution. Which means this feature was actually intended to run commands, but it was intended to be a filtered set of commands; there was some checking and filtering done on the service side. They didn't really
include what that filtering and checking was though.
No, they're kind of vague on that point.
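Vague filter or not, both injections reduce to attacker input reaching a shell command line. PT SWARM doesn't publish the firewall's internal code, so this Python sketch of the back-tick breakout uses invented function names and an assumed curl invocation:

```python
# Sketch of the back-tick breakout and the argv-based fix.
# Function names and the curl call are assumptions, not PAN-OS code.

def fetch_list_unsafe(source_url):
    # Vulnerable: the attacker-controlled source field is embedded in a
    # string handed to /bin/sh -c, so `...` is expanded before curl runs.
    return ["/bin/sh", "-c", f"curl -s {source_url}"]

def fetch_list_safe(source_url):
    # Fix: no shell at all -- the URL is one argv element, never parsed
    # by sh, so backticks are just literal characters in the URL.
    return ["curl", "-s", source_url]

payload = "http://lists.example/`id`"
print(fetch_list_unsafe(payload))  # backticks reach the shell intact
print(fetch_list_safe(payload))
```

The fix is the usual one: build an argv list and never route user data through a shell string, which also sidesteps quoting bugs entirely.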
So they end up mentioning here that they used CDATA. Just putting the actual command in a CDATA section ended up bypassing whatever filter there was, so I'm not sure if the filtering was actually meant to restrict the types of commands being
done. Yeah, they're a little bit vague, but like you mentioned, the CDATA tags get around it, and we've covered using CDATA tags for that kind of evasion before. So those were the two command injections. The third bug was an unauthenticated DoS due to a weird bit of functionality around file uploads: they allow file uploads on this unauthenticated endpoint, and it'll try to delete them and trigger a cleanup if a 400 or 500 error is returned, but when they send a POST request to do it, they get a 200, so the file doesn't actually end up getting cleaned up. So if you just spam the server with files to fill the disk, I think you just run it out of disk space, and that ends up causing issues when you try to log in; they assume it's because PHP couldn't create a session file on disk. So yeah, it's a DoS, but it's an unauthenticated DoS, so it's definitely noteworthy. The final issue was a reflected XSS in the change password page, which can again be reached without authorization. They inject through the PHP_SELF server variable echoed into the page. Now, I was a little bit confused on this, because I was thinking, okay, the $_SERVER variable in PHP, you're not supposed to be able to control that as a user, but zi actually pointed out to me that PHP_SELF is kind of special in the way it works, because it takes the URL that you pass it. So that's where you can get that injection.
It's not too special, actually. Most of what's in $_SERVER is just stuff from the server's environment. So PHP_SELF is the page being requested, but you've also got the query parameters in there, and all of the headers you sent are in there. So that's another thing where an attacker has control: a lot inside of the $_SERVER global variable is attacker controlled, or at least partially attacker controlled. So that's not too uncommon; it's just generated from what's relevant to the server's environment. There are also the $_GET, $_POST, and $_REQUEST global variables, and you've even got a global variable called $GLOBALS which can be used to access any other variable that's been
defined. Yeah, so I guess the reason you don't see this more often is probably because most people don't echo out the $_SERVER variable directly to the page like that. The reason they do it, I think, as they mentioned, is to make sure that the form submits back to the same endpoint it came from by using the server variable to find the path. But yeah, I don't know if I've ever seen that anywhere before, so that was a little bit
strange there. But an easier way to get a form to hit the same URL that it's on is to just not include an action; the default is the same
page. Yeah, so it's a bit of strange code that ended up biting them there. But yeah, the reflected XSS is probably the least interesting issue there; the command injections are the big fish. But yeah, I mean, this was a cool set of attacks. I find it interesting that it hit Palo Alto, because I've definitely seen people who use that software; it's pretty common, from what I know, when you get into enterprise environments and stuff. So yeah, cool to see some issues in that. Getting back to some industrial control systems, we have a Rapid7 post. This was a critical vulnerability in Advantech iView, which is an SNMP-based IoT device management application, which of course, yeah, is used by industrial control systems. It also received an advisory from CISA, that's the Cybersecurity and Infrastructure Security Agency. This iView application runs as SYSTEM, and it exposes the capability to modify the configuration file with apparently no authentication, which is really strange.
Yeah. So with this whole report, I feel like, with how they did this report, they really should have just released the script that did it, because that's all you really have. You don't get any real information about the vulnerability or what happened. I mean, they walk through it a little bit, but you have to kind of look at what all the page actions are in order to get what they're doing. They don't explain a lot; it's just the update-explore-path action. But in terms of the authentication: was it missing, was there some bypass, was it just these pages? They don't explain any of that. So we don't know if there's just no authentication here at all, or if it's just these pages that didn't have it, or just happened to be these pages that could be abused.
Yeah, I did want more information there. But one of the things you can change in that configuration — and I'm just going to assume that there's no authentication on it, but that's just the assumption I'm going off of —
Exactly, and the title does say missing authentication, so that much is clear.
Yeah, there are more nuances to the thing. Yeah, exactly. So yeah, I'm just going to ignore some of those nuances while covering it. Basically, one of the things you can change in that configuration is the Excel export path. By changing the export path to the webapps iView directory, you can use the export inventory table functionality to basically write JSP in through the column list, which gets written to the filename you specify through the filename parameter on that endpoint, which they don't do any filtering on; you can just include a .jsp file extension on that filename. So yeah, they can get a JSP file written to the public directory, which essentially acts like a shell: you visit that page and execute arbitrary commands through the c parameter they set up.
Yeah, and I mean, you could do whatever you want. This is a pretty classic JSP shell. Yeah,
so I interpret this as actually two issues. They mention the missing authentication, but I think they should probably have better validation on the inventory table endpoint as well. Like, it's dumping to an Excel file, so you shouldn't be able to include the file extension arbitrarily. But I don't know, maybe there's another use case

for it. Yeah, and the same with the column names. I don't know, exporting seems to be a hard problem for a lot of people; it's definitely a bug farm in a lot of cases. I've done this sort of attack on Excel and CSV exports a ton. It's one of those things, though, where it doesn't seem like it should be that difficult to get right. Like I said: check the filename, make sure you're writing to the right place. I'd say, you know, just generate a filename for it, timestamp it, whatever the format is, do that and you're going to be safe. Giving people control over that information just opens you up to a new level of attack. Validating the columns being used seems like an obvious angle too, not letting the attacker just include whatever information they want, but depending on the level of control they'd have over the data, that may or may not actually matter.
Yeah, I mean, setting aside the security issues, even when it comes to the dumping and exporting functionality, I feel like just having the filename generated makes it easier all around, not only to handle properly and securely, but even as a user I'd like a filename to be filled out for me. Honestly, maybe that's just because I'm lazy, but I don't really feel like I should need to control the name of my file that gets exported.
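zi's point about server-generated filenames can be sketched like this. iView is a Java/JSP application, so this Python version is only a model, and the paths and function names here are invented for illustration:

```python
# Sketch of the export-path issue and the server-generated-name fix.
# Paths and names are illustrative, not Advantech iView's actual layout.
import posixpath
import time

WEBROOT = "/opt/iview/webapps/iView"   # anything here runs as JSP
EXPORT_DIR = "/opt/iview/exports"      # where exports belong

def export_unsafe(export_dir, filename):
    # Vulnerable: the caller controls the directory (via the config's
    # export path) and the extension (via the filename parameter), so
    # "export inventory to Excel" becomes "write shell.jsp to webroot".
    return posixpath.join(export_dir, filename)

def export_safe():
    # Fix: the server picks the directory, generates a timestamped name,
    # and the extension is fixed -- nothing attacker-controlled reaches
    # the filesystem path.
    return posixpath.join(EXPORT_DIR, f"inventory-{int(time.time())}.xls")

print(export_unsafe(WEBROOT, "shell.jsp"))  # lands where JSP executes
print(export_safe())
```

This only addresses the path half of the bug; the column-list contents would still need validation, as zi noted, depending on how much control the attacker has over the exported data.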
The UI might be including that, and it very well could be. It's just that this endpoint actually does take a filename, but it's not actually exposed to the
user. Yeah. Now, generally, even though these are pretty serious issues, this interface shouldn't be sitting out on the internet and is usually kept internal. But as we've seen with ICS and just things in general, sometimes things are exposed on the internet for absolutely no valid reason, and if it were, this could lead to an attack on basically any associated SmartWorx-enabled network device. So
yeah, and there's also the aspect of, like, a hard outer shell but no defenses inside, so get through the perimeter and you kind of have free rein on
everything. Yeah, you're basically putting all your eggs in one basket at that point, the basket being isolation,
which is never a good idea. Yeah, it's not an ideal situation to be in. Even your internal devices, even if they're not going to be hit as often or be as accessible, it's still ideal to have them locked down as much as you can.
Just another bog-standard day in industrial control systems. All right, so we'll move into some of our binary topics. This was a blog post from DBAPPSecurity about a zero-day they found being exploited in the wild by the Bitter APT group. It's a win32k vulnerability, which is not really that surprising; win32k is kind of notorious for being a bug farm. The exploit was designed to target Windows 10 1909 systems, but it could also affect 20H2 systems as well, which is really recent. It was fixed in the February 2021 security update from Microsoft. So this blog post goes into some of the technical details and some of the meta-analysis of the zero-day as well, stating that this was a highly sophisticated exploit: it was almost a hundred percent stable, it tried to do cleanup to prevent crashes, it wouldn't get caught by type-based isolation or the driver verifier, and it also does some checks against the target machine, like OS build version checks and checks for specific antivirus software. Basically, it probes its environment, so it's definitely a specially targeted attack, which is always interesting. But yeah, getting into the actual issue: the bug is a type confusion on the flags field in the wndextra data when creating a window, which can lead to out-of-bounds data reaching a free call. Basically, when you create a window on Windows, you can create it with extra bytes, and it will call a user-mode callback that allows you to do whatever you want with that bytes field. The return of that callback is then stored in the wndextra field on the kernel side as a pointer. Now, if inside of that callback you call the NtUserConsoleControl function with the handle of the current window, you can basically change that field to an offset, which gets the flag set that makes the field get interpreted as an offset later.
You can overwrite that offset field's value with the return of the callback after the flag gets set. So you basically end up with this desynchronization where the flag gets set and then the value gets swapped afterwards, but the flag doesn't get updated. So when the window gets destroyed, it uses that malicious offset when forming the address, and that leads to out-of-bounds data getting passed to the RtlFreeHeap function, which I believe would basically give you an arbitrary-free type primitive. I believe they used that to corrupt another window object that they sprayed, to get a more controlled out-of-bounds write by smashing the window size, which they then used to replace an spmenu with a fake one to get an arbitrary read primitive. That's kind of interesting because it's a method that wasn't previously known, at least not publicly. So there are some cool takeaways in this blog post that could be used in other exploits as well.
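A heavily simplified toy model of that desynchronization, with the field names made up rather than taken from the real win32k structures, might look like this:

```python
# Toy model of the CVE-2021-1732-style desync: the flag that decides how
# the extra-data field is interpreted gets flipped inside the user-mode
# callback, but the kernel then stores the callback's return value
# without rechecking that flag. Names are simplified assumptions, not
# real win32k layouts.

class Window:
    def __init__(self):
        self.extra = 0                # pointer OR offset, per the flag
        self.extra_is_offset = False

def create_window(win, user_callback):
    # "Kernel" side: run the user-mode callback, then blindly store its
    # return value into the extra field -- the flag is never rechecked.
    ret = user_callback(win)
    win.extra = ret

def malicious_callback(win):
    # "User" side: flip the window into offset mode mid-callback
    # (modeling the NtUserConsoleControl call on the same handle)...
    win.extra_is_offset = True
    # ...then return an attacker-chosen value, which the destroy path
    # will later treat as an offset.
    return 0xDEADBEEF

win = Window()
create_window(win, malicious_callback)
print(win.extra_is_offset, hex(win.extra))  # True 0xdeadbeef
```

The point of the sketch is just the ordering: the type of the field changes in the middle of an operation that was written assuming it couldn't.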
There are. I do like this initial attack surface, this initial vulnerability. Yeah, it's kind of interesting just because of that callback being there: you're making a further call on the same object, and that's where the desync is coming from. I find that to be an interesting attack surface. It totally makes sense why you're likely to have vulnerabilities in something like that, because it's expecting the user-mode callback to behave normally, not to be editing the object further and pretending it's a fully allocated
object. Yeah, I mean, like you said, I think this was a very cool bug. It's unfortunate that it was discovered being used in the wild, but the bug and the exploit itself are really cool, because beyond the nature of the bug, the fact that it can offer almost a hundred percent reliability is also pretty crazy when you're talking about kernel exploits on modern systems. That's not a condition that always happens or anything, so this seems like an exceptional exploit. And that's even leaving aside the fact that it also has that additional info leak, the arbitrary read primitive. So yeah, it's a cool exploit. I will say, though, like I said earlier, this bug comes out of win32k, which is a bit like the free spot on a bingo card, and because of that it has been cordoned off pretty well by Microsoft at this point. Edge and Chrome do not use win32k; I don't think Firefox does anymore either, though I might be wrong on that. So it would be interesting to hear what the userland entry point was for this chain that was found in the wild, because like I said, Microsoft has been trying to isolate things from being able to even touch win32k: if they can't remove the reliance on it entirely, they try to move things to IPC or whatever. So yeah, I don't think they mentioned in here what the userland entry was.
It could be entirely separate also. Like, once they get code execution and the ability to spawn a new userland process, they've kind of escaped the sandbox, and that's assuming there was a sandbox at all. I mean, there are definitely use cases for this that just don't involve being
sandboxed. For sure, and considering this was apparently a targeted attack, that makes a ton of sense. I'm just saying, as cool as this exploit is, and as cool as the bug is, it's not something you're going to be able to hit from any context; there are some circumstances around being able to hit this issue. I did struggle a little bit with reading this write-up. I had to go through and reread it, and I had to reference the diagram and work through it in my head. I think that was just a bit of a language barrier. As you can probably tell if you're seeing the stream, I think it's a Chinese company. I'm actually

just going off experience there. So

Chinese, yeah. Yeah, so there might be a bit of a language barrier when reading it. But yeah, this was definitely a very cool bug. And keeping on the topic of kernel and fun
issues, and not-so-enjoyable write-ups,
we have a Synacktiv post about an iOS kernel vulnerability, CVE-2021-1782. So this was a kernel bug in the user data get value function for redeeming Mach vouchers. Mach vouchers, as I understand it, are basically immutable resources which are passed through messages in IPC. The entire Mach IPC subsystem in iOS is a beast of its own; it's very complicated, and the voucher system is also complicated. I believe the term "voucher" comes from the fact that it's a limited resource that can only be used however many times it's valid for, but, I mean, Apple just loves inventing new terms and applying them to their code base. Those vouchers can have attributes such as IPC importance, priority, and banks — I'm not really sure what they mean by banks there, but they leave that as out of scope for the article — and user data, and that user data is where the issue comes into play. The user data is passed into the user data value element function, which stores some other information such as the hash, checksum, size, and reference count, and the reference count is where the issue comes in. It seems the problem arises when redeeming the voucher: access to that user data isn't locked, which leads to a race condition on that reference count. The problem with that is the element's reference count is supposed to stay in lockstep with its parent voucher's reference count; that way, if a race happens somewhere else when releasing the voucher, it will not free the resource out from underneath the other caller.
So if you can break that synchronization, you can basically get the voucher's reference count higher than the element's reference count, and the way you break it is by exploiting that race condition: you race two threads that do the exact same thing, which is storing user data and redeeming a voucher. Because the reference count update isn't locked in any way, shape, or form, it's possible for a reference to essentially get lost by one of the threads using a stale value. A thread goes to increment the reference counter, but if both threads fetch the same value before incrementing it, then for one of the threads the reference count doesn't get updated properly. So when the element reference count is less than the voucher count by one, that opens up issues in other paths, because when that same user data store function — the one replacing the voucher value — gets called, it fetches the value from the resource manager and takes a reference on that value. Under normal circumstances, this means the element reference count would actually be one higher than the voucher reference count — I mean, they wouldn't be in sync — which is totally assumed and expected by the code, because if another thread goes and tries to release that voucher out from underneath the thread replacing the voucher value, it would only free the value if the voucher and the element reference counts match; there's that if statement in the release function. So in this routine, when they're desynchronized, a use-after-free shouldn't be possible. But because of that earlier race condition corrupting the reference count, you can basically make it so that instead of getting desynced, it actually synchronizes, and that brings the UAF possibility back into the fold. It now means another thread can destroy the port and get the value freed when it shouldn't be able to, which leads to that use-after-free. So it's a very complex issue.
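The lost-update race at the core of the bug is the classic unlocked read-modify-write. The real code is C inside XNU; this Python sketch deliberately splits the fetch and the store into two steps to expose the race window and show how an unlocked increment can drop a reference:

```python
# Toy model of the unlocked refcount increment (the real bug is in XNU's
# voucher code; this just demonstrates the read-modify-write hazard).
import threading

class RefCount:
    def __init__(self):
        self.value = 0
        self.lock = threading.Lock()

    def increment_racy(self):
        v = self.value          # fetch...
        self.value = v + 1      # ...store: another thread can run between

    def increment_locked(self):
        with self.lock:         # the fix: fetch+store become one step
            self.value += 1

def hammer(method, n=100_000):
    for _ in range(n):
        method()

rc = RefCount()
threads = [threading.Thread(target=hammer, args=(rc.increment_racy,))
           for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# If any interleaving hit the fetch/store gap, increments were lost and
# the count ends below 200000 -- exactly a "lost reference".
print(rc.value <= 200_000)  # True; often strictly less on a real run
```

In the kernel the window is a single unlocked increment rather than two Python statements, which is why, as discussed below, it's surprising the race is hittable so reliably in practice.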
I might have made a mistake or butchered something when I was explaining it. When you're talking about iOS and the Mach subsystem, it gets crazy with all the background information about that subsystem. If I did make a mistake, feel free to point it out in the comments and call me an idiot or whatever. But
to be honest, this whole document was really dense to follow — hard to work out what the issue was. I did not have fun trying to do anything
yet. You were trying to read it, for sure. But as I understand it, it's basically desynchronizing the state to do the opposite of what's expected: it causes the counts to synchronize in a state where they shouldn't. Basically, they managed to use this to trigger an info leak on iOS 13 by spraying OSData over the user data element and getting it read out when extracting the recipe from the voucher. This strategy did not work on iOS 14 because of allocator changes — they have them in different heap arenas now or something on iOS 14. They did find a different object they could spray, being this other type of port, but there's a misalignment between the element size and a pointer on the port side, which leads to a panic. So it's mainly just a POC that the bug still exists; it's not nearly as useful of an issue with that strategy.
Yeah, so do you think that's still a bug that could be exploitable? You'd just have to figure out more about what's available for that other use.
You would have to do a lot of research.
Yeah, it'll take some research, but it does feel like something that would still be exploitable. You've just got to figure out how. Yeah,
so I won't lie, I'm a little bit confused on the practical aspect of this, even if you did manage to find another object — and it is absolutely practical, because this exploit apparently was found in the wild, and it was chained with a WebKit bug, which is also worth calling out because it means this is hittable from inside the sandbox. But the race for that increment to get skipped seems insanely tight to me, because it's one line of code. It's not like the count is getting cached on the stack or something; it's getting directly operated on in one line of code. My guess is the only reason this is actually hittable in the real world is because it's ARM. On x86, if you have an increment, I think it's going to be one instruction; on ARM it will probably be broken down into multiple instructions — a load, a modify, and a store. And perhaps the other thing that helps is that the iPhone chips are going to be slower than what you'd see on a desktop PC, because I've tried to exploit races of this type on Windows on modern x86 chips and it would not happen. So this is an insanely tight race, but it does seem, from talking around and looking at stuff, that this is actually a very reliable bug to hit. I just don't fully understand why, in all honesty; it's kind of magical in that regard. But yeah, it's another bug, just like the Windows one, that's very cool and complex in nature. I just wish, like you were saying, that the write-up was a little bit better. I found myself having to reread certain parts and note things in a notepad to try to keep track of the state in my head. I feel like a diagram would have done wonders in this write-up, just to show the control flow of what's happening. There are no diagrams like that; it's pure text walls and code snippets and types being used.
If you're not familiar with Apple terminology and the stuff with their IPC subsystem, you're going to find this very tough to read, which is unfortunate because the issue is extremely cool. So I would say it is worth trying to read through it to look more into the issue, because it's a really awesome issue. But yeah, a diagram would have done so much here, I
think. Shoutout to scogs from chat — I think I missed that the first time I tried to read it — but they mention that the reason it didn't work on iOS 14 was due to an ASLR change. Do you know anything about that? Because I know you mentioned you believed it was due to allocator changes.
Yeah, I thought they would be allocating it in a different heap arena or something, so you couldn't get that overlap anymore for the UAF. Yeah,
so you have different checks in there. I didn't look into why it didn't work, so I'm not sure about that,
but yeah, I hadn't heard of the iOS 14 ASLR stuff. So to clarify for anybody listening: I'm not an iOS researcher. I have never exploited an 0-day or anything on iOS; it's not really my area. I do know a little bit about it because I find it interesting — I read write-ups and I talk to people who do iOS research — but it's definitely not my field.
So they do mention in chat there that it's a re-randomization of the shared cache region. So that would make sense as to why it's not working on iOS 14 then, and that's something I never could have figured out from my allocator understanding — not exactly, but
yeah, I can kind of see how that would loosely fit in, you know. All right, thanks for pointing that out, scogs from chat — that's a good point to shout out. But yeah, overall a cool issue; it could use a better write-up, though.
Yeah, this is like the last one too — I think the core bug is just kind of a fun little bug and an interesting thing. Getting that desync requires doing basically two races for the exploit, which kind of makes it a neat exploit too, I guess. But yeah, I just had such a hard time trying to read the actual
write-up. Yeah, something fun about that bug that I pointed out yesterday, because I got into a bit of a discussion about fuzzing — and we've had that discussion ourselves in discussion videos before, actually — but this is the type of bug, this multi-stage issue, that would never be discovered by fuzzing. I'd pretty much guarantee you this was found with manual review. I don't see a way in hell this could have been discovered by a fuzzer. Maybe a very, very small chance — probably a higher chance of hitting the lottery, honestly — but that's because it's so deeply connected to the state and the order of operations and stuff. That's why I love these types of bugs: there's no way fuzzers are catching them; they require that manual research and understanding. And yeah, those types of bugs are more fun to me. So we can move into our research segment of the show. First up, we have "Misusing Service Workers for Privacy Leakage." This one focuses on describing a history-sniffing attack, which abuses service workers in popular browsers like Chrome and Firefox. Service workers, for those not familiar with the terminology, are basically like proxies that run separately from the main browser page: they intercept network requests, cache stuff from those requests, and deliver push messages. Mainly where I've seen them used is for those push notifications.
Yeah, that's at least one way they can be used. They can do a lot of things — they can pull down more information; they don't necessarily need to be intercepting requests — but intercepting is kind of one of those key features. But yeah, this is all going to be at NDSS, which is next week, I believe — yeah, I believe it starts on the 21st. So this is still a pre-publication of the paper. It's kind of an interesting attack: it's basically a side channel based on the service workers. If a service worker handles a request, then it's going to respond a little bit quicker, because oftentimes these are pulling from a cache when they actually do intercept something for you. So what they end up finding was that when they would iframe one of these pages, they were able to detect whether or not a service worker was responding to any of the page's requests. And they cover two different ways they actually determine whether the service worker was handling it: one of them being the Performance API, and the other being more of a classic timing-based attack. For the Performance API, it just seems to be one of the values in there — I believe it was... do you recall what the value was they were looking for? Sorry — the workerStart value, which will be nonzero if the worker ended up replying. So it basically just directly leaks whether or not the worker replied to the request. And the timing-based one is a little bit more of an obvious timing attack: if a service worker is replying and has that page already cached, the reply is going to load a lot faster. So a fairly obvious timing attack there. But that's effectively the gist of the attack: using that, you're able to determine whether or not a user has already visited a page, and thus has it in the cache ready to respond.
They go into a little bit more about getting the timing right — trying to eliminate the startup time for the service worker, things like that — to basically get more accurate timing information. But the gist of the attack is just leaking that information from the fact that service workers do respond to pages, and they're able to detect that somehow, which I thought was definitely an interesting way to get it. I feel like we talked last week about another privacy issue. Out of chat, they're asking: does this break private browsing mode? I believe this is generally just about how service workers work in general; I don't believe this is even browser-specific. I believe this basically breaks how service workers are designed to work, so you can do this — yeah, break them anywhere they are. Now, if you're talking about private browsing mode, like Incognito: I don't think the service workers will be installed in your Incognito browser unless you visited the site while you were in that Incognito session. That might be something to look into, but I don't believe the service worker will carry over that way. But if the service worker was there, you would be able to get this information. I'll mention it's leaking history: you can just know somebody visited a page, as long as that page would have had a service worker responding, which is a bit of an ask — some pages won't have a service worker there, so you're not going to be able to leak whether a service worker responded, because there is none in the first place. But they did find that a very significant number of major websites do use a service worker, so you are able to use this — but it is worth noting that that is a limitation.
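As a rough sketch of the two detection methods — hedged, since the paper's actual probe code is more involved and cross-origin resource timing entries are subject to Timing-Allow-Origin restrictions — the core checks boil down to something like this (the 50 ms threshold is an illustrative guess, not a value from the paper):

```javascript
// Method 1: the Performance API. A PerformanceResourceTiming entry's
// workerStart field is nonzero when a service worker handled the fetch.
function handledByServiceWorker(entry) {
  return entry.workerStart > 0;
}

// Method 2: classic timing. A service worker answering from its cache
// responds much faster than a full network round trip, so a simple
// threshold classifier can separate the two cases.
function looksCached(durationMs, thresholdMs = 50) {
  return durationMs < thresholdMs;
}

// In a browser, an attacker page iframing the target would read entries
// via performance.getEntriesByType("resource") and feed them in here.
console.log(handledByServiceWorker({ workerStart: 12.4 })); // true
console.log(looksCached(8));   // true  (fast: likely served by the worker)
console.log(looksCached(230)); // false (slow: full network round trip)
```

Either check answers the same question — "did a service worker respond?" — which is the bit of history the attack is trying to leak.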
Yeah, so it's basically abusing something from the specification. And jumping back to what you were saying about covering a similar vein of issue recently — I don't think it was last episode; it was episode 62, I think. Were you thinking of the XS-Leaks in redirect flows, maybe?
You know, I don't know exactly what I was thinking of. I just have a vague memory of covering something in the history-sniffing vein. But yeah, now that you mention it, it was probably that XS-Leak one.
Yeah, so that was episode 62. But yeah, it's a cool paper. It's one of those cases of abusing benign functionality in an unintended way — or seemingly benign, anyway. It's not like one of those technical bugs where there's an integer overflow or something; it's just not considering the way that functionality can be used that led to both those issues. With that being said, was this mitigated at all? I don't remember if they said that.
I didn't see anything mentioning a mitigation, and it would be hard to mitigate — it's going to require changing the actual spec.
Yeah, so in their conclusion they do state that in an effort to protect users, they disclosed their findings to the affected vendors, and there are remediation efforts currently taking place, including plans for exploring a redesign of Chromium's site isolation mechanism. So like you were saying, it seems like it would be something that requires a massive overhaul to fix. They said they do have a countermeasure, though — an access-control-based measure to mitigate the impactful attacks while the browser remediation efforts are underway. So yeah, there's a stopgap put in place, but it seems like something that's going to take some work to fully resolve. But yeah, I mean, it's a cool attack, and it does have some practical implications too. Maybe it's not an easy attack to pull off because of that reliance on requiring the site to use service workers, but it's definitely something to
consider. No, that really just limits the number of websites that are going to be vulnerable to this. But that's still a large number of websites — I mean, service workers aren't exactly an obscure feature on any major website, so you can generally expect them to exist, for sure. With
that, we will move into our last topic of the day before shoutouts. That's "Security things in Linux v5.8" from Kees Cook. We've covered a few of these in the past; he usually does them with the major — or even, I guess technically, the minor — Linux releases. Two notable mitigations were introduced in this one, but there's a slew of new stuff: ARM64 Branch Target Identification (BTI) support has been added, and ARM64 Shadow Call Stack as well. There were also some new capabilities added, like CAP_BPF being separated out from CAP_SYS_ADMIN, and some network RNG improvements. But like I said, the big two are the ARM64 mitigations: BTI and Shadow Call Stack. So
those are definitely kind of big news here in terms of exploitability.
Yeah, so Branch Target Identification is enabled now in both userland and kernel. What it is, is basically a low-granularity forward-edge CFI. Its aim is to kill jump-oriented programming (JOP) by making it so indirect branches can only land on valid function entry points — you can't just jump directly to arbitrary instructions anymore; it checks that the targets are actually valid. I will say that will probably be stronger in userland than anything, because when you get to kernel, there are so many options that you have, and I'm not sure this really does too much there: not only are there single functions you can call as "win" functions that are powerful, but data-only attacks are also a lot more common when you get to kernel land. So BTI, I think, won't be as much of an issue in kernel.
True, but this is a first step towards more fine-grained CFI. And it is worth mentioning, seeing as this is CFI — Specter mentioned it's forward-edge — what that means: you have forward-edge and backward-edge CFI, and Shadow Call Stack, the other one, deals with the backward edge. Forward edge is literally your jump and branch instructions moving forward, and backward edge is returning. I just wanted to clarify that if you're not familiar with those terms; I figured we should call that out. So what this is attempting to do is make sure any of those indirect jumps in system code can only go to places where they're supposed to be allowed to jump. Obviously, when you're doing something like a JOP chain, you're generally jumping to some tiny gadget that just does a couple of things — somewhere the code wouldn't normally be jumping to. By adding this instruction at valid targets, the hardware is able to take a look at where you're jumping: does the target have this instruction before it? Yes? Okay, the jump goes through. If it doesn't, it rejects the jump — and I'm actually not sure exactly what ARM will do, whether it'll cause an exception or what. Yeah, well, the kernel might panic in response to that, but how exactly the hardware raises it, I'm not actually sure for ARM; I haven't played around with ARM too much. Either way, it'll stop the execution. This is a hardware-level implementation, so it's going to be fairly strong. However, the weak point is the fact that in this setup, any possible target is allowed for any indirect branch. It's not doing any sort of fine-grained check like "this jump is allowed to jump to these locations." If some jump is allowed to jump there, then all jumps are allowed to go there.
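As a loose software analogy for what the hardware check does — BTI itself is an ARM instruction placed at valid branch targets, and this sketch just models the coarse-grained policy, nothing more:

```javascript
// Functions built with BTI get a "landing pad" at their entry; an
// indirect branch may only land on such a pad. Model the pad as a tag.
function makeLandingPad(fn) {
  fn.btiPad = true; // analogous to a BTI landing-pad instruction at entry
  return fn;
}

// An indirect call checks only "is the target a valid pad?" — it does
// NOT check "is THIS caller allowed to reach THAT target?", which is
// exactly the coarse-granularity weakness discussed above.
function indirectCall(target, ...args) {
  if (!target.btiPad) throw new Error("Branch Target exception");
  return target(...args);
}

const legit = makeLandingPad((x) => x + 1);
const gadget = (x) => x * 1337; // mid-function gadget: no landing pad

console.log(indirectCall(legit, 41)); // 42: any padded target is allowed
try {
  indirectCall(gadget, 41); // rejected: target has no landing pad
} catch (e) {
  console.log(e.message);
}
```

Note that `legit` being callable from *any* call site is the point: the policy kills raw mid-function gadgets but still allows jumping to any tagged function entry.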
I mean, that is generally the weakness with a lot of CFIs: the higher the granularity you have, usually the more overhead you also have, which is why you don't see it too much.
Yeah — I wish I remembered which episode, but we covered a paper that went into the nuances of how much overhead you start incurring when you add that higher-granularity CFI.
Yeah, well, I think we've covered more than one research paper on CFI.
Yeah, but I mean specifically in kernel — I think there was one that investigated it on a custom Linux kernel fork or something, though that might have been a software implementation, so it was probably more severe of an impact. But yeah, it's definitely a trade-off discussion that's been had before. Yeah.
So yeah, and then I guess you can go on with Shadow Call Stack.
So yeah, Shadow Call Stack is the second one. This aims to protect the kernel against ROP through backward-edge CFI — so BTI hits JOP; Shadow Call Stack hits ROP. We've talked a little bit about shadow call stacks before. Basically, there's a second stack, tracked through a reserved register, which holds the return addresses, and that stack is what actually gets used for the return jumps. So even if you corrupt the normal stack through a stack overflow or something, the return pointer isn't actually going to get used from there; it's going to get used from the protected shadow stack, as it were. What's worth noting here, though, is that Shadow Call Stack is a software implementation, not a hardware one, unlike BTI. So if the register value or the location of that stack is compromised, this mitigation can be broken, because that memory is writable. Whereas if this were enforced by the hardware, that memory couldn't just be written to by anybody — it could only be written to by, probably, the processor itself, actually.
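A toy model of the idea — purely illustrative; the real Clang implementation stores return addresses via a reserved register in function prologues and epilogues, not anything like this:

```javascript
// Toy model: returns are taken from a separate shadow stack, so
// corrupting the "normal" stack frame doesn't redirect control flow.
const shadowStack = [];

function call(frame, returnAddr) {
  shadowStack.push(returnAddr);   // prologue: save the return address aside
  frame.savedReturn = returnAddr; // copy on the normal, overwritable stack
}

function ret(frame) {
  // Epilogue: ignore the (possibly corrupted) frame copy and return via
  // the shadow stack instead.
  return shadowStack.pop();
}

const frame = {};
call(frame, 0x4000);            // "call" a function that returns to 0x4000
frame.savedReturn = 0x41414141; // attacker overflows the normal stack
console.log(ret(frame) === 0x4000); // true: control flow survives
```

The weakness discussed above also shows in the model: `shadowStack` is ordinary writable memory, so an attacker who can locate and write to it directly breaks the scheme.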
Yeah, my understanding is the compiler will instrument your actual call instructions to do that, so nobody else actually needs to be able to write to it. Out of chat, yesha asks how much longer till it's done — I assume you're asking about this. This is something that is in Linux 5.8. I don't believe 5.8 is slated for long-term support; I think 5.4 is kind of the current long-term support right now. Honestly, 5.10 is the next long-term support one, I believe. At least, I know on the Android side there's an Android 12 branch based on 5.10 now, and they skipped everything between 5.4 and 5.10. So I'm assuming 5.10 is the next one that will have long-term support, so we can basically expect to see this in the coming couple of years, I guess, since we are talking about ARM64 specifically — you know, whenever they actually make the shift onto 5.10. Those kernels haven't even hit devices yet, so we're at least a year out from seeing this on Android.
Yeah, I mean, a lot of Android devices are still running 4.14.
So yeah, because 5.4 was just more recent — it's just Android 11 and 12, I think, that are hitting 5.4.
And even then, I don't think a lot of devices are hitting it until a month or two down the line. I think a lot of devices are still on the earlier kernel versions.
Yeah, so you've got time before we see this on Android, but I mean, it's coming up. I've been surprised by the recent uptake of the shadow call stack, just because that idea has been around for a very long time — you know, the shadow stack isn't a new idea — so it's just interesting to see it being adopted quite a bit in the last
couple of years. Yeah, I don't know — maybe it's just because hardware performance has gotten to the point where those sacrifices can be made, or the sacrifice has been minimized to the point where they don't care anymore. That's my best guess.
Clang — is this the Clang implementation of the shadow call stack they're using? That could be another thing, like you're saying: the tooling has matured, and it's actually become an option to do some of these things.
Yeah, Clang's really been on the uptake for trying to innovate and add those security features, which is why Android has switched all of their build systems to Clang from GCC — GCC is basically obsolete now in terms of Android. I think Google's kind of leading that charge, and I think there are more projects switching over to LLVM as well. But yeah, the shadow call stack stuff is basically intended to be a stopgap mitigation until pointer authentication support is done, which is kind of interesting. Pointer authentication isn't really talked about in this post, but Apple has had PAC on iOS for, I think, two years now. On Android it's still not really seen yet, and both of them are running on the same instruction set — they're both running on AArch64, and both have hardware support for pointer authentication — but with the way Linux and Android were built, PAC is really hard to get off the ground. In fact, when they tried to implement it in userland — I don't know if they've resolved the issue now — you couldn't even fork to create a new process without panicking, because PAC would trigger. So they are trying to work on getting PAC into the Android kernel, but it seems like it's something that's taking a lot of effort to do, and it might not be something we'll see for another year or so yet. So this seems to be one of those stopgap mitigations — here, you can have Shadow Call Stack until we have a more robust system when PAC lands. It's going to be a little bit scary, because PAC is going to kill more than just stack overflows or ROP attacks; it's going to kill a few bug classes, I think, or at least make them a lot more difficult to hit.
Yeah, make them more difficult, for sure. I mean, there are ways to deal with PAC; they usually come down to having other bugs and other primitives, like getting a signing primitive — you need a signing oracle or something like that. So there are ways around dealing with PAC. I think what's more likely to happen is it's just going to require more: right now you might need a bug for ASLR and a bug for your actual PC control, and then you'd also need a bug for your PAC bypass. That might be the case, which definitely, you know, raises the bar for getting an exploit. And PAC will definitely kill some, but no, I'm a bit more optimistic about what people are able to come up with to deal with these. I mean, ASLR was definitely supposed to be a killer too, which obviously hasn't done that job. We might just be kind of entering a bit of a
hump. Yeah, so these are the kinds of things that we will be discussing in the discussion video I mentioned at the top of the show — the future of exploitability due to mitigations like Shadow Call Stack and PAC and BTI and stuff like that. We'll get into a more nuanced discussion in that video, but here, I mean, Linux security has just been taking a massive leap the last few years, because before then I don't really think anybody cared too much, honestly, about Linux when it came to security. But with Google applying the pressure with their Android vested interests, and the fact that ARM has been so good with their hardware mitigation uptake as well — I mean, ARM has been leaps and bounds ahead of x86 the last little while when it comes to trying to fix security issues. x86 is actually going in the opposite direction; we're seeing more security issues coming out of it the last
couple of years. So Intel has fucked up pretty hard with the speculative execution
stuff. Yeah, exactly. But yeah, just keep a lookout on Linux if you're interested in security when it comes to Linux, because those things will be coming. And it will be interesting with BTI hitting userland — they said it will hit userland and kernel — so that will be relevant for userland exploits too; it's just another thing you'll have to deal with even if you're not hitting kernel. So yeah, with that said, we'll move into our shoutouts. I don't have any shoutouts, so I'll let you take this one away, zi, and then we'll end off the show.
Yes, I just had a quick shoutout. I don't remember if I've talked about this course on the podcast before: there's a Linux heap exploitation course from Max Kamper. I'm pretty sure I've talked about Max Kamper's 44CON presentation; I don't remember if I've talked about this course. There's a part 1 and a part 2 of this also, I guess I can bring that up. Honestly, he's just a really good instructor. It's about heap allocator exploitation specifically — you know, ptmalloc — so all your house-of attacks. So last week, part 2 finally dropped, which includes things like the House of Spirit, the poisoned NUL byte, the House of Einherjar, the House of Rabbit, the House of Lore — basically some more advanced exploitation. Like I said, he's a good instructor. I haven't been through the entire course, especially not this new one yet, but I would be very surprised if it wasn't an awesome course. So I wanted to shout it out, because I believe it was just last week that this came out. So definitely check it out, especially since you're able to get discounts on it — you know, it's Udemy, so lots of discounts.
Classic — the classic Udemy discounts, 71 percent off.
There are frequent discounts there, so I'd recommend taking a look at it, even if you're not necessarily going to be running into ptmalloc all that often. What I like about learning heap exploitation is that it really showcases the creativity that's involved with pretty much any sort of exploit development. In this case, you know, you get these overwrites, you get control of some metadata — now what can you do with it? And all of these different techniques take advantage of different aspects. It's not just "you overwrite metadata and you get this" — you have control over allocation sizes and such, and what can you do with that? For me, it was really reading the Malloc Maleficarum years ago that opened my eyes to a lot of this creativity, and just seeing the new attacks is always really interesting to me. Even if I don't run into those specific scenarios a lot, I think it's still really beneficial. Yeah, I just wanted to call out this course and hopefully have some other people check it out. Cool,