Hello, hello, and I believe the talk is beginning very soon. My name is Qidan He, you may know me as Flanker on Twitter, and here is my colleague, Marco Grassi. We are delivering a talk on sandboxes: how to escape the Chrome sandbox and the Safari sandbox, and how to get a good overview of these sandboxes. So let's get started with the introduction.

Hi everyone, I'm Marco. I'm currently a senior security researcher at Keen Lab of Tencent, and my main focus is vulnerability research, especially on OS X, iOS, and Android.

I have the same title, so I will not repeat it. My main focus is bug hunting and exploitation on Unix-like platforms, Linux and the Apple stuff, and to be honest, I know nothing about Windows.

About our team: we were previously known as Keen Team, a frequent Pwn2Own winner with, I believe, eight championship titles in four or five years. Due to some business matters we moved into Tencent, and we are now a research lab of Tencent, the company behind QQ, WeChat, and that sort of thing. In March of this year, together with Tencent PC Manager, we won the Master of Pwn title, and I believe this is the first time that we sent the Koreans home, so to speak, and defeated them.

So here is the agenda. As I said, we will give an introduction to sandboxes, both the Safari sandbox on the Apple platform and the Google Chrome sandbox on the Android platform. We will compare them, talk about auditing the sandboxes, how to do the auditing and where you should look, and give some case studies of historical vulnerabilities as well as vulnerabilities we used in this year's Pwn2Own. At the end of this talk we will show two demos of escaping the Safari sandbox. We should probably present a Chrome demo as well, but we hope to reserve that for this year's Mobile Pwn2Own, so forgive me. Finally, we will get to the summary and conclusions.

We will start with a quick introduction to what sandboxes are. A sandbox is a very important concept for security in a modern operating system. Basically, a sandbox is a mechanism, a way to run code that you don't trust too much in a constrained environment. This way, if something goes wrong inside that code and an attacker gets, for example, code execution inside it, the system is not totally compromised: the adversary is still restricted inside the sandbox. So a sandbox specifies which resources this particular program, this piece of code, has access to.

I think it became a crucial component for security in the last few years, when people started noticing that it is impossible to get rid of all the bugs, especially in very complex code like WebKit, web page renderers, Chrome, and document parsers. It became very clear that a defense-in-depth approach must be taken. People joke that browsers are a collection of use-after-free vulnerabilities that somehow manage to render HTML. So in modern software there are two strategies: the first one, obviously, is to fix bugs, but the second one is to constrain this untrusted code inside a sandbox.
So let's take a look at a couple of sandbox implementations, and Flanker will give you an introduction to the implementation of the sandbox on Android.

Historically, Android relied on DAC, discretionary access control, from the very beginning; I believe that dates back to 2009 or 2010, the early days of Android, and it is simply enforced by the kernel. From the initial versions of Android, each application runs under a unique UID, and the kernel enforces file access across different UIDs. For example, an application with UID A normally cannot access files owned by UID B unless application B marks its files as world readable. Any Linux beginner has a good understanding of DAC: you cannot access another user's files, and in Android, each application is a user. Of course there is an exception, the shared UID feature, which is a special case to keep in mind, but in general each application has a different UID, and this enforces file and resource access. Android also implements some access control on top of group IDs: you have to be granted a specific GID to access the network, the camera, some device drivers, and so on.

In conclusion, the DAC model is very easy to understand, but it has proven too inflexible to deal with modern attacks. Attackers are getting cleverer and cleverer and are finding more and more bugs, so mandatory access control, MAC, was introduced. On Android, MAC means SELinux, which was originally developed by the NSA (and I believe Snowden knows that); it came out of the NSA labs and was later adapted to Android as SE Android. After Android 4.3 it was introduced into mainstream Android. SELinux gives you a chance to define your own resources and your own policies over those resources, and the kernel checks each access against the policy and decides whether a given subject should be allowed to touch a given resource or not. Both layers apply: if SELinux rejects you, passing the DAC check does not help you at all. It also provides a more modern and more elegant way to define policies, so you get fine-grained control over what each domain can and cannot do. But it is becoming more and more complex, as Google keeps increasing the policy code base, and it is somewhat difficult to understand; that is part of why we are delivering this talk. On OS X, the sandbox has also taken a MAC-style approach from the very beginning on iOS and since earlier versions of OS X, around 10.7. So let's turn to Marco.

Now that Flanker gave an overview of the Android sandbox, we will check out the OS X sandbox for the browser, just to give you another example so we can compare the two approaches. This is the structure of the Safari sandbox. Basically, the Safari browser is split into multiple processes, in order to separate responsibilities and components and to segregate untrusted code into less privileged processes. For our purposes we can just think of two main processes; in reality there are more, but simplifying is fine here.
One is the UI process, which is the one the user interacts with: where you click buttons, go back, input URLs. The second one stays in the background: the web process. It is the process responsible for rendering your web page and handling untrusted input such as JavaScript, HTML, and images. So as you can understand, the web process is the one handling the biggest part of the untrusted content, and the UI process is just in charge of managing the other processes of the browser.

Like I said, the web content process's responsibility is to handle untrusted input and render it into a web page, essentially. So usually, the initial browser compromise, when you get your first code execution inside the browser, happens inside this web content process, because maybe you have a JavaScript bug or some rendering bug, and you usually get it here. But this process has a very strict sandbox, so what you need to do afterwards to compromise the system is to escape the sandbox. One attack surface for this sandbox is the broker interface, which the web process uses to communicate with the UI process: for example, maybe the web process must open a file, but it cannot do it itself, so it must ask the broker, the UI process, to do it, because the web process is too tightly sandboxed.

The web content sandbox is implemented like any other OS X sandbox profile, and it leverages a kernel driver, Sandbox.kext. There are profile files: just like on Android you have an SELinux policy, on OS X you have a sandbox definition file, and for the browser renderer you can find it at the path shown here. How does it work? Basically, this file defines your capabilities, and whenever your code executes a system call or something like that, the kernel has some callbacks, some hooks, inside that system call; it asks the sandbox driver to evaluate your sandbox profile, and the driver answers either yes, you are allowed, or no, you are not.

So let's take a look at what a sandbox profile looks like. Here we have a first, very simple snippet. It's written in a custom language, very simple to understand, very human readable, and as you can see there is a deny-default rule: if you are under this sandbox profile, everything is denied for you, and you only get the whitelist that comes later in the file. As you can see, another sandbox profile is then imported, system.sb, so you need to follow that include as well when auditing the profile.

Recently on OS X there is also another addition worth mentioning: System Integrity Protection. Basically, it is a system-wide protection. On OS X, if you don't opt in to the sandbox, you are not sandboxed; but now, with System Integrity Protection in recent versions of OS X, every process, even one running as root, runs inside a global system sandbox. It's a new security mitigation: even if you have root code execution in user space, there are still some operations you cannot do. As you can see here, I'm running a touch command as root with sudo, but I cannot write into the system partition, because System Integrity Protection is preventing it.
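To make that behavior concrete, here is a minimal C sketch of the same experiment: it tries to create a file under /System and reports the failure. The path is just an example, and the exact errno may vary; on a stock 10.11+ system this fails even when the program runs with euid 0.

    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void) {
        /* Example path: anything under /System is protected by SIP. */
        const char *path = "/System/sip_probe";
        int fd = open(path, O_WRONLY | O_CREAT, 0644);
        if (fd < 0) {
            /* Even with euid 0 this fails while SIP is enabled. */
            printf("euid=%d, open(%s) failed: %s\n",
                   (int)geteuid(), path, strerror(errno));
            return 1;
        }
        /* If we get here, SIP is most likely disabled on this machine. */
        close(fd);
        unlink(path);
        puts("write into /System succeeded");
        return 0;
    }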
So how can you bypass this? Well, the best option for an attacker is to find a kernel bug, because System Integrity Protection is enforced inside the kernel. If you have kernel code execution, you can also bypass System Integrity Protection, as we will see later in our demo. So as you can see, with sandboxes and SIP in place, even the root user is restricted, and you cannot just say "sudo make me a sandwich"; no, it won't make you a sandwich.

Now let's move to the Android and Google side, and there is a big difference between Android and iOS. On iOS, almost every application is sandboxed by default, but on Android you usually need to opt in, using the isolated process feature, to sandbox your own process. Chrome leverages this isolated process feature to implement its own sandbox, and we can see that in the AndroidManifest.xml file: android:isolatedProcess="true" on the service org.chromium.content.app.SandboxedProcessService, which runs the V8 JavaScript engine. It is isolated, and there are very few, essentially no, privileges for this process. We believe the isolated process feature was introduced around Android 4.3, and the official documentation says that if it is set to true, the service will run in a special process that is isolated from the rest of the system and has no permissions of its own.

Looking at a ps output, we can see that the Chrome app is split into three processes. Two look like normal app processes, but there is one called sandboxed_process, and it has a UID of u0_i0, which means isolated process zero, and an SELinux context labeled isolated_app. So even if you have a bug in the V8 JavaScript engine, you only get code execution in this isolated process, and you need to find some way to escape the sandbox; otherwise you cannot read the victim's SMS or anything else on the phone, because this renderer process has no privileges. You can use a kernel exploit, like pingpongroot, to break out of the sandbox; you can also exploit the broker, and there are some other attack surfaces on Android, like binder, which we will introduce later.

First, if you want to find a way to break out of the sandbox, you need to check the policy. It is named isolated_app.te, under the sepolicy directory in the Android Open Source Project source tree, and to be honest, I think it's not as readable as the Apple one. You can see that first there is the domain type declaration, the domain is set to isolated_app for our isolated process, and then there are some allow rules. By default everything is denied, but you get some allow rules, and there is also a parent policy to inherit from. Among the allow rules, you can see that only two system services are reachable from the isolated process: the activity service and the display service. There are also some socket-related rules, but you cannot really use them, because after pingpongroot Google introduced a new policy to deny those socket operations from an isolated process.
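One quick way to confirm which sandbox you actually landed in, after getting code execution in the renderer, is to read the process's own SELinux label from procfs. A minimal sketch (the exact label string depends on the Android build):

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        char label[256] = {0};
        /* procfs exposes the SELinux context of the calling process. */
        int fd = open("/proc/self/attr/current", O_RDONLY);
        if (fd < 0) {
            perror("open /proc/self/attr/current");
            return 1;
        }
        ssize_t n = read(fd, label, sizeof(label) - 1);
        close(fd);
        if (n > 0)
            /* Inside Chrome's sandboxed renderer this prints something
             * like u:r:isolated_app:s0. */
            printf("SELinux context: %s\n", label);
        return 0;
    }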
There are also some neverallow statements. A neverallow is not actually an enforcement statement; it's more like a compile-time directive, which means that if you accidentally allow these things in your policy file, the policy compiler will generate an error and refuse to build it. So you can see they added some very restrictive rules to avoid repeating mistakes they made before, and we will see that later. We all know the graphics drivers have many bugs, and here they explicitly specify that you cannot access the graphics drivers from this isolated process.

But you may think: OK, there are two services, the activity service and the display service, that you may access, but can you access all the interfaces of those two services? The answer is no, because there is an additional check in these interfaces. They call something like enforceNotIsolatedCaller, which uses a binder feature to identify the caller who initiated the binder transaction; it retrieves the UID that made the call and checks whether it is isolated, and if you are labeled as isolated, sorry, you cannot perform the operation. There is also a parent policy, the app domain, but it's not that interesting and we may look at it in the following slides.

So now you have a brief overview of how the sandbox is implemented and where you should look in the policy files; let's see how to audit a sandbox profile to find the possible attack surfaces. How do we audit a sandbox profile? Well, the first thing to do, obviously, is to read the definition of the sandbox profile in order to find the best attack surfaces. Let's try to do this on the web process of the Safari browser. As I showed you before, there is a deny-default clause, but shortly after, there is an import of system.sb, so we need to check that as well. Inside system.sb there is a nice surprise: something called system-graphics is defined, so we need to check that too. system-graphics is defined here, and it allows you to open several IOKit user clients, all related to graphics. So basically, on OS X, the renderer process of the browser has fairly unrestrained access to the graphics kernel driver interfaces, which is very good for us: compared to Android, where you cannot access them, here on OS X we can talk directly to the graphics drivers, which is a really good attack surface, and hopefully we can find some bugs.

So let's pick this attack surface: graphics in the kernel. In our sandbox profile, this line is very interesting: allow iokit-open of IOAccelerator. IOAccelerator is the graphics driver interface into the kernel, and with this rule in our sandbox profile we can open more than ten IOKit user clients and speak to the kernel, hoping to trigger some bug. People like fancy graphics in their browsers, so Apple had to make the decision to allow this; for performance, too.
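To give a feel for how little ceremony this takes from native code, here is a minimal C sketch of opening a graphics user client through the public IOKit APIs. The matching class name and the user-client type are placeholders; a real exploit matches the concrete accelerator class for the GPU in the machine.

    #include <IOKit/IOKitLib.h>
    #include <stdio.h>

    int main(void) {
        /* "IOAccelerator" is illustrative; the concrete class differs
         * per GPU family (e.g. an AppleIntel* accelerator subclass). */
        io_service_t service = IOServiceGetMatchingService(
            kIOMasterPortDefault, IOServiceMatching("IOAccelerator"));
        if (service == IO_OBJECT_NULL) {
            fprintf(stderr, "no matching graphics service\n");
            return 1;
        }

        io_connect_t conn = IO_OBJECT_NULL;
        /* The third argument selects which user client type to
         * instantiate; graphics drivers typically expose several. */
        kern_return_t kr = IOServiceOpen(service, mach_task_self(), 0, &conn);
        printf("IOServiceOpen returned 0x%x\n", kr);

        /* External methods are then reached via IOConnectCallMethod(). */
        if (kr == KERN_SUCCESS)
            IOServiceClose(conn);
        IOObjectRelease(service);
        return 0;
    }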
So let's see an example of a vulnerability. We found this one last year and wanted to use it at Pwn2Own, but unfortunately for us we got a bug collision: this vulnerability was also found by Project Zero, who reported it before we could use it at Pwn2Own. The bug was a race condition inside an external method of AppleIntelBDWGraphics, the graphics driver of almost all the most recent MacBooks, so it affects every recent MacBook with this particular CPU family, the newest one available in Apple products. We found it by code auditing of the code reachable from the Safari sandbox. It was patched in 10.11.4, and it was reliably exploitable, so it was a cool bug, but like we said, it got fixed before we could use it. It's also funny because Apple fixed it wrongly, and Flanker reported this mistake, so now the bug is properly fixed.

You know, previously Apple received a lot of reports, and they actually lost track of this one, even though they had fixed it. I wrote an email to them, and maybe one or two months later they assigned a CVE and said, oh, we are sorry, we lost track of it. I hope that now that they pay bounties to everyone, they will have a better attitude about this sort of thing.

So, very quickly, how this bug works. There are these user clients related to OpenGL and OpenCL which can be used from the sandbox, and the problem is that whoever wrote this driver didn't think about race condition problems, so we have a race condition inside unmap_user_memory. As you can see here, there are some operations performed on an IG hash table inside this external method: three different methods are called sequentially on this hash table, but the lock is acquired only afterwards. So what happens if two threads are racing inside this function? Of course, the race condition is triggered.

How do you trigger it? Well, it's actually very simple to trigger this bug; not so simple to exploit it, as we will see. To trigger it, you just have to open this user client, call map_user_memory one or more times to insert an element into the hash table, and then try to race two threads that both call into unmap_user_memory, repeating this map and unmap race until you trigger the bug. At first, the bug manifests itself as a double free, but as we will see, we can turn it into something more useful.

OK, now everyone please recall your data structures classes from university, or from high school if you are a genius, and let's look at this linked list. We know that if you call map_user_memory, your input is maintained in a linked list, and this list also hangs off a hash table, because as you know, hash tables have collision issues, and this data structure solves that problem by keeping a list of the colliding elements. So we have these IG elements linked here, and each element has a previous pointer to its sibling and a next pointer to its next sibling. The ideal situation we originally imagined was that if we race the threads, they will both pass the hash table's contains check, and while one is retrieving a pointer from this IG element, the other will free it; then we can fill something into this freed element to get RIP control, instruction pointer control. But in reality, when we did some testing, we found that while the two threads do both pass the contains call, thread one is consistently a little faster than thread two, maybe because of some scheduling policy issue on Apple's side.
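To make the trigger shape concrete, here is roughly what that map/unmap race loop looks like from user space. This is only a sketch: the selector numbers, the argument layout, and the single scalar "token" are placeholders, not the real driver interface.

    #include <IOKit/IOKitLib.h>
    #include <pthread.h>
    #include <stdint.h>

    io_connect_t g_conn;                  /* user client opened as shown earlier */
    enum { SEL_MAP = 0, SEL_UNMAP = 1 };  /* placeholder selector numbers */

    static void *unmap_thread(void *arg) {
        uint64_t in = (uint64_t)(uintptr_t)arg;   /* which mapping to remove */
        uint32_t out_cnt = 0;
        /* Both racing threads call the same external method on the same
         * element; the hash-table lookup/remove inside it is what races. */
        IOConnectCallScalarMethod(g_conn, SEL_UNMAP, &in, 1, NULL, &out_cnt);
        return NULL;
    }

    static void race_once(uint64_t token) {
        uint32_t out_cnt = 0;
        /* One (or more) map_user_memory calls insert an element into the
         * driver's hash table... */
        IOConnectCallScalarMethod(g_conn, SEL_MAP, &token, 1, NULL, &out_cnt);

        /* ...then two threads race inside unmap_user_memory on it. */
        pthread_t t1, t2;
        pthread_create(&t1, NULL, unmap_thread, (void *)(uintptr_t)token);
        pthread_create(&t2, NULL, unmap_thread, (void *)(uintptr_t)token);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        /* Repeat the map/unmap cycle until the race is won. */
    }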
So thread one will remove this element before thread two gets access to it, and thread two will unfortunately hit a null pointer dereference. And we know that after Apple finally introduced SMAP, you cannot actually exploit these null pointer dereferences on their platform anymore. So after, I believe, rehearsing all my data structures classes again, I found that you can actually race on two adjacent elements instead. Because, you know, if you remove an element from a linked list, the element next to the one being removed gets linked to the element before the one being removed. So in this list we have elements one, two, three, four: if two is removed, one gets linked to three, and if three is removed, two gets linked to four. If we race these two removes at the same time, we may end up with one linked to three, but three has also been removed, so we get a freed element that is still linked, a stale link, in this list. Then, when a list traversal happens, we hit this use-after-free, and we have turned the original double-free-style bug into a use-after-free, which is much more stable. I believe Ian Beer also found this bug, although, at least judging from his blog post and the Project Zero issue, he didn't figure out how to exploit it.

Nevertheless, Apple fixed it, and they added a lock in the remove function, but they did not add a lock in the add function. Maybe they think there is no issue there, but we would like to say no to that, because if you do not lock the add function as well as the remove function, you can still race the add function against the remove function at the same time, and you can actually get a heap pointer leak to get rid of KASLR. For example, if we are inserting an element into this linked list, it gets appended to the tail, after the current element; so it will be appended after element three, meaning element three gets a next pointer pointing to the new element four. But if, in another thread, you race to remove element three, you can end up with a kernel heap address inside a freed element, and with some memory manipulation technique you can read this kernel heap pointer back out. So this was their partial fix; we disclosed it to Apple, Apple fixed it again, and we are happy with it, so we believe there should be no problem now. If you are interested in more details, you can check out the slides.

OK, that is enough of the Apple stuff for now; let's look into the Android side. On the Android platform, the isolated_app context inherits from the application policy, app.te, which belongs to the app domain. The domain policy has some more complex rules: for example, in all app domains, a process is allowed to fork and to send and receive signals. Note that on iOS a container application is not allowed to fork, although on Android it is. And in the same app.te policy we have execmem, so you can map a portion of memory and make it executable, which lets you place shellcode there after you get instruction pointer control.
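As a small illustration of what that execmem permission buys an attacker, here is a sketch of creating an RWX anonymous mapping and staging a payload in it. The payload bytes are a placeholder, and the final jump is left commented out.

    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    /* Placeholder bytes standing in for real shellcode. */
    static const unsigned char payload[] = { 0x00, 0x00, 0x00, 0x00 };

    int main(void) {
        size_t len = 4096;
        /* The execmem SELinux permission in app.te is what allows an
         * anonymous writable+executable mapping like this one. */
        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE | PROT_EXEC,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) {
            perror("mmap");   /* with execmem revoked, this is where you fail */
            return 1;
        }
        memcpy(p, payload, sizeof(payload));
        /* After hijacking the instruction pointer an attacker would branch
         * here:  ((void (*)(void))p)();   (not executed in this sketch) */
        printf("got RWX mapping at %p\n", p);
        munmap(p, len);
        return 0;
    }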
But I believe this is going to be disallowed in the renderer process if they figure out a way to add a JIT mitigation, just like what Apple did the day before yesterday. So now we have an understanding of these sandbox policies, and we have some options.

The first is the binder interface. We know binder is the core of Android inter-process communication, and although the binder interfaces are strictly restricted in the Chrome sandbox, we still have ways to find bugs there and gain code execution in a more privileged context. For example, there is a bug, found last year by somebody else, in the Vector implementation of Android libutils. People tend to think it can only be triggered remotely through media file parsing, but it can actually also be triggered from inside sandboxes, because the marshalled data in a binder transaction is not validated. In the binder interfaces at the Java level, you can specify a class name as a string, and when the Java object is deserialized, a class is constructed based on that string name, so you actually have additional paths to reach the serialization and deserialization code in the system_server context. If you call the activity manager service interface, which is accessible from the renderer sandbox, you can make this deserialization code run in the system_server context, and you can actually trigger CVE-2015-3875 there.

Also, we believe that sometimes they forget to do a more fine-grained lockdown on the binder interfaces exposed to the renderer. For example, in this snippet of code, we found that they accidentally placed some extra service handles, like the package service handle, the window service handle, and the alarm service handle, in the binder responses, so the renderer holds these three handles when it is initialized, which opens up three additional attack surfaces. Unfortunately, the Google engineers are very clever; they figured it out very soon and it is already patched, unfortunately for us.

After this binder stuff, we still have the Chrome IPC. As Marco introduced in a previous slide, the renderer process has to ask the broker process, which is a more privileged process, to do things for it, and they communicate over Chrome IPC. Chrome IPC is implemented in native code, and there actually are some bugs in it. Also, on the Android platform, unlike on Windows, the WebGL handling runs in the host process: it is not a separate process in the Chrome browser, it runs in the host process, that is, the Chrome browser process itself. So if we have a bug in WebGL, just like what lokihardt did this year and last year, we can get code execution in that GPU code, which actually means code execution as a normal application on Android, and you get much more context to play with.

Finally, we also have some kernel attack surface, just like what we used against the Safari renderer sandbox. For example, CVE-2015-1805 is a bug that was used by us to gain stable root on Android, and we tried to port it to the isolated-process sandbox; there is good news and bad news about exploiting it there. The good news is that there is no pipe policy.
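Concretely, "no pipe policy" means the primitives this bug lives in are plain, unprivileged syscalls that nothing in isolated_app.te forbids. A minimal sketch (it only shows that pipes and vectored I/O are reachable, not the bug itself):

    #include <stdio.h>
    #include <string.h>
    #include <sys/uio.h>
    #include <unistd.h>

    int main(void) {
        int fds[2];
        if (pipe(fds) != 0) {
            perror("pipe");
            return 1;
        }

        /* Vectored writes/reads on a pipe: the same syscall family
         * CVE-2015-1805 lives in, available to any process. */
        char a[] = "hello ", b[] = "sandbox";
        struct iovec wv[2] = { { a, strlen(a) }, { b, strlen(b) } };
        writev(fds[1], wv, 2);

        char buf[64] = {0};
        struct iovec rv = { buf, sizeof(buf) - 1 };
        readv(fds[0], &rv, 1);
        printf("read back: %s\n", buf);

        close(fds[0]);
        close(fds[1]);
        return 0;
    }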
You know, this bug is related to the pipe iovec handling in the Linux kernel, and there is no pipe policy for the isolated application, because frankly nobody believed this bug was exploitable, so there is no rule about it in the isolated_app policy; and you cannot really ban such a fundamental facility, one that is used by almost every Linux process. But I have to admit there is still bad news: our exploitation technique used many message calls to groom the kernel memory, and unfortunately we cannot do that from inside this sandbox. So it forces us to find new ways to exploit it, and I admit this makes our life harder.

Also, even if you are clever, you cannot make your neighbors clever too, because the vendors always tend to make mistakes. They are always in a hurry, and they put buggy code here and there. Here is a bug in Huawei phones, in an HW-EXT service running in the system_server context: you can see a very obvious integer overflow, and you can get out-of-bounds access, both read and write. Google, of course, is not aware of these kinds of vendor interfaces, and the vendors themselves do not modify their SELinux policy to account for them, so they end up opening new interfaces for attack.

So let's go to the comparison part. We analyzed the implementation of the sandbox on OS X and Android, and now we want to do some comparison and wrap up. After this short comparison, we will do a quick demo of our OS X remote code execution and remote kernel code execution, and after that some very short conclusions. Both platforms actually share a lot: they both have a file-based sandbox profile definition which you can audit. But beyond that, we feel that the Chromium sandbox on Android is stronger, because it offers a very small attack surface. Also, on Android the sandbox is more layered: first you have the isolated_app SELinux policy, and then, even if you escape that, Chrome is still just an application, so it is restricted by the DAC sandbox. There are quite a few layers you have to take into consideration.

So let's go to the demo, and after the demo we'll go to the conclusion. We are presenting two demos from this year's Pwn2Own, both exploited on OS X. The two demos are both remote compromises of OS X (I can't find my pointer). One is a Safari code execution bug followed by a sandbox escape in user space, and the second one is a Safari renderer bug followed by a sandbox escape into the kernel. In this demo, you will see the victim, inside the virtual machine, browse to a website, and then the attacker on the left will get a remote root shell. 10.11.3 is what we exploited; it was the newest version at the time of Pwn2Own. First comes the Safari renderer bug in the JavaScript engine, the JSC engine, and then the sandbox escape; and you can see on the left side the attacker gets a remote root shell.
Because we are exploiting the WindowServer, which is responsible for the user-space graphics stuff in OS X, you can see the graphics freeze on the victim machine, so we chose to get a reverse-connecting root shell. This is our user-mode root exploit, and as I said before, even with user-mode root you cannot make it make a sandwich for you; you still need a kernel exploit. Also, credit to QB for making this video, and there is some fancy music, credit to Hans Zimmer.

In the later demo, with the kernel exploit: if you try as root to spawn a calculator, say with sudo, it will not tell you "no, I will not spawn a calculator for you", it will just say "illegal instruction". But from kernel mode, you can actually spawn the calculator as root. So if you see a root-UID calculator on your computer, oh, you are doomed.

So, last slide. To draw a conclusion, we believe that sandboxes are great security mitigations. I believe all recent advanced operating systems have sandbox support; they take somewhat different approaches, but they share the same concept, and they force attackers to need additional bugs. Still, a determined attacker can compromise a system, as you saw in the previous demos. Credits go to my colleagues and the other members of KeenLab. If you have questions, you can contact us on Twitter, at KeenLab, and also my Twitter and Marco's Twitter, or you can find us around if you have any questions. Thank you for your time. Thank you.