>> Good morning and welcome to DEF CON Sunday! How are we doing? Yeah!

>> (crowd cheers)

>> That is a disturbing level of enthusiasm. Wow. Welcome to the most coveted speaking slot, first thing in the morning on Sunday. Funny story: a couple of years ago I spoke at Black Hat, at the same time Dan Kaminsky was giving his talk on DNS, and hardly anyone came to mine. David was my speaker handler there, so I'm returning the favor by introducing him in this highly coveted speaking slot. It's an interesting talk, and I'm excited to hear more about this stuff. Let's give Dave Mortman a big hand.

>> Good morning, everyone. Thank you for coming out at this really stupidly early hour. I appreciate the effort. Let's talk about Docker and the whole security thing with regards to that.

A little bit about me. In my day job I'm a chief security architect for Dell Software. In spite of that, they let me use a Mac for the most part, unless I go to customers, in which case I pull out the Windows thing. I do cloud stuff most of the time, and I've been poking around a bit at Docker.

Docker got a lot of publicity in the last year. Everyone is like, Docker this, Docker that. You can't go anywhere near a tech blog without someone talking about how awesome Docker is. So what is the really big deal about Docker? In some sense it's not a big deal at all. It's just a container, and for those of us who have been around for a while: remember jails? Remember chroot, because that's more secure? The cool thing is that you take a standard container, a jail or LXC, which is the modern version of that stuff, and you wrap it with metadata. You give it context about what's inside the container. The container can now tell the rest of the operating system what's inside it. So now you can say, hey, this is a package format. We have taken a container and made it a package format.
So now it's just like any other packaging format. Instead of a single executable with a list of dependencies you need to download yourself, or rely on your favorite package manager of choice, everything is self-contained in this little package. It's cool and effective from an operational perspective. Life gets easier, especially when you look at things like: hey, I'm developing something, and I need to hand it off to QA, who then hands it off to some security team for evaluation, who hands it off to production. If you're lucky, that's the order it goes in. If you're not lucky, we get calls three weeks later: it's in production, can you scan it? In theory that's how it works. The great thing is that what goes from dev to production is the exact same code. You avoid things like "it worked on my laptop," or "well, we thought you had this version of the library in production, but in dev we're three versions later." It's convenient that way. From an operational perspective it's awesome.

The problem, of course, is that everything has security issues in it, because, you know, what doesn't? In the last year people have gone to lots of effort to say, oh my God, containers don't contain. They're not secure. They're not like a VM, because VMs are secure. We know that, right? Containers, in some absolute sense, are not as secure as VMs. They're much lighter weight in terms of isolation, but they're pretty good, and the fact is, for the most part they actually do contain. I'll get to a couple of places where they don't do full containment in a little bit. If you look at what it was like 20 years ago with jails, you've significantly reduced the attack surface that someone can go after. Realistically, if they escape the container, they're just where they would have been running on bare metal. So it's not actually a huge loss of security at that point.
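As a concrete sketch of the self-contained packaging idea above (the image name, tag, and app binary here are made up for illustration; the speaker doesn't walk through this):

```shell
# Everything the app needs travels inside one image; the same artifact
# moves unchanged from dev to QA to production.
cat > Dockerfile <<'EOF'
FROM debian:stable-slim
COPY myapp /usr/local/bin/myapp
ENTRYPOINT ["/usr/local/bin/myapp"]
EOF
docker build -t myapp:1.0 .
docker run --rm myapp:1.0
```

The point is that the image is the unit of hand-off: QA and production run the exact bits that were built, not a rebuilt approximation.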
In particular, there were a few blog posts over the last year where people were like, here's a trivial escape from the container, I can do this. There was a beautiful one where, as a Docker user, you could launch a container, create an SUID shell, copy it out of the container, and run it as root on the host. Whoops. That's scary. I should validate that, I thought. So I went through all the posts of container escapes that people have published in the last year, and Docker has fixed all of them, mostly through changing the default configurations. Funny how that works. It's things like: you know what? Don't run Docker as root. Don't run your containers as root, and do a few other basic hygiene-type things. The equivalent of washing your hands and putting away the trash. That's good. Escapes aren't trivial anymore. However, there's still a lot to do.

Where are we today? What do we get from Docker, or from container tech in general? They're all the same. There's appc, and there's the Clear Linux thing that's not quite a container, but they all have the same basic structure going on. They all have some sort of basic container management to limit what you can do. They all have cgroups, and they all have namespaces, mostly. Most of the key areas are namespaced. This is good. If you're in one container's network stack, you can't see another container's network stack. Generally a good idea. They all have things like iptables. The file system has its own namespace, and processes have their own namespaces.

There are two key places that are not yet namespaced, and they're being fixed. There's no user namespace yet. This means that if you're operating as a particular user in a container and you escape the container somehow, you operate as that same user outside the container. Not so good. They're fixing that in the next release of Docker; we'll talk more about that later.
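A quick way to see the namespace isolation just described, as a sketch (this assumes a Linux host with Docker and the stock `alpine` image — my choice for illustration, not the speaker's):

```shell
# PID namespace: inside the container, the entrypoint is PID 1 and
# host processes are invisible to it.
docker run --rm alpine ps aux

# Network namespace: the container sees its own interfaces and routing
# table, separate from the host's and from other containers'.
docker run --rm alpine ip addr
```

Running the same `ps` and `ip` commands on the host shows the full process table and the real interfaces; inside the container you only see the container's own slice.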
It implements user namespaces in the underlying infrastructure, so pretty soon we'll have user namespaces all the way up.

Another big issue, however, is that the kernel keyring, that place where crypto secrets live, is not namespaced at all. If the host OS puts something critical in the kernel keyring, any of the containers can see it. Not so good. Also, if you have multiple containers running and any one container happens to put something in the kernel keyring, all the containers can see it. So if you need to use containers, really care about what kind of key management and secrets you're dealing with, and similarly be careful about the user namespace stuff. This is why the state of the art is to run one container per VM, or one container per bare-metal box. You get a lot of the benefits of containers in production without running the risks around the keyring situation. That's a useful thing to consider.

The keyring stuff can be addressed by running SELinux. Keep your hands up if you're using SELinux. Now keep your hands up if the first thing you do when you get your operating system up is turn it off. Exactly. SELinux is a really cool tool, but most of us, if we're not Dan Walsh, don't use it to the full extent of its capabilities. This is one of the pain points still in containers. Running SELinux by default solves this particular keyring issue, is my understanding, but to really get the benefit out of SELinux takes a lot of time and effort.

All right. So there are dedicated network stacks, as I mentioned. When Docker first came out, there were no signatures; there was no one validating that the container you were downloading from a registry was actually the container you thought you were getting. Nothing. That's kind of crazy. It's not comforting at all. It's terrible. In Docker 1.3, maybe 1.4, they started to offer signing of manifests for official Docker containers.
So if you were going to download a container from Docker that has the official Docker stamp of approval on it, the manifest describing the container would have a signature on it. That's a good step forward, except for the part where the container itself isn't signed, so there's no way to actually validate that what's in the manifest is what's in the container. But boy howdy, is that manifest signed. I was like, okay. So what does this get me? It gets me a validated manifest, so I feel comfortable. Okay, I don't. But they're fixing that; I'll talk about that later as well. That's kind of cool.

What Docker has done: the folks there have hired new people in the last six months to a year to work on securing Docker. I have spoken with them several times now, and they basically released Docker knowing there were security issues. They're like, this is early code. We have a road map for fixing the security issues, and every single release adds extra functionality on the security front. We're getting better. That's where we want to be. Definitely not the opposite trend or direction.

They recently released a high-level paper on securing Docker. I'll be posting the new version of the slides online, with a whole section of links to everything I mention over the course of the talk. So you have a great high-level paper on how Docker and containers work in general and security things to do. They also recently released, with CIS, a document on how to harden Docker. It's 190 pages, so I had a lot of spare time, apparently, and read it all. I've put up some highlights for you so you don't need to read it all, but it is worth going through. One of the things you're going to find is that as you go to lock down Docker, the list sounds a lot like locking down anything else, really. There are a few special things around Docker -- it's an application, and it has its corner cases -- but in the end there's a lot to do just like anything else.

So, they recommend: restrict network traffic between containers.
If you run multiple containers on your host, don't allow the containers to talk through internal buses, through the internal operating system guts. Make it go over the network. That's a great thing, because you maintain that network namespace and the integrity of those separate network stacks. As soon as the containers communicate through the host OS, you lose protection. Always, always, always make containers talk across the network. Generally they're just using loopback anyway, but it goes out through the stack and back, and iptables and things also take effect.

Here's a clever one: turn on auditd for all of the Docker files and the daemon itself. And here's the radical part: you actually have to read the logs. I know, I know. We don't generally do that in this industry. We just collect the logs or spray them to /dev/null. But it's 10:00 a.m. on Sunday and most of us are somewhat hung over, so: please review the logs. It will make your auditors happy at least, and they'll be nicer to you. That's worth something right there, I think.

This is a good default: don't turn off TLS when connecting to a Docker registry. I think we all know this. It's on; don't turn it off. In fact, don't let the Docker daemon itself listen on the network. If you're in production you may not be able to avoid that, but if you're doing it locally, there's no need for a network connection; the Docker client is right there on the machine anyway. Don't have the Docker daemon listening on the network. That gives you a lot of protection, especially because the Docker API has no authentication. It has no concept of identity yet, or roles. It's just local: here's me, there's me. So, please, don't go out on the network. If you have to go out on the network, enable cert-based authentication on top of that, client certificates or something like that. Then you get the comfort level that only the people you know are using it. There's no authentication built in, so proxy something on top of it.
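The daemon and auditd settings just described can be sketched roughly like this (syntax per the Docker 1.x era and CIS-benchmark-style audit rules; check your version's docs, and note the watch paths are the common defaults, not guaranteed for every distro):

```shell
# Force inter-container traffic onto the network: disable the default
# inter-container communication over the docker0 bridge.
docker daemon --icc=false

# Watch the Docker binaries and data with auditd.
auditctl -w /usr/bin/docker -k docker
auditctl -w /var/lib/docker -k docker
auditctl -w /etc/docker -k docker

# The radical part: actually read what auditd collected.
ausearch -k docker | less
```

With `--icc=false`, containers can only reach each other through ports you explicitly publish or link, so iptables and the separate network stacks stay in play.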
Give yourself some safety if you put it on the network. This happens in any larger environment, or if you're doing some sort of orchestration using third-party tools. You have to put it on the network, which sucks, but give yourself some protection.

Another radical idea: lock down all the config files to root only. They contain critical information, so they shouldn't be writable by anyone else, though they can be readable. If you use certs, make sure they're owned by root, mode 400. This is more or less obvious, but I've seen multiple test installs of Docker where the certs were left wide open, so check those things. This is not rocket science. That's a different talk.

Don't run your containers as root. Run them as non-root users, which gives you some protection, like with Apache and Tomcat and MySQL. This is a weird thing, I know. Don't download the Internet and click on it, right, people? Come on. That was funny. Okay. This is generally a problem space; it's not just an operational issue, but I'll get to that later on.

Minimize your package installs. Basic systems administration 101: don't install shit you don't need in your container. One parent process; keep it simple. Containers are fast to spin up, and people are increasingly doing SOA and things like that. If you have a container, just have one app running inside that thing. If it's a microservice, fine. You don't need to build your entire application stack top-to-bottom in one container. It's tempting -- it's all one cute package -- but it's not that much harder to spin it up as three separate containers and keep the communications more secure. It's easier to audit that package, and it's much easier to avoid dependency conflicts and issues with things brought in by third-party libraries, which I'll get into later.

Linux has this concept of kernel capabilities. Take advantage of those. Restrict the container to have only the capabilities at the kernel level that it needs.
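The non-root-user and capability advice above looks roughly like this on the command line (`myapp:1.0` is a placeholder image; which capabilities to add back depends entirely on your app):

```shell
# Drop every kernel capability, add back only what the app needs,
# and run as a non-root user inside the container.
docker run --rm --user 1000:1000 \
  --cap-drop ALL --cap-add NET_BIND_SERVICE myapp:1.0
```

Starting from `--cap-drop ALL` and whitelisting is easier to audit than starting from the defaults and blacklisting.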
The CIS benchmark has a great list of what all those capabilities are, and the defaults are pretty good. If you start to do some weird raw-packet stuff, you may need to adjust that a bit, but the defaults are good. This is one where I'll say trust the default, but be aware of certain things; raw sockets do funky things in general, and that might break, and they're working on fixing that with the capabilities as well. Is the size okay for everyone? Okay. Generally speaking, the ones to watch are the general admin stuff, sys admin, sys module; the defaults take care of those, and that's all you need.

Don't use privileged containers. A privileged container has root-level access and lets you do root-level functionality. Generally speaking, if you're running privileged containers, you're actively negating the point of containers. That's not so useful. Avoid privileged containers unless you really, really can't.

Another rocket-science item: don't mount sensitive host file systems and directories in your containers. Your container doesn't need them mounted. I know. It doesn't need them. It really doesn't need /proc. So don't mount that shit.

This was one that surprised me: don't put SSH into your containers. You don't need it. If you need to access a container, log into the host operating system and use nsenter, which basically gives you the ability to jump into your container. Generally speaking, SSH is hard to secure and hard to manage. It's kind of funky; it does interesting, bizarre things with the stack and greatly expands the capabilities a container needs to work properly. So avoid SSH if at all possible. It adds complexity you really don't need.

Also, if at all possible, don't use privileged ports. If you're running Apache or similar front-facing applications below port 1024, you can't avoid that. But generally speaking, anything other than those front-facing services, don't run them on privileged ports.
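The nsenter trick mentioned above looks like this (`web` is a placeholder container name; this assumes util-linux's `nsenter` is on the host):

```shell
# Get a shell inside a running container without shipping SSH in it:
# find the container's PID on the host, then join its namespaces.
PID=$(docker inspect --format '{{.State.Pid}}' web)
nsenter --target "$PID" --mount --uts --ipc --net --pid /bin/sh

# On Docker 1.3 and later, 'docker exec' does the same job:
docker exec -it web /bin/sh
```

Either way, the container image stays minimal: no sshd, no key management, no extra listening port.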
Anything that has to run on a privileged port needs greater access to the kernel, and that adds to the attack surface. Your middleware on a privileged port, your database -- don't run them on privileged ports in a container. Don't do that shit.

Again, set reasonable limits for resource usage. Anybody had the pleasure of configuring Java heap sizes on the JVM, all that shit? Yes, I see hands. You're going to have that same joy as you start running containers, but this is a good idea, particularly if you go to a production environment. Set those maximums and give yourself some protection from DoS attacks or runaway processes. There's no reason a container needs all the memory on a box; if it does, you want to run bare metal anyway, possibly not even a VM -- containers are not your best fit there. So set the CPU priority too. Again, this way you won't have a container go awry and kill your entire machine. Not crazy rocket-science stuff; set reasonable limits.

Does anyone actually like ulimits? Generally speaking, anyone who's been an admin running databases has bumped up ulimits constantly. Pay attention to those. That's a great way to protect yourself. Make sure you have a reasonable ulimit set, and at least that way you see what happens. You see that pain and suffering coming, and it prevents that container from getting out of control.

For the most part, with containers, there's no reason to mount your root file system as anything other than read-only. There's really no reason to ever mount your root file system read/write in a container especially. If you need to make changes to your container, take a copy of that container offline, make the changes you want, generate a new container image, and then launch it. I'll talk more later about configuration management and the ways in which containers change configuration management from a security perspective.

Only bind your containers to the appropriate network interfaces.
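The limits described above map onto `docker run` flags like this (values are arbitrary examples, and `myapp:1.0` is a placeholder image):

```shell
# -m caps memory, --cpu-shares sets relative CPU weight, --ulimit caps
# open file handles, and --read-only keeps the root filesystem immutable.
docker run --rm -m 512m --cpu-shares 512 \
  --ulimit nofile=1024:2048 --read-only myapp:1.0
```

A runaway process inside the container then hits its own ceiling instead of starving everything else on the box.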
Don't go with the default of having them bind to every network interface on the box. For the most part, most of your containers can be bound to loopback. Unless you have front-facing services running, there's no reason for containers to listen on anything other than loopback.

This is an exciting one: limit your restarts. Containers can automatically restart when they die. That's a cool feature. You want to limit that to three or five retries, or a few more than that. The last thing you want is a container constantly restarting and hosing your box. It's just generally good practice, even in dev environments. Otherwise, instead of a DoS attack taking down your box, you have a constant reboot cycle taking down your box. That's just as painful.

Don't share namespaces. The defaults keep namespaces separated between the host and the containers. If you share namespaces, you've destroyed the point of namespaces. Don't share. The default is not shared. Some people think it's easier to share a namespace, but then you might as well run one container, or not use containers at all. You've just shot yourself in the foot in that case.

Back up. I know, I know. Back up your shit. It's kind of fun getting to stand up here and say that.

Get logs. I know, logs. Logging is still a little bit tricky. The last release finally added syslog hooks, and that makes it easier. Every SIEM has a tutorial on how to get Docker container logs into their product up on their sites, so that part's easy. It's not ideal yet, it's still tricky, but there's progress. It's basically cut-and-paste programming, and you're probably in good shape in that regard.

Work with a minimal number of images. Anyone remember when we first started to do VMs, and people generated a VM for every single application they had, as opposed to having three or four base VMs and adding the applications on using Chef or Puppet? Don't get into the same situation with Docker.
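The interface-binding and restart-limit advice above looks like this (using `nginx` as a stand-in for any service):

```shell
# Publish the port on loopback only, so the service isn't exposed on
# every interface, and cap automatic restarts at five attempts so a
# crashing container can't reboot-loop the box.
docker run -d -p 127.0.0.1:8080:80 --restart on-failure:5 nginx
```

With `on-failure:5`, Docker gives up after five failed restarts instead of thrashing forever.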
That's a huge problem already for folks with maintenance windows and things like that. You can always add things on if you need to. Every time you add an image, management gets harder, especially above 12 or 13 images. People are really bad at managing large numbers of anything.

Run a minimal number of containers per host. For anything production I recommend one container per host; it keeps it simple, and if someone escapes, it's not the end of the world. If you run multiple containers, do it like services: I have a big honking box, so rather than running 12 VMs on the box, I run 12 containers all running the web server, or something like that. Still, make sure you have diversity across boxes, just like with VMs, so if you lose hardware, you're not down, just like anywhere else.

I talked a bit about trusted containers. You want to know that the container you're using is the container you think it is. So this becomes a supply-chain problem. How do you know that you actually have what you think you have? As I said earlier, right now Docker-published images have manifests associated with each image, and the manifest is signed. That's a start. It's not ideal because, like I said, the container itself isn't signed, so you don't know that what's in the container is actually what's in the manifest. They're fixing that in the 1.8 release, which is due out any second now, if it hasn't already been released. I've been off the Internet mostly this week because there was a security conference or two going on, and my laptop has not been on the wireless here -- I didn't want to be on the wireless here for some strange reason.

You want to watch the supply chain, and you want to validate that your containers are what you think they are. Given the current state, don't use public repositories. Set up a private repository, validate the images, use TLS only, and then just continue to sort of double-check things.
Keep the registry server and its keys under audit and monitoring, and have appropriate protections in place to make sure those containers have not been tampered with in any way.

There was a blog post about a month ago, maybe six weeks, where someone said 30% of the images in the public Docker repository are insecure -- this proves that Docker is insecure. It seems like a really big number. I bet it's actually on the small side. I did some research and poked around, and some other folks did deeper analysis, and what they meant was that 30% of the containers they found had a library or an application that was vulnerable to some ( inaudible ). Yeah, and? So you download it, and you do what you should do with every container, which is run an upgrade after you download it, to make sure you're running the latest versions of the code, and move on. Just because an image has a vulnerability in it isn't the end of the world. You can't assume your container is up to date, which is why I said earlier: patch. You have to actually pay attention to this stuff and patch your containers to keep them up to date, just like anything else.

Now I want to recommend something radical. This was not recommended by the CIS benchmark, but: don't use Chef or Puppet with your containers. Don't use any online configuration management with containers. I've got quizzical looks in the front row. The reason I'm telling you this is that containers are the ideal candidates for immutable servers. The reason configuration management was invented was the concept of configuration drift. Configuration drift is: you have the drawing on the shelf or the Excel spreadsheet saying this is the configuration of the web server. Over time you make changes, but they don't get copied to the spreadsheet or printed out and put in the binder for disaster recovery. Three years later, when you have an issue, no one knows what the configuration looks like. So Chef and Puppet were invented.
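The run-an-upgrade-after-download advice a moment ago can be sketched as a rebuild step (assuming a Debian-based base image, which is my example, not the speaker's):

```shell
# Don't trust a pulled image to be current: layer a package upgrade on
# top and rebuild, rather than running whatever shipped stale.
cat > Dockerfile <<'EOF'
FROM debian:stable
RUN apt-get update && apt-get -y upgrade
EOF
docker build -t mybase:patched .
```

You then base your application images on `mybase:patched` and repeat the rebuild on your patch cadence, just like patching any other host.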
Not only do they automate everything, but you basically have a CMDB: what Chef or Puppet thinks is the configuration is the configuration. In fact, if you run these tools and someone changes the configuration on the box, Chef and Puppet have a tripwire-type thing and say huh-uh. Any changes that happen outside that space get pushed back.

But Chef and Puppet are kind of heavy clients. In the container world you want to run one process, and that's not going to be Chef. It's pointless to have a container running Chef or Puppet; it's not doing anything, and your configuration is already good. Instead, because containers are so fast to spin up -- we're talking milliseconds in some cases -- you create a new container, spin that up, and shut down the old container. If there's an issue, if it doesn't work, you shut it down and bring the old one back. Classic A/B things. You might use your load balancer to shift traffic over to the new containers, but any change you make generates a new container. This is what Netflix does. They don't actually make configuration changes on Amazon; they burn new images and spin up whole new instances, hundreds of instances, and transition the load balancers. Facebook does similar things, as does Amazon. The great thing here is that you have a history of what everything looked like, you're not worried about a configuration run failing, and you keep your container nice and tight and clean.

Related to this, in terms of trusted containers: because you only have a signature on the manifest, how do you actually get any attestation across the life span of that container? There's interesting stuff coming.

There are other things we can do beyond these basics, which is that you can run AppArmor and SELinux. The cool thing is that if you run SELinux, once you get the configuration right, it lives with the container. You don't need to track that separately. You figure out what your ideal configuration looks like, and it's built into the container.
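The rebuild-and-replace flow described above, as a sketch (container and image names are placeholders):

```shell
# Immutable-server pattern: never reconfigure a running container.
# Build a new image, start it alongside the old one, cut traffic over,
# then remove the old container.
docker build -t myapp:1.1 .
docker run -d --name myapp-new myapp:1.1
# ...point the load balancer at myapp-new, verify it works, then:
docker stop myapp-old && docker rm myapp-old
```

If the new version misbehaves, rollback is just pointing the load balancer back at the old container, which is still one `docker run` away.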
As you transition across your infrastructure, it goes with it, so at least that stays consistent.

There's a cool tool called seccomp. You can get really tight control, on a case-by-case basis, over what system calls go back into the kernel and the operating system in general. That's pretty cool.

And there's a cool tool called Docker Bench for Security. I'll have the links to that, along with the other stuff, when I get the slides redone. What Docker Bench does is go through your setup and validate, or alert you to, the configuration and settings of your Docker containers. Checks for most of the recommendations I've made here are built into Docker Bench. Download it and make sure you're in good shape. It's a complex thing to check by hand, and this does it every time you make a change, so that's a win right there.

There are also two third-party things you can use to lock down your containers more. The folks at Canonical released a project called LXD -- lima, x-ray, delta. That is a container hypervisor; they're building the container version of a VM hypervisor, a thin layer like you have with a traditional hypervisor. That's out, it's continuing to mature, and it looks promising. Then there's a commercial package. They do policy and security for containers in general -- they do VMs and containers -- originally built with platform-as-a-service in mind, but pivoting as containers take off in general. It's a policy-based system that sets what containers can do. This is looking promising; I haven't done a deep dive on it. Derek, who is the primary author of Cloud Foundry, is behind it, and he's been involved in the cloud and virtual computing space forever. It looks very promising. That's another one to check out, depending on what you have access to.

There's cool stuff coming from Docker, too. Docker is not resting on their laurels, going "it's good enough." They're continuing to add security. At DockerCon several months ago they announced a new product called Notary.
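Docker Bench for Security is a shell script, and running it looks like this (invocation per the project's README; it needs root and a Docker host to inspect):

```shell
# Clone and run the CIS-style checks against the local host and its
# running containers; output flags WARN/INFO per benchmark item.
git clone https://github.com/docker/docker-bench-security.git
cd docker-bench-security
sudo sh docker-bench-security.sh
```

Most of the hardening items from the CIS list earlier in the talk show up as individual checks in its output, so you can rerun it after every change.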
It's a secure package management system based on The Update Framework. Notary is coming out in 1.8 -- out any day now, and may in fact be out -- and it gives you the ability to have signed manifests not only from Docker; it lets anyone sign. More importantly, Notary, which is part of the v2 registry, is content-addressable, which means that your manifest now contains a list of hashes of all of the contents of your container. So you don't need to sign the container itself or encrypt your container, though you could do that if you wanted to. Now, when you get the manifest, you validate the signature on the manifest, and the manifest lists hashes of all the contents of the container. So you can validate that what's in the container is what you think it is, and it's not restricted to official Docker containers at this point. You can do this yourself in your private registry. You can have much more confidence that the container you downloaded last week, validated, and found acceptable for your standards is still the same container.

The Update Framework is cool because not only does it address the file contents, it has a concept of freshness. What this means is that when you go to the registry and say, hey, I want this container, and the registry says, go to this mirror over here in the western U.S., or this one in England or Ireland, your client looks at what's on the mirror, looks at what's on the master, and validates that it's the same thing, so you get the most recent version. Obviously mirrors get out of sync, especially right after updates. Now you know not only that what you got is what the manifest says is in the container, but that you have the version you want. If you want a new version, it's the right new version, and the same for older ones as well. That's pretty cool. It also has the concept of snapshots.
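The content-addressable idea can be sketched without Docker at all: the address of a blob is a digest of its bytes, so identical content always gets the same address, and any tampering changes it (plain shell with `sha256sum`; nothing here is Notary itself, just the underlying principle):

```shell
# Two identical blobs hash to the same digest; a modified blob gets a
# different one, so a signed list of digests pins the exact contents.
printf 'layer contents v1' > a.bin
printf 'layer contents v1' > b.bin
printf 'layer contents v2' > c.bin
d_a=$(sha256sum a.bin | cut -d' ' -f1)
d_b=$(sha256sum b.bin | cut -d' ' -f1)
d_c=$(sha256sum c.bin | cut -d' ' -f1)
[ "$d_a" = "$d_b" ] && [ "$d_a" != "$d_c" ] && echo "content-addressed: OK"
```

That's why signing just the manifest is enough once the manifest lists content hashes: verifying the signature plus recomputing the digests verifies the whole container.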
You can version your container, so it makes it easier to roll back to a different version, or forward. It's also designed for survivability of key compromise: if a key gets lost, the spec is designed to allow you to survive the compromise, so that's kind of cool. The folks at Docker had this audited by a well-known security firm. When they release it, they won't say who did the work, but I know who they are, and they're very talented folks you've heard of. That looks promising as well. They're doing all the right things in terms of adding security in that space.

They're adding user namespaces, finally. Once they add user namespaces, that means you can have a user that's privileged inside the container but that the operating system thinks is a general user. This is in 1.8 -- that's the underlying infrastructure that makes it work at all. It will bubble up all the way through Docker in the next release or two.

There are a few places that still need some help. I already talked about how the kernel keyring isn't namespaced. That's the whole problem with putting secrets in there: other people can see them. They've solved that sort of, kind of, but it's not ideal yet. In terms of managing secrets, there are two open source projects -- Vault from HashiCorp, and Keywhiz -- to manage keys and secrets in a container environment. Check those out.

As I mentioned, the Docker API has no concept of authentication or authorization at this point. They're working on that, but be aware: if you use the API locally or on the network, again, put a proxy in front of it so you can get cert-based authentication or something on top.

I mentioned seccomp, SELinux, and AppArmor. The big fans say, oh, it's easy. No, it's not. At this point, it's my opinion that seccomp is really ( inaudible ) for the most part. The tools for managing them are not there. This is actually my biggest point of nervousness about Docker.
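The per-container mandatory access control just mentioned is applied with `--security-opt` (syntax per Docker 1.x-era docs; the SELinux type and profile names here are examples, not something from the talk, and the host must have SELinux or AppArmor actually enforcing):

```shell
# SELinux: run the container under a specific type, so the policy
# travels with the container as described above.
docker run --rm --security-opt label:type:svirt_apache_t myapp

# AppArmor: confine the container with a named profile.
docker run --rm --security-opt apparmor:docker-default myapp
```

This is exactly the tooling the speaker calls hard to use at scale: the flags are simple, but writing and maintaining the policies behind them is the real work.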
The tools you need to use to make these things much safer are really hard to use, and they're hard to use at scale, which means a lot of containers are in a less-than-ideal state because of that. I try to run something with SELinux on, and the quick solution to fixing it is to turn SELinux off, because that's the fastest route. I'm one of those people. Logging is getting better, but it still needs help. Orchestration, again: if you do anything at scale you need things like Mesos or Kubernetes, and they're still for the 1%. It's early; if you're not Google or a handful of others, you're not using them yet, and they're hard to use. That's what's left at this point.

Then, like I said, I'll post the resources -- I'll send out the latest slides with all the links, because you don't want to have to take screenshots of all this.

Just to finish up: it's not as bad as it used to be. A year ago it was horrible; six months ago it wasn't so bad. We're at a place where Docker is usable, and if you're at that far right end of the curve, it's really usable. It's relatively safe to use at this point, and again, if you go to production, please: one container per VM at this point. And that's my story for the day. I have just a minute or two for questions, if there are any. Otherwise I'll give you five minutes back.

( Inaudible question )

>> What about Docker on other OSes? I'm not familiar with the second one. I haven't done a deep dive into the security of those in particular. I assume at this point they have the same general issues to deal with. You know, containers are containers, regardless of OS, at this point.

>> Last month Docker was ported to FreeBSD. They were running it in a jail on ZFS. That combination itself should make for interesting security issues.

>> Definitely.

>> Interesting features rather than issues.

>> I want to make sure everyone heard that. Our audience member said that Docker was recently ported to FreeBSD, working with jails.
I'll agree, I wasn't aware of that. That sounds cool, though. I'll definitely have to check that out. Thank you very much, everyone.

( Applause )