Today we're going to talk to you about how to secure your infrastructure-as-a-service environment in one minute. Obviously our talk is not one minute, but the securing will be. So, a short introduction before we get into the details. My name is Nir. I'm a public speaker, and I manage security for the retail division within NCR. I'm here speaking about something that is my passion. And just one thing you should know about me: I like sport, just not sweating sport. I'll let Moshe introduce himself. >> Thank you for coming. My name is Moshe, and I actually don't like sports, which is pretty much the same as liking non-sweating sports. I've been working with the innovation ecosystem in Israel for the last couple of years, working with startups — we have a lot of startups in this neck of the woods, as you probably know — and I've been examining their challenges: how they adopt cloud, how they handle cloud security. That is where this talk comes from: our experience with the introduction of cloud services. "Cloud" and "cloud security" are such large words that I want to state exactly what we mean and focus the talk. First of all, we're going to talk about IaaS, infrastructure as a service, a term I'm pretty sure you're familiar with: Amazon Web Services (AWS), Google Compute Engine, Azure, Rackspace. Those are the providers we're talking about. Inside infrastructure as a service there is a relatively new layer, introduced in the last five to six years: the orchestration layer. It's the layer that enables automation, allocating resources between the different cloud services.
It's the layer that will spin up your virtual machines, connect the instances, attach IP addresses and storage — a very important layer, and also the one that needs to be addressed when we talk about security. So, to focus: we're talking about how to use orchestration in order to increase security in IaaS environments. And why do we need this talk? What has changed? What are the attack vectors? What you see here are attack vectors that are either unique to the cloud or amplified by its characteristics. We're not going to talk about all of them; we're going to focus on three, but I'll give you a quick briefing for background. We have provider administration — someone else is managing our data. We have the management console, which gives you access to the infrastructure-as-a-service environment through a very wide dashboard. You can do so many things with it; you can reach almost every aspect of your organization. A very scary attack vector. We have multi-tenant infrastructure: everything in infrastructure as a service runs in a multi-tenant environment on virtualized software and hardware, so that is also an attack vector. We're not going to talk about that one — there's a lot of material on hypervisor security, side-channel attacks and so on, but that's a different talk. Automation and APIs: everything in the cloud is API based. Everything you can do in the dashboard you can also do outside the dashboard, and that's how most cloud programmers work. It's also what automation is about, right? You move to the cloud in order to automate — that's also an attack vector we're going to talk about. And the supply chain: you buy software from a SaaS vendor who builds it on top of platform as a service or infrastructure as a service, so the entire stack has to be secured — but we're not going to talk about that one either.
Side-channel attacks, again, come from the virtualized environment. Insecure instances: in the cloud it is very easy to launch instances, to spin them up. Sometimes it's so easy that we launch them and forget about them. We forget to harden them, we forget to do all those important things we used to do in the traditional environment. That's another thing we're going to talk about. So we're going to focus on those three attack vectors: the management console, insecure instances, and automation and APIs. Let's take a look at how those attack vectors are used in real-world attacks — take the BrowserStack story, for instance. A couple of months ago BrowserStack, a software-as-a-service company running on top of Amazon Web Services — their IaaS provider — was hacked, and this is how it went. The attacker found his way in through an insecure instance: an instance they had spun up a couple of years ago and forgotten about, standing there with the Shellshock vulnerability. That was the starting point. He got in and found an API access key — and giving somebody an API access key is about the same as giving them your console credentials. With that key he managed to spin up a new instance and whitelist it in the firewalls. Once he had a running instance with the firewalls open, he attached a backup disk. Inside the backup disk he found a database connection string, and from there on it's very simple to move on to the organization's data. So again, the attack vectors: insecure instances they forgot to lock down; automation and APIs — all those things you can do programmatically, like attaching a backup disk or whitelisting an IP address in the firewalls; and of course the wide dashboard that lets you do so many things, like connecting backup disks and changing firewall rules, from the same place.
So those are the new attack vectors we want to cope with, and I say we don't have good enough tools to do so. We simply don't; we haven't adapted security to these environments — to the new infrastructure and to the new software development methodologies that came with it. A lot of software development, and the infrastructure itself, has changed because of infrastructure as a service. We now have auto scaling: once your server hits 80% CPU load, it replicates itself automatically. We have entire environments spinning up — 200 servers launched at once, processing, and terminating after ten or twenty minutes, or even an hour. That's not something we had in the traditional network, so: an accelerated lifecycle. We see a lot of environments where, instead of being upgraded to new versions, servers are simply terminated — whoop, sorry — and relaunched as new instances running the new software. And one last thing: the way you're charged for infrastructure is changing. Providers started out billing per hour, and they keep reducing the granularity, so you can run bigger servers for one minute or ten minutes. That gives organizations more incentive to lower the time servers are up. And why is that a problem? Because so many of our corrective security controls are based on maintenance windows, right? Patch management, vulnerability scanning, penetration testing — all of this is done in periodic maintenance windows. Sometimes twice a day, sometimes once a week, sometimes once a month, sometimes never. But you have a maintenance cycle — and how can you have maintenance windows if your server is only alive for an hour, or two, or three? Security has not adapted to these environments. And what happens? Companies are moving to the cloud precisely because infrastructure will not slow them down. What happens next?
Security slows them down. And you know what happens when security slows you down: companies simply give up on it. This is the problem we're trying to solve — asking the security community to adopt new methodologies. It's not even about tools; it's a new way of thinking: how do we automate security in that world? Developers have learned how to automate software deployment and software testing; we are way behind. So we started Cloudefigo. Cloudefigo is open source — you can download it from the link here, it's on GitHub, everybody can download it and take a look. By the way, it's based on work from Rich Mogull; I don't know if he's in the audience, but if he is, the entire credit goes to him. Cloudefigo is a tool for automating the processes we mentioned. >> We understood the importance of creating a tool, so we decided to invest in it. Basically, we're the investors: we invested the whole $5 in the logo. >> I hope you appreciate it. [ Laughter ] Okay. So this is the tool we started. It's called Cloudefigo, and at the end we'll give details for everybody who wants to contribute, but first of all let's talk about what it does. It automates the instance lifecycle — instance operations in the cloud. We're talking about how to launch servers, load security configuration, encrypt, scan for vulnerabilities — all the stuff that usually requires maintenance windows — and then move them into production. What we're going to do next is show you those steps and what we do in each of them, but first let's talk about the components we used to build this lifecycle. You can swap out any of these components; we chose them either because they're free and open source or because our environment was Amazon Web Services, but with small changes you can definitely migrate it to other vendors. So what is the lifecycle?
What are the components inside this accelerated lifecycle? We use object storage — in this case Amazon S3, but you can use any other vendor's object storage. We use vulnerability scanning to make sure the instance is ready to go to production; in this case Nessus, but you can use other scanners — today there are even scanners that connect natively to the AWS APIs, which gives you some benefits. We use cloud-init. Cloud-init is the perfect tool for automation; if you're not familiar with it, invest five minutes to read about it. It lets you run scripts with root permissions while the server is launching, so it's a great place to put the granular scripts adapted to your environment. For configuration management we load Chef; you can use any other configuration management software — we just use Chef because, for our purposes, it's very convenient and free. And we use IAM roles — basically Amazon's mechanism for giving permissions to servers. You give permissions to servers because servers interact with the console through APIs, right? Usually when we talk about permissions and roles, we talk about users accessing the dashboard; Amazon IAM can also give permissions to instances — to servers — defining what they can do with the Amazon APIs inside the Amazon environment. A lot of the research and development we did was to make sure we had the right configuration and access controls there, and we'll elaborate a little more later. And we do volume encryption. The cloud pushes you toward encryption by default, right? The only question is how you do it; we demonstrate a way to automate key creation and key delivery. To be clear, I'm talking about volume encryption, not encryption inside the database — encrypting the volume itself. What will be on the volume? Usually you install your database onto that encrypted volume.
So if something happens and somebody gets hold of a snapshot, they will not be able to use the data inside it. This is the lifecycle we're talking about: launch an instance, update it, take control of it, scan it, move it to production, and then terminate it — basically the lifecycle of every average cloud server. What we'll do now is walk you through those phases and give you a quick demo of each one. When you launch an instance, every machine handles its own encryption keys, and it starts in remediation: when a machine is launched, it is placed in a remediation security group, and only when it's ready is it moved to the production group. It's a methodology we know from network access control, right? NAC prevents workstations from connecting to your corporate LAN; it makes sure you're okay, and only when you're okay does it move you to the production or users VLAN. Managing those attributes requires permissions, and those permissions are higher during the launching phase; in production you want as few permissions as possible. We created something we call a dynamic policy — want to explain a little what you did there? >> Yeah. Since we wrote the API calls to Amazon, we know exactly which API calls are in the code, so we created a list of the permissions needed while launching an instance. And actually we created a concept — a new concept, at least that's what we call it: when you launch an instance, you can assign only one role, one very specific role, to that instance on Amazon, and you won't be able to change the role afterward. That's how Amazon works. So that's the reason we decided to edit the role in place when moving to production.
In production, we won't need permissions such as moving an instance from remediation to production, or putting an encryption key on the storage — we won't need them. That's the reason we reduce the permissions later; I'll demonstrate it. >> This is how the role looks at the launch phase: a lot of different API actions, very wide permissions. Later on, when the server moves to production, it is much reduced. At launch, again, cloud-init is a great way to automate: when the instance is launched, we simply inject all the scripts we want into the launching phase. Sometimes people ask why we didn't use a pre-baked image. A pre-baked image contradicts the idea of automation, because each time there's a new patch you need to prepare a new image. We prefer to use the latest base images and run the initialization scripts while the server is launching. >> At this point, I'll show how it works. >> Before I start — you know how it is, all the people at DEF CON not connecting to the Wi-Fi, so the AT&T and Verizon networks are basically flooded. We hope our live demo will work; if not, we have a backup. But that's fine. So, for example, I want to start by explaining what we have with the Cloudefigo tool. Wow, that's big. We developed the tool in Python, since we wanted to make API calls, and its API is exposed by a small web server — and now we're just starting that server. On this server we have our own API call to, let's say, launch an instance. So we'll just go and launch one. Launching the instance takes time; you'll see it actually creates a new Amazon IAM role, and then it needs to launch the instance with that role.
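The launch-versus-production split described above can be sketched as two IAM policy documents. This is an illustrative sketch, not the actual Cloudefigo policies: the action names are real AWS actions, but the bucket and object names are made up, and the real tool derives them dynamically.

```python
# Illustrative sketch of the "dynamic policy" idea: a wide launch-phase
# IAM policy, later edited down in place to a minimal production policy.
LAUNCH_POLICY = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": [
            "ec2:DescribeInstances",
            "ec2:ModifyInstanceAttribute",
            "ec2:CreateVolume",
            "ec2:AttachVolume",
            "s3:PutObject",
            "s3:GetObject",
        ],
        "Resource": "*",
    }],
}

def production_policy(bucket: str, key_object: str) -> dict:
    """Return the reduced policy: read access to one key object only."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": f"arn:aws:s3:::{bucket}/{key_object}",
        }],
    }

# The reduced document would then be pushed over the existing role, e.g.
# with boto3's iam.put_role_policy(...) -- editing the role in place,
# since the instance's role can't be swapped after launch.
prod = production_policy("cloudefigo-keys-example", "instance-key")
```

Because the role attached at launch is the one the instance keeps, rewriting its inline policy is the only way to shrink permissions, which is exactly the "edit the role when moving to production" trick the speakers describe.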
Amazon has a pretty wide infrastructure, and it takes time to synchronize the IAM role that was created a moment ago with the instance we're trying to create — that's the reason we have timeouts. It's pretty common when you get into development against Amazon; you'll see there are a few timers here and there just to make sure things work. And eventually we see we got a 200 here, so we should be good to go. Now I've connected to the Amazon Web Services console. Okay. We can see we have a new instance called "secured instance". This is the instance that will eventually become the secure instance down the road. When it starts, we can see it starts in the remediation security group, and it has the role we created — it's a pretty long name, so I'll just click it. I want to see the list of what it is allowed to access, so I go to the policy, and you can see it's a pretty long list of what the instance is allowed to do; later we will reduce it. I'll let Moshe continue with the explanation. >> By the way, we're sorry the screen resolution is low, so you don't see the full screens, but those of you familiar with Amazon probably understand where we are. For those who are not: we're looking at the EC2 instances screen and the IAM roles screen — two different Amazon modules. Okay, moving on. The instance is launched, and as we speak it's initializing itself. What happens next? Update the OS. Again, we don't have maintenance windows to do the patch management we need, so we do it on the spot: we upgrade all our packages, and through that cloud-init script we install everything else we need for the instance to move on. Want to explain what we have in there?
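The IAM propagation delay mentioned above is the classic eventual-consistency problem, and the "timers here and there" amount to a retry loop. Here is a minimal sketch of such a helper; the fake launch call below is made up purely to demonstrate it — the real call would be the SDK's instance-launch API.

```python
import time

def retry(fn, attempts=5, delay=2.0, retry_on=(Exception,)):
    """Call fn() until it succeeds, sleeping between attempts.

    AWS IAM is eventually consistent: a role created a moment ago may
    not yet be visible to EC2, so the first launch calls can fail and
    must be retried. (Sketch only -- Cloudefigo's own timers may differ.)
    """
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except retry_on:
            if attempt == attempts:
                raise
            time.sleep(delay)

# Hypothetical launch call that fails twice before the role has
# "propagated", then succeeds with an HTTP-style 200.
calls = {"n": 0}
def fake_launch():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("InvalidParameterValue: IAM profile not found")
    return 200

status = retry(fake_launch, attempts=5, delay=0.01)
```

A real deployment would also use backoff (growing `delay`) rather than a fixed sleep, but the shape is the same.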
>> Yeah. Since we are using Python, we install the basic packages: python-wheel, so we get pretty quick installations; the Amazon SDK; and the management components we already mentioned. We also have our scripts on S3, and only these instances are allowed to access and download them, because they may contain configuration that should stay secret — or whatever else you choose to put there. That's the reason we also restricted that access control. Okay — oh, you jumped ahead. Okay. >> The next phase has nothing to show in the demo, so we'll skip it: install things and upgrade. No point showing you packages being installed. The next phase is where I take this new instance and harness it — put it under my configuration management system. Usually, in the real world on a traditional network, the IT guys finish installing the servers and hand them to the security guys; you wait a couple of weeks while the security guys do the hardening, the configuration, install the antivirus software — and all of that really slows down the progress. We want to do it really fast, including all those tasks the security guys need to do. So at this point cloud-init installs the Chef client. Chef, again, is configuration management: you build a recipe — basically a list of packages and commands you want to run — attach the recipe to the servers, and each server downloads everything it needs from a security point of view. There are a lot of Chef recipes on GitHub and elsewhere; you can use them to automate almost every aspect of your operations. Once the client is registered and the policy is downloaded, we generate the encryption keys. As I said, the goal of the encryption is to protect the disk; on the disk you'll probably have your application files or your database — it doesn't really matter.
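The bootstrap sequence just described — patch the OS, install the SDK, pull the access-controlled scripts from S3, register with Chef — is injected as cloud-init user data at launch. The sketch below composes such a user-data script in Python; the bucket name and Chef URL are placeholders, and the real Cloudefigo scripts live in its GitHub repository.

```python
def build_user_data(script_bucket: str, chef_server_url: str) -> str:
    """Compose a cloud-init user-data shell script (illustrative only;
    bucket and Chef URL here are hypothetical placeholders)."""
    lines = [
        "#!/bin/bash",
        # 1. patch the OS on the spot -- no maintenance window needed
        "apt-get update && apt-get -y upgrade",
        # 2. fast Python installs plus the AWS SDK
        "pip install wheel boto",
        # 3. pull the (access-controlled) bootstrap scripts from S3;
        #    only this instance's IAM role may read them
        f"aws s3 cp s3://{script_bucket}/bootstrap.sh /tmp/bootstrap.sh",
        # 4. install and register the Chef client so the run-list
        #    (hardening, volume encryption, ...) gets applied
        f"bash /tmp/bootstrap.sh {chef_server_url}",
    ]
    return "\n".join(lines)

user_data = build_user_data("cloudefigo-scripts-example",
                            "https://chef.example.com")
```

The string would be passed as the `user_data` argument of the SDK's instance-launch call; cloud-init runs it as root on first boot, which is why the talk calls it "a great place to put your scripts."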
We use dm-crypt, a very common utility for Linux — you can use other utilities for Windows or Linux — but the idea here is where you store the keys. When you're working with infrastructure as a service, you have a couple of options. You can store them with the cloud provider; some cloud providers will even give you dedicated places to store keys. It's okay, but it's still vulnerable to some attacks — a malicious insider at the cloud provider, or a subpoena from a government or another court order. It's good enough for some organizations and might not be good enough for others. You can store them on premises, where you control the keys and who has access to them — but then you have to think about how you transfer the keys in and out of the cloud, which exposes them again. Every method has pros and cons, right? If you're a bank, you probably want to keep the encryption keys in your own hands and somehow transfer a temporary key to the cloud. You can also use a third party — a key escrow service; today there are companies that provide HSM-backed escrow services to hold the keys. That can protect you even from a government, at some point, because the government would have to go to both providers, which complicates things. But again, you need to move the keys between the different providers — every method has its pros and cons. It all depends on your threat analysis. In Cloudefigo we built the system to be very flexible: in what we show here we keep the key in a special place inside S3, the object storage, but if you're working with Amazon you can very easily migrate the application to keep the keys at a different cloud provider, like Google Compute Engine, or at a third party, or on your premises.
To make sure the keys get, I would say, good enough security — it's not bulletproof, but quite protected — we built a system, and Nir will explain how those keys work on object storage. >> I'll just translate "good enough": good enough equals annoying. Okay? We want to make it annoying enough for anyone trying to access the keys. So we put our keys on S3, the object storage on Amazon. What we actually do: when launching the instance, we generate a key, and we do not store that key anywhere on the instance. We create an ID — a combination of a few parameters we get from the instance, like the MAC address, the instance ID, and a few other things that don't change on the instance — and using that ID we create a bucket with that name. So it's a hash. It's pretty hard to guess, though you can reverse-engineer the code; again, it's annoying, but you can do it. But even once you get the name of the bucket that stores the key, you will face two more things. One, you'll have to access the storage with the same account that is running this instance. And two, there's a parameter we add to each request as it goes out to the storage — 512 bytes of additional data that we generated. So you can try to get access to that object storage; it would be annoying, really. Okay. So basically what we did is create a dynamic policy on S3 — a dynamic bucket policy tied to the account name and those 512 bytes. Let's just do the demo; I'll move to my spot. Here, first of all, is proof that we succeeded in connecting the server to Chef; it's not difficult. Here we can see the run-list, and our run-list contains volume encryption.
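The bucket-naming scheme just described can be sketched as follows. The exact recipe is an assumption — the talk only says the name is a hash over several unchanging instance parameters — so the field choice, separator, and digest length here are all hypothetical.

```python
import hashlib
import secrets

def key_bucket_name(instance_id: str, mac_address: str) -> str:
    """Derive the key bucket's name from attributes that don't change
    for the life of the instance. Hypothetical recipe, illustrating
    the idea; Cloudefigo's actual derivation may differ."""
    digest = hashlib.sha256(f"{instance_id}|{mac_address}".encode()).hexdigest()
    return f"cloudefigo-{digest[:40]}"  # S3 bucket names max out at 63 chars

def referer_token(nbytes: int = 512) -> str:
    """The 512 bytes of extra data sent with each request to the
    bucket; the dynamic bucket policy only admits requests that carry
    it (as a Referer header)."""
    return secrets.token_hex(nbytes)

name = key_bucket_name("i-0abc1234", "02:42:ac:11:00:02")
```

The derivation is deterministic, so the instance can recompute the bucket name at boot without storing it anywhere — which is the whole point: nothing key-related persists on the instance itself.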
You can have an Apache installation, whatever you need, in this script — that's the first part. The second part, as I mentioned: we have already encrypted the volume, but we need to know where the encryption key is, so I'll connect to the server, because I have no other way to know which volume I should connect to. Let's go to the secure instance and connect to it. [No audio] I'm getting into Cloudefigo, just for demonstration purposes. >> [ Inaudible ] >> What? >> [ Inaudible ] >> Oh, yeah. Is it better? Okay. So, here in the log I have a bucket name, like this. We'll need to look for this bucket on S3, so let's go to S3 and look for it. We're on S3 — as you can see, we had a lot of demos — and here we'll look for the bucket and go to its properties. If you look at the properties, you'll see under permissions that we added a bucket policy, and you can see that we granted access only to this specific bucket, and only with a specific Referer header. Yeah — I'd zoom in, but I can't. >> Sorry for that. >> Yeah. Anyway — Moshe? >> Okay, moving on. So now we have an instance that is controlled: we have all the software we want, we have launched the encryption, we have the keys. What do we do next? Is this instance ready to move to production? The basic question is: is it hardened enough, and does it have any vulnerabilities? This is where we run an automatic scan against the instance. The nice thing about the cloud is that it lets you automate the scan and then move the instance to production immediately — move it between security groups; all of those things can be done automatically. So what we do here is launch the scan and analyze the results automatically. I think we set the threshold at medium: if any finding comes back above medium severity, we don't move the instance to production.
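The gate just described — scan, then move to production only if nothing above the threshold was found — reduces to a few lines of logic. This is a sketch of that decision, not the actual Cloudefigo code; the severity names follow common scanner conventions.

```python
SEVERITY_RANK = {"informational": 0, "low": 1, "medium": 2,
                 "high": 3, "critical": 4}

def target_group(findings, threshold="medium"):
    """Decide where the instance goes after the scan: any finding
    strictly ABOVE the threshold keeps it in the remediation security
    group; otherwise it moves to production."""
    limit = SEVERITY_RANK[threshold]
    worst = max((SEVERITY_RANK[f] for f in findings), default=0)
    return "remediation" if worst > limit else "production"

# The demo's scan result: one low plus one informational finding,
# so the server is cleared for production.
group = target_group(["low", "informational"])
```

The actual group move would then be a single API call reassigning the instance's security group — which is why the whole check-and-promote cycle can run unattended.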
In that case it stays in remediation. Anything medium or lower moves into production, and the results are sent to us. >> Want to show how it works? Okay. So we'll go to the Nessus. We need to access the — okay. So basically we'll go to our Nessus, yeah. >> Please don't record the IP address. Don't access it. >> Is it secure? No, it's not secure. Believe me. >> A moment of relief — it's so hard to do live demos. We have the scan here. We can see we have one low and one informational finding, which means the server should move to production now. Let's look at it: we'll just refresh and see what's going on with our server. I'm scrolling down, and we can see the server is in production. Okay. Yeah. >> So we have a server, and it has moved to production. From now on, the launching phase — or the initialization phase, call it what you want — is finished. We've moved to production. A couple of things to remember in production environments. Permissions are lower: the server has now done everything it needs in the automation part, so we reduce the permissions. This is the IAM role after Cloudefigo finished configuring it. If you remember, the role at the beginning was really big; now it's a very specific couple of things — basically access to the S3 bucket where the encryption keys are. It's done dynamically; we reduce it dynamically. And then we move to ongoing management. What kind of ongoing management? In the cloud we usually use compensating controls. What that means here is, first, we check whether somebody has launched instances that are not managed by the infrastructure we created — we want to identify servers that somehow popped up somewhere that we're not managing, right? And second, we want alarms if somebody has managed to access a server and is trying to do something. How do we do those?
>> For the first thing — checking whether servers are managed or not — we pull the list of servers from Amazon and compare it to the list of servers registered in Chef. Bottom line: if we see a server on Amazon that is not in the Chef directory, it's probably not a good server; somebody launched it either by mistake or maliciously. The second thing we do is monitor Amazon CloudTrail. CloudTrail is the logging mechanism for every activity you do on the dashboard or through the API. So we look for specific things in those logs. Those of you who have been playing with CloudTrail know it's not well documented — again, it's new, nothing bad to say about Amazon, it's just new and there's not much community experience with it yet. Here's what we found out: if you try to do something on Amazon and you get denied because you don't have permissions, it can show up as two different log entries. One of them is "AccessDenied"; the other is "Client.UnauthorizedOperation" — which one you get depends on the service. So we look for both of those in the logs. This will be very useful for you if you play with CloudTrail — it's a great tool, it just needs more experience from us, the community. So let's take a look at the — >> Okay, so we'll get into the production role and see exactly what we have there. We'll go back to this long role, refresh it, and see what's going on. You can see that the policy now allows access only to a very specific object on S3; this machine cannot do anything else anymore. So let's do a test. I'll go back to the instance and try to — you can't see the text, but I promise you'll see it in a moment.
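Scanning CloudTrail for those two error codes is a simple filter over the JSON records. The sketch below shows the idea; the sample records are made up, but their shape follows the CloudTrail JSON log format, where a permission failure puts one of two codes in the `errorCode` field.

```python
import json

DENIED_CODES = {"AccessDenied", "Client.UnauthorizedOperation"}

def denied_events(cloudtrail_records):
    """Pick out records whose errorCode marks a permission failure.
    CloudTrail writes one of two codes depending on the service, so
    both are checked. (Sketch; real records would be read from the
    CloudTrail S3 log files.)"""
    return [r for r in cloudtrail_records
            if r.get("errorCode") in DENIED_CODES]

# Made-up sample in the CloudTrail file layout: two denied calls and
# one successful call (successful events carry no errorCode at all).
sample = json.loads("""
{"Records": [
  {"eventName": "ListAccessKeys", "errorCode": "AccessDenied"},
  {"eventName": "RunInstances", "errorCode": "Client.UnauthorizedOperation"},
  {"eventName": "GetObject"}
]}
""")
alerts = denied_events(sample["Records"])
```

Each match would then feed the alarm path the speakers demo next — e.g. a CloudWatch alarm or an e-mail notification.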
So I'm trying, with the AWS SDK from this specific instance, to access an IAM resource — I'll just try to list access keys, which is pretty much something that, as a hacker, I'd want to do. And as you can see here, I'm not authorized to do anything. But as Moshe mentioned, we also want an alert to show you. The thing is, as I said at the start, it takes time to synchronize and get the alert, so we have two options: either you wait 15 minutes, or I have a recording — we'll probably go with option two. >> Just to emphasize: it takes Amazon about 15 minutes from the moment you do something until it reaches the logs, and we don't want to wait 15 minutes, so we have a recording. >> So, a short demo: when we go to CloudWatch, we see our alert, which matches exactly what I just did — I tried to access the keys, and eventually I see the alarm here. From the same place you can also get an e-mail, or any other notification, whenever someone goes after the keys. Yeah. Well. [ Laughter ] [ Applause ] >> Thank you, guys — it's a methodical break. We have a little tradition here; you know what it is. How do I get shoutouts from the crowd? I can't do it anymore. Welcome to DEF CON. [ Applause ] >> It's good that I'm already past the demos. Let's just continue. [ Laughter ] >> You didn't see last night. Anyway, we also decided to validate what happens with machines that are not managed by Cloudefigo, because you can launch instances that won't be controlled. In this scenario we took the list from Chef and the list we have on Amazon, compared them, and output what is not managed. You'll see here — where is the browser? [ Laughter ] I'm fine, believe me.
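The managed-versus-unmanaged comparison just described is a set difference between two inventories. A minimal sketch, assuming the inputs are lists of instance identifiers — in reality they would come from the EC2 describe-instances API and the Chef server's node list:

```python
def unmanaged_instances(cloud_instance_ids, chef_node_ids):
    """Compare the provider's inventory with the Chef directory; any
    instance the provider reports that Chef doesn't know about was
    launched outside the managed lifecycle -- by mistake, or
    maliciously."""
    return sorted(set(cloud_instance_ids) - set(chef_node_ids))

rogue = unmanaged_instances(
    ["i-aaa", "i-bbb", "i-ccc"],   # what Amazon reports running
    ["i-aaa", "i-ccc"],            # what Chef actually manages
)
```

Anything in `rogue` is exactly what the Cloudefigo API call in the demo reports: a server that popped up somewhere outside the controlled lifecycle.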
So basically we have another API call in Cloudefigo; it returns one server — we could translate that to the name, but this is our server. I just wanted to show the output, that something shows up in the list. Your turn. >> I think we're wrapping things up now. The next phase — yep, cool — is termination. Launching, processing, and the last thing is termination. You need to terminate the server; unless you're doing some kind of backup to other servers, you need to delete the IAM role specific to that server, and — most important — remember that some attacks on the cloud target data that was thought to be deleted but is still there. Cloud providers don't like to really delete stuff, right? They put it on a shelf somewhere and wait for you to say, "Can you restore that for me?" And they say, "Yeah, we can do that — for a nice amount of money." But the problem is that your data is replicated across all of those places. So how can you make sure it's really deleted? There are a couple of ways; what we use is called crypto-shredding. The idea is that the data has been encrypted, and we destroy the key: once the key is destroyed, the data is useless. Again, it depends on your scenario and where you keep the keys. Here we did it from S3 — you could say S3 isn't enough, and you'd be correct; depending on your threat model, you might keep the key in a physical location and physically destroy it, and then your data is protected. It may still be kept somewhere, but it's useless. So don't forget crypto-shredding — shredding the keys to make sure the data is safe. That was the last phase, so let's wrap things up: what do we want you to take away from this? New software development methodologies and new infrastructure services are changing the way we treat applications. On premises, the production server was like the holy grail — you don't touch it.
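The crypto-shredding step can be sketched with a toy key store standing in for the S3 bucket — this is purely illustrative (no real crypto, no real S3 calls); the point is that destroying the key, not the replicated ciphertext, is what renders the data unrecoverable.

```python
class KeyStore:
    """Toy stand-in for the S3 bucket holding the volume key."""
    def __init__(self):
        self._keys = {}

    def put(self, key_id, key):
        self._keys[key_id] = key

    def get(self, key_id):
        return self._keys[key_id]  # raises KeyError once shredded

    def shred(self, key_id):
        # Crypto-shredding: destroy the key, not the ciphertext.
        # Against real S3 this would be a DeleteObject call.
        self._keys.pop(key_id, None)

def can_decrypt(store, key_id):
    """The ciphertext may still sit on a provider shelf somewhere,
    but without the key it is useless."""
    try:
        store.get(key_id)
        return True
    except KeyError:
        return False

store = KeyStore()
store.put("instance-key", b"\x13" * 32)
store.shred("instance-key")   # run at instance termination
```

Running the shred as part of the termination phase closes the lifecycle: whatever backups or replicas the provider keeps of the encrypted volume are now just noise.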
Periodic maintenance once every six months, pizza nights — people treated it like a holy thing they didn't want to mess with. In the cloud, a production environment can change five times a day — deleted, launched again. This is continuous integration. It's changing, and we need to change with it: you need to learn how to automate your security. If you don't automate your security, you'll simply be left out — they'll call you once in a while just for some kind of opinion, but they won't let you near the production servers. The IT department, I mean. So we need new thinking about how to automate security; this is the new challenge for software development companies. Hopefully we've demonstrated how to do the automation and what the different phases are. You can take it into different areas: you can use Cloudefigo or build your own — you have the right tools to do it. I think we have a couple of minutes for questions; if you have any, we'd be happy to take them. If not, come and check with us later — we'll be around, and you have our Twitter handles and other ways to reach us. >> Before the questions: we're also going to post an updated link. You can follow Cloudefigo, and you can also get into the website. We're looking for contributors; we need to improve our documentation and our features, so you're welcome to join. >> Thank you. [ Applause ] >> There was a question? >> One of the things you talked about is instances appearing that you aren't expecting. What about instances that don't die when they should have died? >> Instances that died that shouldn't have died? >> No — instances that didn't die when they should have. In other words, you're looking at all these instances, lots of instances, lots of roles, and you're expecting, say, these particular servers to only be around for 24 hours, but this particular instance has been around for eight months.
And you just aren't aware of that. >> I agree. We thought about handling this — it was one of the phases we considered — but then we took a look at Security Monkey from Netflix (or maybe it's Janitor Monkey — a monkey or a gorilla, security, something like that), and it's a pretty awesome tool. What it does is review the configuration and terminate all the unnecessary instances and roles — all the garbage left behind. So we said, okay, there are good enough tools, and we didn't go into that. But I agree it's definitely a challenge that needs to be addressed, because a lot of junk piles up. >> Well, because, I mean, as an attacker, using this tool, if I can jam your shutdown procedure, that's almost good enough. >> I agree — we're not solving every problem in the world here. Thanks a lot for the comment. Any other questions, guys? >> No more. >> Sorry — no more questions, Kevin? Thanks a lot, again. [ Applause ]