DEFCON 2015 08/08/15 - 10 AM TO 11 AM A HACKER'S GUIDE TO RISK "This text is being provided in a rough draft format. Communication Access Realtime Translation (CART) is provided in order to facilitate communication accessibility and may not be a totally verbatim record of the proceedings." >> So Paul is a goon. And he's a runner. He may not look it, but he does run. Um -- [ Laughter ] Paul also, um, is on Fitbit. How many people here have Fitbits? They are little crack devices, right? You look at them, like, I don't have enough steps. Ah, or my friends have too many steps or whatever. So um, Paul was training for a marathon, and so you have like the weekend warrior challenge and you challenge all your friends, because you're like, ah, jackass, you know you're training for a marathon and the rest of us are just mortals. They get a little cranky because, like, you know, Paul suddenly has 40,000 steps or 100,000 steps or whatever, um, and everyone else has like 20,000. So Paul's marathon is coming up -- this is in the past, imagine I'm telling this story in the past, um, Paul's -- [ Laughter ] I've had like half a cup of coffee from my hotel room. So I'm pretty cranky. So ah, ah, Paul's getting ready to run his marathon and, um, my wife and Annoy are constantly in challenges for, ah -- what, you want me to use a proper name? No -- yeah. Exactly, it's, it's -- it's a logical reference, it's not an absolute. So um -- [ Laughter ] Anyway, so Paul's getting ready and they decide, like, man, they are sick of losing -- [ Laughter ] By the -- oh, no. No. No! I love you, dear. Paul's getting ready to run the marathon, and Annoy and Heidi decide, like, screw this guy, he's not going to win Fitbit the weekend that he runs the marathon. Because he's been training for months. So out of spite, they're like, fuck this guy. So -- [ Laughter ] They decide they're going to walk a marathon and then some so that they can beat Paul, but they're going to do it all stealthy. So Annoy flies all of the way across the country to Maryland. They turn off Bluetooth on their phones so no one gets wise to it, and they go out on a 30-mile walk! And then like a grand reveal at the end of the day, after he's run his marathon, they're like, fuck you, Paul. [ Laughter ] So -- now the cute thing was, A, um, ah. >> Every fucking word of this is true. >> Every bit -- >> Every word of it. >> So -- >> And the part about fucking with me -- at the end of the day, I was like -- I was like -- eat that! [ Laughter ] And then all of the sudden it's, you know -- so a marathon is about 55,000 steps; with all the other stuff I had, I had 65,000 steps, and all of the sudden -- boom! [ Laughter ] >> So um, we made them badges for their non-a-thon that we ran. We felt bad we didn't make one for Paul. So Paul, here is your non-a-thon 2015 spiteful badge. [ Laughter ] And, and just so you know, at the end of the day, Heidi's um, um, um -- ah, Fitbit didn't register all the steps. She was like 5,000 short. So in order to make sure that she beat you, she put it on me and I walked around my house in the dark at 10:00 at night until I had the right number of steps. I swear to God, before they were uploaded, I'm like walking around my yard -- [ Laughter ] Can I come in yet? What's it at? 48,000. No! Just keep on cruising. >> Well, this, this is a great honor. Thank you very much. >> Oh, well, ah, thank you, Paul. [ Applause ] All right. So um, you've come to DEF CON and, and you want touchy-feely talks about risk, apparently.
So um, this talk was born out of rage. Ah, I remember, ah, I, I don't exactly know what vulnerability it was, but I was sitting around some night and I was reading Twitter, which is generally a bad idea anyway, but some vulnerability had just been announced and everybody was like, oh, this is terrible. This is the end of the world, all kinds of, the sky is falling, whatever, and it was some bullshit vulnerability. And I thought, man, we do a really lousy job of understanding and managing risk in this community. So I like kicked out like a rage CFP response to DEF CON, like screw this, risk is important, whatever. And I sent it in and figured they would just throw it in the bit bucket, and instead it got accepted. So um -- I have to go ahead and give the talk. Um, there will be a lot of stories here today. I start all my talks with the same basic premise, that you shouldn't believe anything I say. And that's a very general, broad statement, and I would encourage you to apply that mantra to every talk that you attend here at DEF CON or any other talk. Or any other conference. Um, in particular, I have no degree. I have no certifications. I'm a straight-out college dropout of the University of Alaska, Fairbanks. Um, anyone -- anyone go to UAF? Did you graduate? >> Yes! >> Holy shit man, really? [ Laughter ] It was cold. It -- stating the obvious, the man says it's cold. That's the quality of education you get from the State of Alaska. [ Laughter ] [ Laughter ] That's it, it's on the diploma. It's fucking cold. Um, I'm also a founder of The Shmoo Group and CTO of a place called the KEYW Corporation. The, the thing that gets me, I've been coming to DEF CON since DEF CON like seven and, ah, I've been speaking since like DEF CON eight. Which should tell you -- ah, the really frustrating thing is, it's very easy to convince people that you know something, but it's very hard to actually know the thing. Right? When I, um, wanted to get into this industry, I fancied myself like -- really smart, and I knew a lot of shit and I was going to tell people about it, but I really didn't know much. But I did have a desire to tell people shit, and decided I'm going to write a book. So I decided I wanted to write a book on 802.11 security. And, um, I went to O'Reilly's website and downloaded the "so you want to write a book" thing that they had. And you submit like an application to O'Reilly for a book idea. And the application is mostly marketing material -- like who's going to buy it, why would they buy it, why wouldn't they buy it, what are the competing books in this space? Give us a brief outline that's like a page long, tell us a little bit about yourself. And then we'll decide if we're going to give you some money to write a book. And so I sent all this stuff in to O'Reilly for 802.11 security. And they send back a FedEx package three weeks later. Congratulations, here's your contract, sign it, and you'll be authoring and can start writing the book. And I thought, oh, shit, I need to start learning something about 802.11 security. Like, because I knew exactly nothing. I actually owned no wireless devices at the time. [ Laughter ] So it's like, I should go out and buy some cards and figure this out. And, and that's honestly a lot of what you read and a lot of what you hear -- it's people getting these ideas and then getting up on-stage and spouting like random shit. Um, and, and -- [ Laughter ] You giggle, but it's true.
You need to challenge everything that you hear at DEF CON and other events, because the people that are up here are motivated by all kinds of reasons, be it ego, be it business, ah, all, all kinds of different things. To be super clear, we get really grumpy about product pitches when people get up on stage here, but in reality almost every person that gets on stage here sells a service, and that service comes out of their brain and their knowledge, and they get up here and they do cool shit and they look like rock stars. They get money for it! Right? Consultants get rich by getting on-stage to tell you things. So the motivation, while it looks altruistic, is oftentimes still economic. It's just like the product vendors. Don't think because we're on-stage we're not whoring ourselves out still. So there, I got a little ranty early. I'm sorry. [ Applause ] >> This talk is now canceled. [ Laughter ] >> So I'm just killing time before questions. Um -- [ Laughter ] Um, anyway, and, and at the end of the day, you need to hold me and you need to hold everyone in the community accountable. Um, there's been a lot of crazy bullshit that's happened in the community in the last couple of years and there's very little bullshit calling going on. Um, so my -- my wife runs ShmooCon, I help her out with it, it is a conference in D.C., anyone been to ShmooCon before? It is a smallish con, like 2,000 people, we sell out in like 20-some-odd seconds. So unfortunately, yeah, F5 -- hit F5 until you get a ticket. It's annoyingly popular is what I like to call it, but one of the reasons we started that con is because we were sick of coming to other cons and listening to people on stage spout total bullshit and nobody calling them out on it. So we decided we wanted to, ah, find a way to engage the audience better when they thought, man, this is bullshit, but I don't want to stand up and be that person, be like -- I disagree! Because if you're 50 rows back in this talk and you stand up and say I disagree, like the spotlight's on you, man, and you better have a good reason for standing up and interrupting. So what we did instead, we armed everyone with squishy foam rubber balls. And we called them Shmooballs and we encouraged the attendees to throw them at the speaker if they disagree. Um -- [ Laughter ] You have to inform the speakers of this ahead of time -- [ Laughter ] Ah, because they will get a little grumpy when projectiles start coming their way, kind of like, WOO! So it does turn into a little bit of Blues Brothers. Which is good, though. And so I would encourage you, that mind-set, ah, when you come to a talk like this, if you disagree, stand up and be that person and hold the person on-stage accountable. When I gave my first talk at DEF CON, I swear to god I was so nervous I thought the whole audience was going to stand up and call me out for being wrong on something, and it was going to be the worst thing that ever happened to me. Nobody called me out for anything. I have been called out like half a dozen times in the last 20 years, which is total bullshit, because I have flat out lied about things just trying to get people to say something. And they don't! So um, anyway, next. Ah, Fukushima. [ Laughter ] Ah, just -- segue, like, let's talk about nuclear disasters now. Because that's kind of what this talk has turned into. Um -- [ Laughter ] That was pretty good. Jokes are coming fast this morning. I'm feeling like the meth is kicking in. [ Laughter ] I'm glad there's a camera here.
Um, so let's look at some situations where we failed to understand risk appropriately. The first one is Fukushima. So Fukushima Daiichi was the number one, number one -- that's how many fingers I need, number one reactor. What's up, Dan? Thank you. Um -- [ Laughter ] Excellent. Um, it was a nuclear reactor in Japan; I think we're all familiar with the Fukushima incident. It's kind of a big deal. 9.0 earthquake, tsunami hits the nuclear reactor built on a cliff wall next to the ocean in an earthquake-prone zone. Question mark? Seems like a proper place. Like the settlers got there and like -- well, there's nice fish here, but we should also maybe build a nuclear reactor. Oh, that's a great idea. Um, as if Japan got settled. I know nothing of Japanese history. So I just invented Japanese settlers, by the way. So -- [ Laughter ] They don't exist. Thank you. Jesus. That was, that was some poor history going on right there. [ Laughter ] Anyway, the interesting thing from a risk perspective for me was the report afterwards called the disaster manmade. Right? And it wasn't 'cause the tsunami was manmade, but it was a whole series of bad decisions that occurred before and during and after the earthquake that led to the meltdown of three cores of nuclear material and -- all kinds of nuclear material being released into the environment. Right? It was not because of the tsunami, it was because they didn't prepare. They didn't build it up to spec. They didn't have processes and procedures in place. And then when the incident happened, they continued to make bad decisions, mostly in the "I think everything is okay" mode. Right? Like -- I know that we released some steam that had radioactive material in it, but we could have released a lot more. So you should be happy. You know? Like, smiling happy, the Domo-kuns are running around. How do you think they got Domo-kun? Not many people, anyway. All right. Next. Um, so Deepwater Horizon, another horrific incident that occurred. 11 people lost their lives. In 2009, ah, ah, Deepwater Horizon drilled the deepest well in history. 4,000 feet under the sea they started drilling and went for another seven miles until, ah, apparently they got to oil. I thought that's as big as the planet was and they would have got to China, but I was wrong. Again, really poor understanding of geography and history, but -- um, in 2010 the goddamn thing blew up. Right? Killed 11 people. Worst oil spill in the history of mankind. Ah, and, again, this was largely their own doing. The, the reports and the judge that was involved found that there was a disregard for known risks, ah, there was, you know, just wantonness and ridiculousness that was graphic. The judge didn't use "wantonness and ridiculousness," that was my own trademark phrase. There were leaks, there were safety tests not done. They were way beyond the realm of reality when it comes to drilling this well and knowing they wouldn't be able to cap it if anything went sideways at the bottom of the ocean, and it took months to clean up. And the whole time, BP's saying like, oh, I think, like, I don't know, 65,000 gallons are coming out a day. And everyone else is like, it's on the order of a million gallons of oil a day coming out. Like we're orders of magnitude away from each other. BP is like, nah, it's not that much -- the oil's 60 feet in the gulf today, coincidentally. The tuna must have exploded and all the oil from the tuna came out. You know?
Um, so BP, again, like, people all along knew that bad things were happening, but they didn't judge the risks appropriately. People lost their lives, we're still cleaning up oil in the gulf. Now, on the flip side of that discussion of Deepwater Horizon, I was standing in the Dunkin' Donuts one day, ah, during the oil spill -- this goes somewhere -- [ Laughter ] Um, and there's these kids in front of me, they are like 8 years old, and they're buying turbo hots. Right? So a turbo hot is like a cup of coffee with espresso in it, which is something an 8-year-old probably doesn't need, I'm going to go out on a limb and say no. And this woman's in front of me and she's shaking her head, and I'm kind of looking at her, and she's looking at me, I'm like, wow. That's crazy. She's like, wow, that is crazy. You know what else is crazy? I'm like, I love sentences like that, I'm like, what? Please tell me what else is crazy. And she starts talking about the gulf oil spill. I'm like, yeah, man, it's, it's really bad. Like there's a lot of oil coming out. She's like, yeah. And then -- ah, but -- when the gulf and everything around it collapses it's going to be terrible. I'm like -- the gulf and everything around it collapses? Tell me more. So -- [ Laughter ] In her mind, the Earth is like a layer cake. [ Laughter ] She explains this to me, that there is a layer of oil in the middle of the Earth that we're tapping into from all over the place. And all that oil, quote unquote, is going to come out from the gulf and then there'll be a big gap there and the water will cause the gulf to implode and then a tsunami will wipe out everything. She was legitimately concerned in this Dunkin' Donuts that any time in the next few days her life would end due to the Gulf of Mexico collapsing into the layer of oil that was no longer there. I'm not making any claims about "Fox News" here. Come on. [ Laughter ] But -- I'm like, you don't understand the risk either. I'm like, it was amazing to me, like the lack of knowledge. Her base understanding of the situation led her to make very poor risk decisions. And she was about to go full-blown zombie bunker mode, because she was convinced the Gulf of Mexico was going to implode. It's not a thing, to be clear. For anyone else that had that Earth view, I'm sorry to have shattered it for you. But the Earth is not made of layer cake. Oh, my god, I'm not even repeating you, man. Normally you repeat the question, but that was just dumb. [ Laughter ] Sorry. Todd. [ Laughter ] So the other side of the coin, the Takata airbag recall -- anyone had their car fixed yet from Takata? A handful of folks. Let's look at this. So Takata knew about the defects in their airbags in 2004 and they tried to cover it up. There's documented evidence they were doing tests after-hours to try to figure out how bad this was so that it wasn't on the books. So that, you know, they wouldn't force a recall, they wouldn't have to spend all this money to rebuild all these new airbags and all, all that kind of stuff. Um, 34 million cars ultimately affected by this recall, and that number continues to grow. Um, it might be like -- am I being transcribed in realtime for DEF CON? Wow, that's cool, and I apologize for the actual F-bombs that show up in words. Um -- [ Laughter ] -- like -- oh, are there indiscernible parts? Wow. I will try to speak with more enunciation and discernibility then. I'm going to watch myself over there now. You can all go, because I'm enamored with that technology right there.
[ Laughter ] LAAAAALAAALLAAA. Let's see it. [ Laughter ] [ Applause ] Oh, my god, oh, my god, I never lost it like this before. [ Laughter ] I love you, other person. [ Laughter ] This is why you wake up at 10:00 to go to a talk at DEF CON, to be super clear. [ Laughter ] All right. [ Laughter ] All right. All right. So -- WOOO! Math. Oh, the math -- the recall. So this is based on -- Jesus Christ. [ Laughter ] It's -- it's -- [ Laughter ] It -- [ Laughter ] Is this a test? Like -- [ Laughter ] Holy shit. [ Laughter ] Oh, my god. I'm going to be dehydrated from crying. Oh. All right. So the number of cars on the roads in a year in the U.S. -- these are made-up numbers that I Googled. Um. So you can disagree with them. Um, number of cars on the roads in the U.S.: about a quarter billion. Number of accidents per year: about 5, 5 million, give or take. It's really hard to find specific numbers about how severe accidents are and how often airbags are deployed. Back in like the '80s and '90s, when people were worried that airbags were killing us and causing cancer, there were a lot of good statistics around airbag deployment, but anymore it is not something that is tracked as well as we would like. So let's say, for the sake of argument, 100% of the crashes, ah, out there involve, ah, a Takata airbag. So the number of Takata airbags, potentially, every year in the U.S., ah, that are involved in a deployment is a quarter million. Okay? You can read this math. I'm not going to go through it real heavily. But -- I've gone through it a number of times. So, about a quarter million Takata airbags a year are deployed due to crashes. Now, the neat thing is not all Takata airbags are shit. And as it turns out the vast majority of them work as designed. So according to testing, ah, on average, you'll have between 0.04% and 0.08% of the airbags deploy and blast shrapnel into the face of the person who is involved in the accident. So let's say that, that's like 188 people a year, give or take, given my math. So of that quarter of a million people, 188 of them are actually hit in the face with shrapnel. Now, unfortunately, some of these people die anyway. Right? So the shrapnel in the face isn't that big of a deal because they caught a guardrail through the chest. And that's the reality of car crashes. Right? Like people frickin' die. This feels like the start of Fight Club. Right? [ Laughter ] Yeah. So you know, let's say 100 people a year are blasted in the face with shrapnel in the United States due to the Takata airbag deployment problem. Right? And that's worst-case. Like I have been deliberately bad at the math here, presuming the worst-case scenario for all possible things. So 100 people, probably less, affected by this. It resulted in the largest recall in automotive history. Costing easily over a billion dollars. Easily over a billion dollars. People losing their jobs. There is all kinds of money being spent on this that doesn't need to be spent. Hurts the economy, whatever. Is preventing 100 injuries a year worth the billion dollars? I don't know.
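If you want to sanity-check that back-of-the-envelope math yourself, here is a minimal sketch using the same made-up numbers from the talk; the 5% deployment rate is an assumption added to get from 5 million crashes to the quarter-million deployments on the slide, and none of this is real actuarial data.

```python
# Back-of-the-envelope Takata math, using the talk's made-up numbers.
crashes_per_year = 5_000_000      # ~5 million U.S. accidents per year (Googled, made up)
deployment_rate = 0.05            # assumption: fraction of crashes that deploy an airbag
takata_share = 1.0                # worst case from the talk: every deployment is Takata

deployments = crashes_per_year * deployment_rate * takata_share   # ~250,000 per year
rupture_rate = 0.00075            # midpoint of the 0.04%-0.08% failure range cited
injuries = deployments * rupture_rate                             # ~188 people per year

recall_cost = 1_000_000_000       # "easily over a billion dollars"
print(f"{deployments:,.0f} deployments, ~{injuries:.0f} injuries, "
      f"~${recall_cost / injuries:,.0f} spent per injury avoided")
```

Run with those assumptions, it lands on roughly 188 shrapnel injuries a year and something north of $5 million spent per injury avoided, which is the comparison the talk is driving at.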
Like I'm not here to make the claim that it is or it isn't, but when we make decisions about like the largest recall in automotive history, and we see people getting dragged up in front of Congress, and Congress critters berating them about, like, hey, your airbags are killing babies or whatever the hell's going on -- to be clear, your baby should not be strapped in a seat with an airbag -- um, you know, the question doesn't come up: well, it's only 100 people and we're going to spend a billion dollars on it, does it make sense? Because that does not get your ass reelected, as it turns out. What could we do to save lives with a billion dollars that isn't replacing Takata airbags? Right? 500 people a year die from TB. Die from tuberculosis in the United States. I bet you a billion dollars would help those people. 500 people a year die from accidental gunshot wounds. 40,000 people a year die from suicide in the United States. I assure you, a billion dollars would go a long way to suicide prevention in the U.S. And I feel really bad to say that to the 100 people -- [ Applause ] -- that get hit in the face with Takata shrapnel -- hey, man, I feel bad about the shrapnel, but for every one of you getting hit in the face, hundreds of people die from killing themselves. Sir? >> ( Speaker off microphone ). >> Well, certainly. So the, the, the comment is, it's about brand protection for Honda and other organizations that have been, ah, affected by this. Because no car company wants to come out and say, nah, not important. Like General Motors is not going to stand up saying, I know you guys are doing something about this, we're just refusing to, right? Because people are going to be like, I'm not buying a GM. Right? So, everybody else is doing it, all the brands have to do it. So it is absolutely about brand protection. It is not about what we think it's about, which is saving lives and stopping people from getting hurt. >> ( Speaker off microphone ). >> Did they? >> ( Speaker off microphone ). >> Well, that was ignitions. That was a different problem. That was people with too-heavy keychains. But anyway. [ Laughter ] So -- ah, I've now been running around here giggling at the transcriber. I should probably talk about risk in some meaningful way. So like really, what the hell is risk? So, the question -- we use the term a lot, but it's really hard to wrap your head around, like, what specifically is a risk? I get this question a lot. So I do like commercial consulting and help kind of arm people to make better risk decisions. And they will come to me quietly in a corner and be like, I don't know what an actual, like, risk is. Like I know things are bad and, and bad things can happen, but what does risk actually mean? So in the abstract, different organizations have different views of risk. Right? A bank, it's a situation where loss of funds or profits can occur. You know, manufacturing, it's where IP gets lost. In general, if something you care about is put in harm's way, it's a risk. Right? We'll get more specific in a minute, but I don't want you to get mired down in a bunch of real mechanical views of risk. I want you to be thinking kind of abstract right now. I want to set -- level set on some terms: technical versus business risk. So there's things that can be bad to us, that can hurt IT, that can hurt engineering efforts and that kind of thing, but don't necessarily have an impact to the business in any kind of material way. That's a technical risk.
A business risk is something that impacts business operations, that, you know, the front office would care about when they look at all the things that go on across the organization -- like, this is a risk and it has a material impact on my line of business. That's a business risk. Another phrase you'll hear is inherent risk versus residual risk. Inherent risk is, there's this bad thing that can happen; residual risk is, now I have compensating controls and the likelihood of that bad thing has been reduced by some amount. Right? So inherent risk is, I built a system, it is full of frickin' holes, people can own it. Then I take some compensating controls, I patch a bunch of things, and now the residual risk is much, much less. You hear these terms bandied around a lot and it is just important to know the specificity -- the specificity with which you need to use these terms. I'm not going to use them at all again today, which -- may disappoint some of you, but it makes me happy because I hate having these kinds of discussions. You just need to understand there are different types of risks that organizations care about. I'm scratching the surface here. There's a very deep rathole that you can go down. And we're not going to go down it. Um, there are a bunch of risk frameworks out there. So has anyone had to use, ah, NIST 800-30 before? Yeah. Was it fun? It's a good time? Good times. Those risk management frameworks are good times. No. They're not. Um, there are a lot of frameworks out there. Um, the NIST 800-30 framework has gone through a revision; it was 800-30, and then 800-30 R1. Um, it, it's actually decent in the sense that there's a concrete process you can walk through and you don't have to pay for it. Um, that's your tax dollars at work. Getting you something that you can use to materially go and, and implement a risk management program. Some of these other frameworks that exist out there through ISO and COBIT and things like that, you get to pay money to, to use. The NIST stuff is all free. And you may not agree with it, but it's a decent starting point. The NIST cybersecurity framework -- so I was involved in, in the creation of that. Did anyone attend the cybersecurity workshops that they had around the country? Um, probably not. NIST -- oh, one, one person in the back. Hello, sir. Do you remember me from such things as the NIST cybersecurity framework workshops? >> ( Speaker off microphone ). >> Yeah. He was there. Okay. He's like, yeah. Whatever. [ Laughter ] That's -- that's legit. Get the guy, yeah. Anyway. So ah, the NIST cybersecurity framework is interesting because they were very concerned. So this is actually a -- um, a mandate that the president gave NIST to create, ah, some sort of framework that companies and, and the, the public sector could use in order to try to measure and have kind of a common view of how they thought about cybersecurity risk. Um, so in the State of the Union in 2013, he got up on-stage and said, hey, NIST, you're going to do this thing. And NIST said, oh, shit. And, and what they did is they went out and they held all these workshops with, with critical infrastructure people, from transportation, healthcare, finance, that kind of thing, and said, what do you do today when it comes to risk management? And there was just this flow of information to NIST, and NIST had to distill it down to come up with a cybersecurity framework.
Now, the interesting thing that NIST was very concerned about in the creation of the CSF is they didn't want it to be something that you did. And you may ask why. Well, it's because they wanted to make sure that they weren't creating something that government agencies would be held accountable to and forced into doing, and, and, therefore, then creating regulation. Right? There's a concern, ah, in organizations like NIST that you create de facto regulation when you come up with a way of doing things and specific hurdles that you need to jump over, because everyone will point to that and say, you need to do that. And so NIST was very cautious. Has anyone read the CSF? Anyone? Yes? A handful of people. It is an interesting document, I'm going to touch on it more in a minute, but if you read it, the one takeaway you'll have the first time is, like, I don't know what to do. Right? And that's on purpose. That is 100% on purpose from the CSF. So you're not alone, ah, but there are ways to operationalize it. Then there's VaR, the idea of value at risk. Like, I stand to lose something of value; how much of it would I lose if this event happened? And so in certain domains like financial, manufacturing, things like that, we have hard metrics around how much I would lose if a bad thing happened. Right? Banks are great. Banks are risk management machines. There are safes to keep people from stealing the money. When the safes are good enough, people stop stealing money from the safe and they start shooting the teller instead. You know, you push the risk somewhere else. Sorry. That was a little morbid. Um, but -- the banks are very good at understanding risk and putting in controls. That safe is a control because there's a dollar amount they can assign to the loss of the stuff inside the safe, which coincidentally is exactly the same dollar amount as the dollar amount inside of the safe. Like it's a very easy linear connection that you can make. In, in cyber -- I'm going to use the word cyber and fuck you if you don't like it. [ Laughter ] So -- thank you, yes. There's a few of us that believe that cyber is an okay thing to use outside of IRC. Yeah. I have been canceled twice. Jesus. ASL? Um -- no one uses IRC anymore. I make an ASL comment, like eight people giggle, and we're all on lists. [ Laughter ] So -- um, for those that don't know, ASL means age/sex/location. Anyway, whatever. [ Laughter ] I don't know who I'm talking to anymore. So cyber VaR, cyber VaR is a way to, an attempt to, assign value to bad things happening in cyberspace. So it's much more of a structured approach: what am I going to lose, how much am I going to lose, what is the likelihood? It is very math-driven. And math is cool. It is okay. I'm down with it. Gabe Bassett, if you want to follow him on Twitter, does interesting stuff in this area where he is doing large-scale graph analysis of all of these bad events that occur. I mean, it's really fascinating. But it's not all that pragmatic for people like us. At the end of the day, when I walk into an organization and have this CISO pulling me aside saying I'm not really sure what risk is, I'm like, you don't need a lot of fucking metrics. I'm going to tell you that right now. You don't need to go do graph analysis. Okay? We're not going to start there. That's advanced, that's like 202 or 203 level kind of stuff. Um, there's also things like NIST 800-53, COBIT, ISO, which are control frameworks. If you kind of squint, you can turn them into risk management frameworks. I want you to be topically aware of these things.
You don't need to be intimately familiar with them, but if you're going to take my presentation later and look for a starting point, Googling these things is a good place. We kind of have our own risk framework, in, in a manner of speaking, and it's a very, um, ah, you know, quantified one. So the CVSS system, which is how we score vulnerabilities. The National Vulnerability Database and MITRE score vulnerabilities; it's an open standard that attempts to define the severity of a vulnerability on a scale of zero to 10. Right? So, there's 14 different metrics. They all get rolled together and you, you assign number values to all these things and then you pop a number out the back end. So um, if you've looked at a CVE you'll see, you know, there's all this information about the CVE, about the vulnerability itself, and then underneath it will have impact, where they have the CVSS scores. This is one random -- this is the first Apache hit that I found this morning, ah, you know, and you see the base score, the impact, and the exploitability score. So the impact and the exploitability basically get composed into the base score. And there's a fricking number, man. A high-resolution number. Okay? How many different values are there between 0 and 10 when you go to the tenths place? A hundred fricking values of risk in the CVSS. That is way fucking too many. You need five. Maybe. Wow. That was the voice of God getting all pissed off at CVSS this morning. Dropping F-bombs on CVSS. I feel passionate about some crazy shit, you might be thinking. Um, that's what I was thinking, actually. Um, so -- it, it turns out like you're giving people way, way, way, way too much to contemplate when you have 100 different levels of how bad something is. Right? But it feels good 'cause it's math. Right? Like you -- [ Laughter ] Math feels good. Like, simple arithmetic feels good. Like linear algebra was a horse's butt, but -- [ Laughter ] A few of you that took it, like, yeah. Yeah. That was bullshit, and then discrete math, fuck that noise. [ Laughter ] And the liberal arts majors are like, what's math? [ Laughter ] Um -- I'm a college dropout. I can say these things. Okay. So -- [ Laughter ] Geez -- so beyond the CVSS, there is also the CWSS. So just replace the V with a W and it turns out it is actually a little bit more interesting. CVSS is all about specific concrete vulnerabilities. Like you have to have a vulnerability in order to assign it a score. In this one, with weaknesses, you can just have like the concept of a, of a vulnerability -- a weakness. Like, this thing may happen. It may be there. And if it was there, how bad would it be? And so the CWSS is a little bit more abstract. Still gives you a number. Still a little high in the resolution of that number, but it's, I think, a very interesting way of trying to think about bad things that might happen. 'Cause vulnerabilities are bad things that are pretty much going on. Right? You know, there -- so let's talk about vulnerability disclosure for a few seconds, because I feel like it. So um, I feel pretty passionately about the release of information regarding vulnerabilities and weaknesses. Um, and, and when I say that, I mean, I think that the release of that information should benefit the end user as much as humanly possible. And disclosing that information to third parties like security product vendors usually just makes the security product vendors richer and doesn't necessarily, necessarily make the end user safer. Okay?
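(A quick aside before the disclosure thread continues: if you buy the "you need five, maybe" argument, a minimal sketch of collapsing that hundred-value CVSS scale into five coarse buckets might look like the snippet below. The cutoffs are illustrative assumptions for this sketch, not anything taken from the CVSS specification.)

```python
# Illustrative only: collapse a 0.0-10.0 CVSS-style base score into five coarse
# buckets. The cutoffs are assumptions for the sketch, not part of any CVSS spec.
def bucket(base_score: float) -> str:
    if base_score < 0.0 or base_score > 10.0:
        raise ValueError("CVSS base scores run from 0.0 to 10.0")
    if base_score < 2.0:
        return "very low"
    if base_score < 4.0:
        return "low"
    if base_score < 7.0:
        return "medium"
    if base_score < 9.0:
        return "high"
    return "critical"

print(bucket(4.3))   # the FREAK base score discussed later -> "medium"
```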
And in particular, there have been some very interesting studies that say 0-days that are being exploited by a limited number of people, or potentially no one, up until the point of disclosure have a very small blast radius of people they harm. Once the vulnerability is disclosed and the patch is out, people that write the bad stuff can very quickly weaponize that and put it out there and start doing bad things to everyone all at once. So the disclosure of the vulnerability is usually the tipping point at which bad things happen due to that vulnerability. So if -- wow. I don't even know. Like, I -- [ Laughter ] I was ramping up and I fell right off. It is like that drone, it just went POW! Right in the bottom there. Damn it. At the end of the day, I get really cranky about the third-party exploit sales that go on in this industry. Like really, really, really cranky. It makes me grumpy. Um, because it doesn't do anything to protect the end user. Right? Um, and so when we think about vulnerability in this day and age, when you're looking at a CVE, that CVE means bad things are happening right now due to that thing. CWSS and, and the whole system that MITRE has around weaknesses is us thinking proactively about bad things that might happen in the future, which is way more interesting and useful. Right? 'Cause if my leg's already been sawed off, there's not much I can do to reattach it, but if the other leg is about to be cut off, I can probably stop the zombie with a shotgun or something. Zombies wield chainsaws in my dreams. So just to be super clear about that psychosis. Okay. So we've talked a lot about it. Why is risk important to hackers? There's a ton of acronyms. I apologize, I felt it was important that you, you -- get exposure to them. Um, it turns out in our community, when we measure risk inappropriately, on either side -- like, I am on the Deepwater Horizon going everything's okay, or I'm in Dunkin' Donuts saying the world is going to collapse because all the oil is leaking out -- both of those things make you look bad. Right? To be clear. And, and in our world, when the latest vulnerability's released and, and we write blog posts and things saying the sky is falling when it's really not, or we say, oh, no, everything's cool, you know, ISIS is just a bunch of JV basketball players -- oops, no they're not, it turns out they have their own country now -- the people that say these things don't look good, and bad things can happen as a result of mishandling risk. So -- really? I have 10 minutes? Holy shit balls, got to go. [ Laughter ] Time to simplify. So all these frameworks that are out there I think are interesting. But they still don't answer the question: what specifically is a risk? And so when I think of risk and I think of threat -- I can have 15? You just didn't have an extra five fingers to show me? I got two, you got three, we're close. [ Laughter ] We have, we have -- I think, 13 minutes left now. Um. [ Laughter ] So -- um, when I think of these things, I like to have kind of a concrete syntax in which to think about risk and threat. This is of my own doing. Okay? When you Google this, you will find me and that's pretty much it. So if you disagree with this, feel free to throw some stones, but, um, I've used these models before in the past and they've held up, they've worked very well for me when I've worked with my clients and worked on different projects. So I would like to encourage you to think about using them.
So the risk syntax is: there's a likelihood, um, that, ah, ah, some cause will result in an impact. I'll get a little bit more specific in a second. The threat syntax is: an actor takes an action against an asset producing an outcome because of a motivation. We'll get into all this in just a minute. So just buckle up. It's okay. And the vulnerability is, is really just a weakness that can be exploited. I'm going to walk through a concrete example of this so you can see. So when someone asks you, what do you think a risk is, you may not like this, but you can at least point to it and say this is what, um, a risk is. So here's a risk. It is highly likely that an attacker will gain access to our database server, leading to the loss of all personal information in the database and heavily damaging our brand. So looking at the syntax that I proposed, here's the likelihood. Right? It is highly likely. It even has the word likely in it. That is the likelihood. Here is the cause. An attacker will gain access to our database server. And then here is the impact. We will lose our shit. Right? I like -- this is kind of fancy. The colors in the background and everything. I thought it was very nice, visually it popped for me. So hopefully you all enjoyed it. That was, that was like the big -- BOOF of my entire presentation. Threat, the same deal. We have actors. We have actions. We have assets. And we have outcomes. Now, if you were paying attention, which I hope you weren't, earlier in the syntax of threat I also had the word motivation. Um, I have clients that I deal with that care about the motivation of the people attacking their systems, and I have clients that just could give a shit. Right? Because bad guys are doing bad things and, you know, haters gonna hate, and they just don't really care why. So oftentimes when I go through threat modeling and do this threat process, I will not include motivation, because it ends up not being important. All right? So when we -- um, first of all, if you want to learn more about threat modeling, go Google my talk from DerbyCon on threat modeling, I walk through a very concrete process in which to model threats. It's the easiest way. I'm not going to put up a goddamn URL -- when there is a URL on a fricking PowerPoint slide, do you ever write it down, or do you say, I'll Google that shit? So I'm just going to encourage you to Google that shit and not actually provide you with a link, because it involves Irongeek's website and like a huge long line of expletives, because I put like shit and screw you in the talk. Anyway, all right. I swear, I did not swear at all at my Black Hat talk. So I'm just making up for it right now, I'm just dropping F-bombs with wanton disregard. There is that term again. Um, anyway -- Jesus. So here's a specific example. So there's this high-level risk: it's really likely people are going to break in to our stuff and they're going to steal all the information that we care about. Why? Well, there are external actors that can go do SQL injection, there are insiders that can log into stuff, there are nation states that want to backdoor things, and we have these specific vulnerabilities. Right? So the vulnerabilities and threats all get composed up into this risk statement that you can take and hand to people that don't really understand the technical piece, but they can understand, like, oh, there's bad guys that are going to get into our database and steal our stuff. Like, I can appreciate that as a risk. Right?
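If it helps to see that syntax as something you could actually fill in, here is a minimal sketch of the risk and threat templates described above. The field names and the example strings are just illustrations of the idea from the talk, not anything blessed by a standard.

```python
# Illustrative templates for the risk/threat syntax from the talk.
from dataclasses import dataclass

@dataclass
class Risk:
    likelihood: str   # e.g. "highly likely"
    cause: str        # what happens
    impact: str       # what it costs us

    def statement(self) -> str:
        return f"It is {self.likelihood} that {self.cause}, {self.impact}."

@dataclass
class Threat:
    actor: str
    action: str
    asset: str
    outcome: str
    motivation: str = ""   # optional; many clients don't care why

    def statement(self) -> str:
        s = f"{self.actor} performs {self.action} against {self.asset}, {self.outcome}"
        return s + (f", because {self.motivation}." if self.motivation else ".")

risk = Risk("highly likely",
            "an attacker will gain access to our database server",
            "leading to the loss of all personal information and heavy brand damage")
threat = Threat("an external actor", "SQL injection", "the customer database",
                "dumping every record in it", "they can sell the data")
print(risk.statement())
print(threat.statement())
```

The point of the templates is only that each noun gets filled in explicitly, so a non-technical reader can parse the resulting sentence without knowing anything about the underlying vulnerabilities.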
This, this taxonomy of risk, threat, and vulnerability is something that I think works pretty effectively at communicating risk in our domain. And again, this isn't something that is documented in NIST standards or anything like that, this is something I have learned from trial and error. Again, you may not like this, or you may disagree with it, or it may not work for you, but keeping it simple when talking about risk, and having a clear distinction between risk, threat, and vulnerability, I think is important. It's also important, um, how it gets composed. Right? So in the 2002 NIST 800-30, when you said, hey, how do you measure risk, they had, they actually had an equation: vulnerability times likelihood times impact. Right? So how likely is this thing to happen, what's the impact going to be, and how severe is the vulnerability. And you compose it together and come up with a number or something, I guess, right? Well, as it turns out, um, they thought, well, that was a little too simple, so we'll come up with something a little bit more complex. Because risk is nuanced, right? And so this is stolen from NIST 800-30 R1 from 2012, and I will say they are not the same thing. Right? This is a much more complex view of risk. I'm not saying the 2012 publication is wrong, I'm just saying it's a much more complex view, and if you're trying to sink your teeth into risk for the first time, I might encourage you to read the older documentation first and that will help you understand the newer documentation, which is a much more complex thing. Now -- yeah. You're familiar with the NIST series, sir. That's head nodding. You may be a fed. Um -- [ Laughter ] He does -- oh, you're black badging it, though, is that a black badge, sir? No? The old human badge. Why is it black? They had black human badges one year? This year they have black human badges? Jesus Christ, man, tell me what's happening, cats and dogs living together! All right. [ Laughter ] So um, the NIST scale I think is interesting as far as looking at likelihood. They have five -- like I said, you only need five numbers, you don't need 100. Right? Almost certain, highly likely, somewhat likely -- these are relatively subjective, but risk is frickin' subjective, so just pick five. I think what's very interesting is one of the guys I work with had this model, and he said, I only need three. And specifically: high likelihood is, it's happening to us already or happening to companies that look like us in our vertical. Medium is, it's happening to companies in other verticals, but not happening near us. And low is, it's just not frickin' happening, man. Like, because those are really the three states of shit that it can be in. Happening really close, happening far away, or not happening at all. I like that model, sir. If he'd let me use his real name I'd put it on it, but he won't, so -- that model is very interesting. So we went from five to three, we're making it even simpler. Impact, the same thing. You need five, you don't need 100. Okay? Very high, high, whatever. You can read the goddamn slides. Right? This is a great part of the NIST, um, ah, risk stuff because it's, it's very simple, it's very -- low resolution. You know, five; pick one and run with it. So when you compose all these things together -- you take NIST, you can take these kind of simple risk statements and threat statements and you can start to compose them through a process.
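To make the "five levels, maybe three" idea concrete, here is a minimal sketch of composing a qualitative likelihood and impact into a risk level in the spirit of the older 800-30-style equation. The numeric weights, cutoffs, and the three-level likelihood heuristic are assumptions sketched from the talk, not NIST's actual tables.

```python
# Illustrative only: qualitative likelihood x impact -> risk level.
LIKELIHOOD = {"very low": 1, "low": 2, "moderate": 3, "high": 4, "very high": 5}
IMPACT     = {"very low": 1, "low": 2, "moderate": 3, "high": 4, "very high": 5}

def risk_level(likelihood: str, impact: str) -> str:
    score = LIKELIHOOD[likelihood] * IMPACT[impact]     # 1..25
    if score >= 15:
        return "high"
    if score >= 6:
        return "moderate"
    return "low"

def three_level_likelihood(happening_to_us: bool, happening_in_our_vertical: bool,
                           happening_elsewhere: bool) -> str:
    # The colleague's model: close to home = high, far away = medium, nowhere = low.
    if happening_to_us or happening_in_our_vertical:
        return "high"
    if happening_elsewhere:
        return "moderate"
    return "low"

print(risk_level("very high", "high"))             # -> "high"
print(three_level_likelihood(False, False, True))  # -> "moderate"
```

The low resolution is the feature: with five (or three) subjective levels, two people in a room can usually agree on a rating in a minute, which is the opposite of arguing over a 4.2 versus a 4.3.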
Now, before I get to the process that you can go through, um, I do want to look at why we make bad risk decisions, and a few bad risk decisions we have made in, in, in our world. Um, misunderstanding any of those nouns leads to a bad risk decision. If you don't understand the assets that you have, if you don't understand the likelihood of an event occurring because you don't know the threat actor space and what the state of the art is with respect to attacks, if you don't understand the complexity of an action and how hard it is to exploit something -- you will make bad risk decisions. So in order to be good at risk management and understanding risk, the one kind of bummer of a side thing is, you need to kind of be almost an expert in a lot of stuff. Right? You need to know a little bit about a lot in order to make good risk decisions. And that's kind of hard, because in our industry we tend to highly specialize, especially early in our careers. You know, I'm a vulnerability analyst, or I reverse engineer shit, or I'm a network security guy or whatever -- that means you don't get the broad visibility you need in order to make good risk decisions. Right? There are consultants, like big, expensive consultants, that do risk management all the time; you think those guys are paid way too much goddamn money, and that's probably true. At the end of the day, these guys get paid money to know a little bit about a lot. So they can make good decisions around risk. So when we look at things like, ah, ah, you know, Deepwater Horizon, or we look at things like Fukushima, you can look at situations where, like, Fukushima, you didn't really understand the likelihood of the tsunami, you didn't understand the impact it would have when you built it on the side of a goddamn cliff wall on the ocean. You know, there's things -- their frickin' fuel tanks, the reserve fuel tanks, were washed out to sea. Right? Like the engines were still there, but they couldn't run the cooling system because all of the diesel ended up somewhere in the Pacific. You know? That's not hard to model. They're basically big bobbers with diesel fuel in them, right? Like floats. Diesel fuel floats on water, right? Question mark. So even when it's full it floats, and when it's half empty it super floats. Man, it is like a boat, you could put an oar on it and row across. These are not things you want to put near the ocean to keep the nuclear power plant secure, safe. No. Bad understanding. Conceptually, that's not hard for us to think about, but somehow, the people that built Fukushima Daiichi were like, that makes total sense. They misunderstood a few of these nouns and it had catastrophic impact in Japan. So let's imagine a vulnerability, if you will, that requires you to man-in-the-middle the victim -- ah, every single, single attack, or every single TCP session, needs to be man-in-the-middled. And even after the man-in-the-middle attack, I have to spend a lot of time post-processing the connection in order to decrypt what was inside of it, to see if there's anything inside of it that was of value. Okay? How would we rate this? Sir? Oh, five minutes. Yes. Six. Six minutes. Excellent. I always get a spare minute. That's -- the DEF CON goon mantra from Bruce. I would say, if I must actively man-in-the-middle you, that is a very unlikely thing to occur. Right?
Of all the connections that I have ever made on the Internet in the last 20 years, I would say very, very, very, very, very, very, very few of them have been man-in-the-middled. Right? The vast majority have not -- it's kind of hard to man-in-the-middle people, especially at scale. Like there are places, like, I don't know, like the DEF CON network, where maybe you could do it, but for the most part, most enterprise networks, it's pretty hard. They may do it on purpose, but for outside attackers it's quite difficult. And then you have to do it for every TCP session. You have to do an amount of processing, and I don't even know what I got until I've done the decryption and the post-processing and spent all this time. So there may have been a credit card number, or there may have been kitties. We have no idea what was inside of that. SSL everywhere means that I have to decrypt everything in order to find the credit card numbers; if I had to spend $100 for every TCP session to find your credit card number, I'm going to be a poor goddamn criminal. So how would you rate this? Do you agree with this rating? Very low, like incredibly unlikely that this is going to happen? Anyone know which one this was? Close. It was FREAK. Same damn thing. This is FREAK. This is the FREAK vulnerability. Right? This made the news, you all read about FREAK? Actively man-in-the-middle and spend money to decrypt it, but oh, my god, export crypto is bad, so we scored it in CVSS as a medium, as a 4.3. Somewhere decidedly in the middle. Right? Anyone ever executed a FREAK attack? Anyone been compromised by a FREAK attack? Excellent, why the fuck do we care? Right? Seriously! Like this is -- I think this is actually the one that caused me to like have the aneurysm and submit this talk. Like, this is total bullshit. Right? We care a lot about SSL. Sir? >> ( Speaker off microphone ). >> So yeah. There are situations where in the future maybe it'd be different. It's not different now, therefore I don't care. >> ( Speaker off microphone ). >> Look. I mean, we can argue, again, like when it's a 4.3, you can argue it should be a 4.2, should be a 4.1, should be a 4.5 -- when it's high, medium, low, very low, whatever, I can certainly say, like, we can all disagree -- this is like low to very low. Okay. Cool, sweet, doesn't even make the cut, doesn't even hit my Nessus scans. Fuck that, no. [ Laughter ] So let's talk about PCI for a second. And so this is the other thing -- there's a huge amount of requirements for PCI levied upon organizations to be PCI compliant, right? PCI is all about managing risk with credit cards, right? Well, hey, oh! So -- >> ( Speaker off microphone ). >> As it turns out, Heartland, Target, Home Depot, you know, totally PCI compliant, they must not have got owned, because they were managing risk appropriately. No, they were checking boxes appropriately. Someone else had measured risk years ago and said -- PCI, this looks like something that we should do to protect credit cards. There wasn't a lot of fraud at the time that we developed PCI. We didn't really have a lot of prior art, and so they developed a checklist that people had to stick to through -- hell and high water, goddamn it. You got to have the right key size, because we don't want nation states compromising your credit cards. Like, go crack me a 1024-bit key, bring it on, I'll wait. [ Laughter ] Right. So anyway. Um, but, but -- we get inundated with PCI violations all the fucking time. Right? You have, we -- key size this, key size that, no.
I don't really need it for the stuff that I care about! 1024 is fine! Like, oh, my god! It drives me nuts. People will actually complain. So for the ShmooCon website, we have people send us, like, here is the PCI checklist and I'm going to report you to the PCI council. First of all, when you report people to the PCI council, do you know what happens? SHHHHH -- nothing! Like -- [ Laughter ] Not a goddamn thing. Second, show me the threat actor! Show me the actual risk of me not having a strong enough key size to clear, I don't know, like 500 credit card transactions a year! It doesn't exist! Fuck you! [ Laughter ] Wow! All right. So it's, ah, time to wrap up. So um -- [ Laughter ] I have a day job, or I did, I may be applying for a new one, so -- [ Laughter ] Um, so what. So let's, let's real quick operationalize it. When you want to operationalize risk, we want to go through a process -- again, 800-30 I think is a reasonable place to start. Here's a complicated box, lots of boxes and lines. I will spare you. The first step -- don't worry about all this shit, um -- [ Laughter ] It's not important. Right? So -- [ Laughter ] I'm simplifying your risk management program right now! Okay? Just do step two. And throw the horns. How are you doing? Fine! [ Laughter ] Um, identify threat sources and events. Pick a threat modeling process. Use mine, use Shostack's from Microsoft, use something, but use a structured process to try to get coverage of the threat landscape. There's one thing I haven't talked about and I feel kind of bad, but it's important to not use your own personal bias to think about what's "bad," right? You need coverage of the landscape in order to understand what's actually bad, and not what your own bias is because you got owned or some other company got owned in some way in the past. Formal processes like what Shostack has published give you coverage of the threat landscape and allow you to appropriately threat model outside of your own body of knowledge. That's very important. So that your system isn't protected against only the things you care about, but the things that are real, okay? The same goes when you're going through and identifying vulnerabilities. There are formalized processes to perform architectural assessment, code review, pen testing, all of that kind of stuff that get you the coverage -- that's what annoys the shit out of me about pen testing, right? There's not a lot of coverage. I found holes and I got in, and I'm a fucking rock star -- but all I did was some theatrics about getting in, and it doesn't help you understand the breadth of the risk in your organization. Sorry to break it to you, but you need to do more than pen testing to identify vulnerabilities. And then, finally, use some common sense to determine how to address it. How likely is it to occur? What's the state of the art of the attack space? And then go figure out how bad it would be and then determine the risk associated with it. This is common sense. And a bunch of boxes from NIST. And it turns out this is what most organizations need to do to manage risk appropriately. That's fricking it. When you look at the breadth problem, like, how wide is wide? The NIST CSF core is actually pretty cool because it's like all things security in an Excel spreadsheet. Right? Identify, protect, detect, respond, and recover, and they have little subdomains underneath it. That's like what we do. Your job fits into that bucket somewhere. That's the coverage that you need to have when you think about risk.
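As a sanity check that the simplified process really is that small, here is a toy risk register following those steps -- identify threat events, identify vulnerabilities, then use common sense to rate likelihood and impact. Every entry and rating below is a made-up example for illustration, not a real assessment, and the numeric composition reuses the same five-level buckets sketched earlier.

```python
# A toy risk register: threat events + vulnerabilities, rated and sorted.
# All entries and ratings here are made-up examples, not a real assessment.
LEVELS = {"very low": 1, "low": 2, "moderate": 3, "high": 4, "very high": 5}

register = [
    # (threat event, vulnerability, likelihood, impact)
    ("external actor SQL-injects the web app", "unparameterized queries", "high", "very high"),
    ("insider exfiltrates the customer database", "no egress monitoring", "moderate", "high"),
    ("attacker MITMs export-grade TLS (FREAK-style)", "legacy cipher suites enabled", "very low", "moderate"),
]

def rate(likelihood: str, impact: str) -> int:
    return LEVELS[likelihood] * LEVELS[impact]   # crude 1..25 composition

for event, weakness, likelihood, impact in sorted(
        register, key=lambda r: rate(r[2], r[3]), reverse=True):
    print(f"{rate(likelihood, impact):>2}  {event} (via {weakness})")
```

Sorting by that crude score is the whole trick: the SQL injection risk floats to the top and the FREAK-style item lands at the bottom, which is roughly the prioritization the talk argues common sense should have produced in the first place.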
So I know I'm out of time, so I'm going to -- do I have time for questions, or -- one? No. No. Time for -- yes! I did -- no time for questions. Um, in general, um, you need to -- wow, I got the -- I got one minute left, so I'm going to wrap it up. [ Laughter ] Um -- >> ( Speaker off microphone ). >> Oh, it does not mean one minute. So in general, I do want to encourage you -- this has been a little ranty and I apologize, but I, I get fired up about things like this, because when I see us as a community making bad decisions, when the roadmap to success is super clear to me -- like, people have gone down this road, we don't need to reinvent it. Go, please, leverage previous processes, leverage prior art, and make better things happen. Don't be part of the problem. Be part of the solution. Okay. That's it, got to go, bye. [ Event concluded ]