>>DEF CON, I missed you. Wow. I mean, so, show of hands, how many people are here for their absolute very first DEF CON? Thank you! You know, I have been doing DEF CON for a long time, this is about my 15th year, and I've watched it grow up. I've watched it grow from a tiny little thing at the Alexis Park to where we've kind of overfilled the Rio, and the one thing that amazes me is just like, wow, people are just here having incredible amounts of fun and joy. And did you guys see this I-hack-pineapples guy? I don't know who you are but you're my freaking hero. Basically this guy's like, you're going to come to DEF CON and you're going to hijack the wi-fi? Yeah, I'm going to take that crappy piece of gear you bought and I'm going to break it, and if you're elite you can fix it, and if not, go get a refund. So. Awesome. So what am I here to talk to you about? I have a little bit of a tradition when it comes to my DEF CON talk. I never actually tell people what I'm going to talk about. Sorry, I don't know what I'm going to talk about. The subject of my talk is decent random number generators. So, my talks are weird for DEF CON because, you know, we're hackers, we like to break things. But that's not really just what I do here. I like to show what's possible. I like to show hey, we can scan the entire internet in a series of seconds. We can bust people who are violating network neutrality. There are a few funny things we can do over DNS. Actually it was great, I once had like Darth Vader dancing on stage, video streaming over DNS. Somebody told me you couldn't get high bandwidth over DNS. Somebody was wrong. But this is not the same thing as just showing what can be broken. So, let's talk about what we're actually going to talk about today, because we're going to be here for a few hours and you should have some idea that maybe there will be something you are interested in. We have a bunch of subjects.
We are actually going to talk about why random number generation keeps getting things owned year after year. I have a quote, which is: I would like to not be fixing the same bugs 15 years from now. I'd like to be fixing new bugs, but that would require dealing with the existing stuff. I want to talk about how we're sort of expecting users to recognize these huge strings of hex, and it's bullshit. I want to talk about browser 0day. There's a ton of vulnerabilities that are undiscovered and they all kind of follow the same form, and we have to start dealing with them. I want to talk about DDoS. We really are getting distributed denial of service attacks that are scaling at a level where really it's an open question if the internet's going to keep working. And how much did the NSA shit the bed anyway? But before any of that I want to talk about why hackers are useful, and I want to do that by discussing, of all things, hard drives. So let me give you a scenario. An attacker has compromised a system and put something malicious on the hard drive. How bad could it be? What could possibly go wrong? Well, you know, they might have put an auto-launching game in; you boot up your machine, it's got a rootkit. They might have replaced some core operating system files. Maybe they got really, really clever and they hit the master boot record, so the first thing the computer does right when it turns on is load some piece of malware. And in all of these cases it sucks, but at least you can, like, format the hard drive. The bad guy's gone. They're destroyed, right? Well, a funny thing happens when you start researching hard drives. So, little bit of a joke. What is the difference between a hard drive and a brick? The answer to this question is: given sufficient research, there is no difference. Never in my life have I done more expensive research. Oops, there goes 75 bucks. Oops, there it goes again. See, here's the deal. In theory a hard drive is a thing you put data on.
Bits go in, bits come out, the bits stay the same. In reality, we have what I like to call Xzibit's Iron Law of Computer Architecture: Yo dawg, I heard you like computers, so I put a computer in your computer so you can compute while you compute. [Applause] Now, what is great about this iron law is that the more you understand about computer architecture, the truer it gets. Pretty much everything that runs automatically is another little computer that is running on its own cycle, going and doing stuff. And of course, it's doing it fast. It's doing it trusted. It's doing it insecurely. The reality is that everything in your computer has become very generic. Your iPhone has seven ARM chips. The biggest lie about a computer is that it's just one, okay? Every device you have ever owned is a small network of mutually trusting, overly mutually trusting, devices. The hard drive is really just another computer with direct access to your system memory via specially designed physical and logical protocols. Now maybe you doubt me. Maybe you think, oh come on Dan, you might be exaggerating. So let me actually show you what actually happens when you, you know, log into your hard drive. I did this project with Travis Goodspeed, probably the single most entertaining hardware hacker in the world. He's the guy behind PoC||GTFO, Proof of Concept or Get The Fuck Out. And so we sit down one day. We buy a huge stack of hard drives and it turns out on the back of them there's a serial port. What happens when you go into the serial port? Well, you know, there's like an entire shell. It has documentation. It actually tells you what your commands are. Oh really, edit processor memory, and here are the arguments I need to know for that. Sweet. You can get some internal drive metadata. Like, there's no hacking here, it's a straight-up interface, right. You can go ahead and, like, learn about your main disk, you know, the public one. There's a second one.
There's an entire system partition. You can go ahead and scrape through that too. Read the data back out, actually take a look at the data. You know, when a hard drive is started up and initialized it runs a bunch of tests. This is a Seagate hard drive. It's actually Seagate's test output. My favorite part of course being WTF status. Sweet. Now you might be saying, but Dan, you had to, like, hook up special pins to the hard drive, and what? A hacker is going to teleport to your home and do that? No, no, no. There's a great command for messing with hard drives called hdparm. We're going to run help on it. We're going to grep for my favorite phrase in computer security, extremely dangerous. And you will notice nowhere does it ask for a password, but it totally will load firmware that replaces the system partition. So I wasn't kidding about making bricks. I actually have my lab notes from this and it reads like a wake. It's like, this hard drive: fate, platters exposed. Fingerprinted, and I think we actually have our fingers on the platters from where we dumped it. Predicted by Dan when he went to the buffer, probably repairable. Ripped by Dan with hdparm. Ripped by Dan when he set the serial port to 62500. You look at these things funny and they blow up. I tell you, it's the most effective security technology these hard drives have. So this eventually became a paper with a somewhat predictable title: Implementation and Implications of a Stealth Hard-Drive Backdoor. I'm kind of an unindicted co-conspirator on this. They did 10 months of work. They actually built malicious firmware that hides on the hard drive and corrupts the surrounding operating system; whether or not the drive's been formatted, it's owned forever. Kind of nice. Why do I tell you all of this?
There is something which I have been discussing privately for some time called Storage XOR Execution, which basically says: from an architectural standpoint, if you have code you are running that you know is vulnerable, that you are completely aware is exposed to skilled attackers, at minimum separate the piece of the system that remembers, that persists, that stores, from the piece of the system that computes, that operates, that parses. So sure, you can go ahead and provide some malicious input and own the machine. That might happen. But you can't keep your ownership without storing it somewhere where it might get discovered. When you actually spend time with attackers, one of the things they tell you is it's not about ownage. It's about continued ownage. It's about coming back in a year and your rootkit is still ready to go. You don't have to break your way back in. Okay, you can't even start discussing Storage XOR Execution if you don't have the paranoia of realizing all of these places to persist your attacks. You know, I'm one of these rare people in information security that is happy we are talking about APT, about advanced persistent threats. People say, how can we be discussing this, these attacks aren't advanced? I'm like, holy hell, we have people admitting we have vulnerabilities we have no idea what to do about. Great. Stuff's not working. Let's at least talk about that. We aren't even having it as a basic part of discussion that maybe when you use resources in a cloud, the underlying hardware should only be shared with other people in your cloud, like in your organization. Perhaps there should not be one piece of hardware for two different customers. Two customers, one hard... anyway. Works about as well as you think. This, this is the value of hacker engineering. See, what I like about hacking is that we have no delusions, okay? We are willing to see what systems are actually capable of from first principles.
In fact, you say something's impossible and we get to work. We are needed. Let me tell you, there's a lot of people out there who want to modify the internet who have no idea how it works. At least we have a fighting chance. You know what? I gotta say, I like hacking all the things, but I also like all of the things. Let me show you something very near and dear to my heart. This is a cat in a shark costume on a Roomba chasing a duck. Fuck yeah! The internet has changed things. I get this cat. There's no Bob Saget, there's no America's Funniest Home Videos, no wiffle ball hitting some kid in the nuts. There's just the kitty, okay, and that, that is awesome. Okay. One of my favorite artists, she goes on television, and she's told her music sounds awful. She goes, yeah, well, there's a market for dubstep. I should play violin while playing dubstep. She gets a half billion views on YouTube. She's a big deal now. And yeah, she does it while playing songs from Skyrim and Halo and Zelda. This appeals to my interests, okay. Lindsey Stirling has rocked it, okay. We live in a world that sucks less because of the internet. I believe in e-mail; like, I have never in my life received an interoffice memo and I hope I never do. I prefer Skype over plain old telephone service. I prefer online banking over having to sit for hours and hours in line at a bank. The internet has changed how things work. We could lose it. Like, there's a level of compromise out there where we don't get all these nice things, and there are some people who even prefer that. You gotta remember, the internet was not the first time we tried to create the internet. It's very Matrix-like that way. This is like the eighth attempt. Who here knows about Prodigy or Minitel or AOL? Like, we tried this a bunch of times. AOL spent like a billion dollars on modems one year. Turns out it was a good investment, they're still making 160 million a year off that stuff. Grandma loves her dial-up.
The reason the internet worked is because it was ours, okay? This was the playground of nerds. This wasn't biz dudes who made something for biz reasons. This was nerds who were like, I've got to send me some e-mail. And the key is, my God, it was cheap. No one on the internet ever said that you had to pay some portion of your revenues. People complain about paying for DNS names. What is the ratio between how much money Google makes and how much they pay for Google.com? I don't think there's a smaller number. [Laughing] 1 over Google. Contrast that with the Apple situation. Oh, you would like to be on our device? 30% gross, please. You know, it's a very, very different model. And the free internet has disrupted so much, and there are those who would like to disrupt it back. So here's the challenge. We're the guys who actually know how this damn thing works. We are the people who know that breaking everything is possible. We don't know that fixing everything is impossible. Now, you know, technically it's a bit of a game, but technically correct is the best kind of correct. So why do I do this? Why am I here? Because I see openness to the possibilities. I see that there's sheer joy in this community for understanding how things work. I see people willing to call bullshit on everything, especially the parts of security theory that are just clearly wrong. Because what? You think everyone else in engineering can screw up and we in security are infallible? We screw things up all the time. We just have way more attitude about it. So, that's kind of my, whoa, why did everything just go dark? Get back here. Come back. I believe in you. All right. So that's the grand world vision, kitties on Roombas. Let's nerd out a bit. I want to talk to you guys about random number generators. The generation of random numbers is indeed too important to be left to chance. What does it actually mean to need a random number generator? Many processes require a generator of unpredictable numbers.
That means if you get the last thousand, you can't figure out what the next one is going to be, and you can't go back and figure out what some of the older ones were that you missed. Now, just because we require a strong random number generator doesn't mean we're actually getting strong random number generators. Here's a couple of various talks: How I Met Your Girlfriend, yeah, that's a great Samy Kamkar talk. Yeah, there's many web applications failing at session management, weak randomness in Python, weak randomness in Ruby. What the hell is going on? Well, the deal is that with all of these various web frameworks, people don't actually log into them that frequently. One of my friends was an admin at a major e-commerce site. He's like, yeah... wait, wait, we have an interruption. >> Excuse me. So as you all know, we have a tradition here at DEF CON with new speakers. We have a new speaker with us this year, his name is Kamen Skow. Kamen Skow. I think that's the right way to pronounce this. >> No, I think you messed it up. Try again. >> Kami Cow? Kami Cow. Yeah, it's Kami Cow. Anyway. >> Come here. It's good to be back. >> Yeah, it's good to see you. >> Cheers. >> Cheers. Magnificent bastards! >> New speakers and really fucking old speakers. >> Oh snap. >> It overflows back to zero. [Laughing] >> Actually I missed a year, so. So the deal is this. You have literally one of the largest e-commerce sites in the world. >> We found a phone. Does anybody... >> I'd say drink, but I just did. Yes, you've got one of the largest e-commerce sites in the world, and you would ask, how many log-ins per second do you process? You would think it's like tens of thousands, or thousands. Hundreds? They're like, hundreds? Yeah. Seven. Seven times a second we have to accept a password.
And it's surprising, but think about what actually happens: you put in your password, you get a cookie, and that cookie is now password-equivalent; it's the thing that's shoved into the web app every single time you visit. That value ideally is a random value. Man, it would suck if you could go ahead, log in a thousand times, then predict the next thousand log-in magic numbers. Yeah, but that's actually totally what's happening. More recently, a few days ago: awesome attack. One of the... mmm, it was so beautiful. A guy by the name of Dominique Bongard reversed it and announced it on Twitter: an attack against a scheme called WPS. WPS exists so that you can relatively efficiently negotiate with an access point and find out what the passphrase is, so you can log in and use the internet. And they made up this incredibly ill-advised protocol for it. I mean just, like, impressively. But the one part of the protocol that wasn't actually awful was this piece where at the beginning you have some random value, and it's encrypted with a 128-bit key. It's randomly generated, but it's encrypted, and you know, breaking a 128-bit key is not going to happen. That's a 2 to the 128 work effort. That's going to take until the end of the universe. However, it requires you to actually have 128 bits of entropy, which none of the devices did. Device one had what's called an LFSR, a linear feedback shift register. They keep showing up in this space because they're the quickest and easiest way to get a stream of numbers. You start with a seed value and it keeps rolling and rolling and rolling. Well, the seed value in the first device was 32 bits long, so you try all 2 to the 32 combinations. Second one, another 32 bits. The third one just encrypted with zero. God. Beautiful attack. So, let's talk about why we really keep getting owned by random number generators.
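A scaled-down sketch of that first-device attack: an LFSR with a short seed can be brute-forced from a little observed output. The register width (16 bits instead of the 32 described, so the search finishes instantly) and the tap polynomial are illustrative choices, not the vendor's actual parameters.

```python
# Scaled-down LFSR attack. The WPS device seeded a linear feedback
# shift register with only 32 bits; here a 16-bit register (taps for
# x^16 + x^14 + x^13 + x^11 + 1) keeps the brute force near-instant.

def lfsr16(state):
    """One Galois LFSR step over a 16-bit state."""
    lsb = state & 1
    state >>= 1
    if lsb:
        state ^= 0xB400
    return state

def keystream(seed, n):
    """Emit n output bits (the low bit after each step)."""
    out, s = [], seed
    for _ in range(n):
        s = lfsr16(s)
        out.append(s & 1)
    return out

def brute_force(observed):
    """Try every nonzero seed until one reproduces the observed bits."""
    for guess in range(1, 1 << 16):
        if keystream(guess, len(observed)) == observed:
            return guess
    return None

secret_seed = 0xACE1
leak = keystream(secret_seed, 32)   # attacker observes 32 output bits
recovered = brute_force(leak)       # ...and recovers the whole state
```

With the real 32-bit seed the loop is 2^32 iterations: tedious on one laptop, trivial for anyone who cares.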
And the most important thing to realize is there's a huge amount of discussion in the cryptographic world about the absolute right way to build a cryptographically secure random number generator, and for the love of God, most of it doesn't matter. No one's getting owned because they used SHA-1 or SHA-256 or, hell, MD4. No one's getting owned because they used Hash_DRBG instead of HMAC_DRBG. They're not getting owned because they used the same entropy for too long and didn't mix in new entropy. We're not getting owned because we had one pool instead of 32 pools. Like, there's a disconnect between what's getting us owned and what we argue about. The fundamental truth is there's like a thousand different ways to build a cryptographically secure pseudorandom number generator, and they all have an incredibly high security margin. That is the fundamental truth. When you actually look at the systems in the field that are breaking, and they're all breaking, your two problems are: one, there's no entropy at all, and two, we're not even using cryptographically secure pseudorandom number generators. We're using LFSRs. We're using crap. This is what's actually happening. So let's talk about the no-entropy problem. What does a cryptographically secure pseudorandom number generator do? It takes a little bit of entropy: 128 little values that are either 0 or 1, that have no particular pattern. It takes that little tiny chunk and spreads it out into number after number after number, gigabytes, hundreds of gigabytes, whatever. The idea is that from an attacker's standpoint you can look at as much of this output as you like and you learn nothing about the secret. Your best attack is, unfortunately, to guess what all 128 bits of that seed are, and that's just not feasible. There are too many. Now, in order for this to be true you actually need 128 bits. What killed WPS is there actually weren't. Generally there were 32 bits, which is a totally feasible amount to break. In one case there was 0.
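The spread-a-little-entropy-into-an-endless-stream idea can be sketched as a counter-mode construction. SHA-256 here is just one of the "thousand different ways" that all work, an arbitrary stand-in rather than a specific recommended design:

```python
# Counter-mode sketch: a 128-bit seed stretched into an arbitrarily
# long stream by hashing seed || counter. The security rests entirely
# on the seed really holding 128 bits of entropy.
import hashlib
import os

class CounterPRNG:
    def __init__(self, seed: bytes):
        if len(seed) != 16:
            raise ValueError("need exactly 128 bits of seed")
        self.seed = seed
        self.counter = 0

    def read(self, n: int) -> bytes:
        out = b""
        while len(out) < n:
            block = hashlib.sha256(
                self.seed + self.counter.to_bytes(8, "big")).digest()
            self.counter += 1
            out += block
        return out[:n]

stream = CounterPRNG(os.urandom(16)).read(1024)  # or gigabytes, same seed
```

Seeing any amount of this output tells an attacker nothing useful; the only attack is guessing the 16 seed bytes.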
This isn't an obscure bug. Nadia Heninger did this incredibly beautiful attack a few years ago where she found one out of 200 RSA keys on the internet were actually badly generated. Now here's what's scary. She can find keys that are identical, but the method she used doesn't actually tell her if the keys are almost identical. If there's a single bit of difference, but otherwise everything is the same, Nadia's method wouldn't be able to find it. So, unfortunately, we have this really ugly situation where 1 out of 200 is a floor, and really it's something like 1 out of 50 keys on the internet that is bad. That's an incredible failure rate. Shouldn't /dev/random and /dev/urandom have prevented this? Because these keys are actually being generated out of what are supposed to be well-engineered kernel systems. Let's talk about why we actually really do like kernel random number generators. First off, when you have hardware in the machine, in your computer, remember, the biggest lie of a computer is that it's just one computer. When one of the other pieces of the computer talks to your CPU, literally what happens is a piece of the kernel is run. An interrupt is fired, the CPU jumps to a certain point. You get a timestamp at that point. You now have an interaction of someone else's clock with yours. You ever wonder what it means to actually collect random information? What it means to collect entropy? Like, what are you doing? At the end of the day, what you're doing is you are measuring a slow system with a fast system. You know, I can snap my fingers and I think I have a rhythm going. You think I have a rhythm going accurate to the nanosecond? No, I don't, because that's not how accurate human systems are. We are at best accurate to the hundredth of a second.
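The kind of attack described can be illustrated with toy numbers: when two moduli were built from the same bad entropy and share a prime, a plain gcd recovers it, while a key that differs anywhere gives nothing. (Heninger's real work ran batch GCD across millions of keys; the 7-digit primes below are tiny stand-ins for 1024-bit ones.)

```python
# Toy version of the shared-factor attack on badly seeded RSA keys.
from math import gcd

p, q1, q2 = 1000003, 1000033, 1000037
n1 = p * q1                     # key from device A (bad entropy)
n2 = p * q2                     # key from device B (same bad entropy)
n3 = 1000039 * 1000081          # a healthy, unrelated key

shared = gcd(n1, n2)            # the common prime falls right out
assert shared == p
assert gcd(n1, n3) == 1         # no shared factor: gcd sees nothing,
                                # which is why near-identical keys hide
```

This is also why 1 in 200 is only a floor: a key one bit away from sharing a factor looks perfectly healthy to this method.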
When you have a computer that says, I'm going to watch your mouse's movements, I'm going to watch inter-keystroke timings, you basically have a situation where a CPU that is running at billions and billions of cycles a second is measuring a system that's running at hundreds of times a second. And that's what's happening in this kernel situation. The kernel is seeing an event come in and it's hoping it's coming from some other computer that's running slower, or at least out of its control. Another great thing about kernel RNGs is that there's just one kernel, at least ideally. So you don't have some situation where one process is doing something right and one process is doing something wrong. Every process benefits from whatever randomness is available. It's shared. Finally, this is tricky stuff, and one of the whole points of an operating system is to take tricky stuff and put it into a box that smart people built right. So that was the idea. What went wrong? Well, all those devices that were generating these random keys were just started up. There was nobody at the keyboard. There was nobody at the mouse. There was no hard drive to be getting friction from air. There were no events coming in from the hardware. It was just sitting there. And if you have got no events, you've got no entropy to operate on. Now, it would be nice if CPUs actually had hardware random number generation, but for whatever reason that just consistently seems to be outside the purview of our modern CPU manufacturers. They just can't figure out how to do it. So how do you solve this problem? Like, really? The truth is you don't have to be perfect. You could fail every once in a while and you're still going to beat the one nine of reliability that we got out of RSA. That really sucks. You have to force the synthesis of events.
You have to actually force the situation where the device, whatever device it actually is, has something on a slower clock that's pinging the fast clock. And realistically what that means is using the real-time clock, which tends to be a separate chip, tends to run at some amount of kilohertz, and having that ping the CPU, which tends to be running at some amount of hundreds of megahertz to gigahertz. This actually works. Now, you can also go ahead and do some actions that take a hopefully non-deterministic amount of time: reading values out of memory, causing contention for resources. You want to do what's called whitening. That means you take your sequence of bits in pairs; if it's a 00 you throw it out, if it's a 11 you throw it out. The 0-to-1 transition is considered a 0. The 1-to-0 transition is considered a 1. This is what's called debiasing your data. I released a piece of code called DakaRand about two years ago. DakaRand was based on truerand from Matt Blaze, from way back; I think it was like 1996. If we had followed Matt Blaze's guidance in 1996, we would not have a single nine of reliability for RSA in, you know, 2012. That wouldn't have happened. Part of security is accepting that maybe there is some weird-ass environment where DakaRand fails, but is it going to fail as much as the status quo? No. No, it's not. So let's talk about the larger issue. Even worse than the fact that we don't have true entropy to seed this cryptographically secure pseudorandom number generator is the fact that we're not actually using the CSPRNGs at all. When you look at the languages that people use to write software: JavaScript has Math.random. LFSR. Ruby has rand. LFSR. Java has java.util.Random. PHP has rand. glibc. It's all crap. Why? Why are we doing this? Well, what ends up happening is every time someone complains about the nature of the random number generator that we have, we end up making a SecureRandom. It's not fixing the old broken stuff. We make some new API.
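The slow-clock/fast-clock harvesting plus the whitening rule can be sketched like this. It's a toy in the spirit of DakaRand, not its actual code; a real harvester also has to estimate how much entropy it is genuinely collecting.

```python
# Jitter harvesting plus von Neumann whitening: measure a slow event
# (a scheduler yield) with the fast clock, keep the noisy low bit,
# then debias -- 01 becomes 0, 10 becomes 1, 00 and 11 are dropped.
import time

def raw_bits(n):
    """Collect n raw (possibly biased) bits of timing jitter."""
    bits = []
    while len(bits) < n:
        t0 = time.perf_counter_ns()
        time.sleep(0)                        # ping the slower clock domain
        bits.append((time.perf_counter_ns() - t0) & 1)
    return bits

def whiten(bits):
    """Von Neumann debiasing over consecutive bit pairs."""
    out = []
    for a, b in zip(bits[0::2], bits[1::2]):
        if a != b:           # 01 -> 0, 10 -> 1; 00 and 11 are discarded
            out.append(a)
    return out

entropy_bits = whiten(raw_bits(4096))
```

Whitening throws away at least half the raw bits, which is the price of removing bias from an unknown source.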
Then we make it weird. It's never just like Math.secureRandom. It's like var a = new Uint8Array(1); window.crypto.getRandomValues(a). What? That wasn't what I was using when it was easy. Why are you making secure not easy and not default? Does that work anywhere else? No. So why not be secure by default? Why not, instead of requiring some weird-ass random API, take the broken APIs and make them not broken? So thus, we've actually been doing that. Ryan Castellucci works with me over at White Ops, and what we've been doing is systematically going through each and every one of the popular web programming environments, taking their crappy little math randoms and making them a shim for /dev/urandom. So we've gotten JavaScript, for Node and the browsers. We've gotten Ruby. We've made C's rand function actually random. PHP, Python, OpenJDK. It's not like it's that hard to write this code. We actually just need to write it and get to something people agree on: hey, maybe we should just have an easy-to-use path here. Now, there's going to be some runtime feature support, like being able to interrogate how good is this random number generator? Do we support fixed seeds? Do we support high speed? So I'm going to do something now, which I love, which people tend to call me playing chess against myself. Let's talk about what's wrong with what we've named Liburandy. Besides the name. >> [indiscernible] >> Ryan actually named it and I'm like, sorry, can't think of this code as any other name now. There are two reasons why generally you wouldn't use an approach like this. The first reason is testers tend to like predictable test results. So they tend to like situations where they can say, I want it to be a random stream, but I want it to be the same random stream, so the system will always behave in a predictable way that can be regression tested. Now, I actually never ever like this approach, and here's why. People say it's a two-line change.
Why do you need to go ahead and retest everything for a two-line change? And here's the answer. It's because there's a bunch of bugs that don't show up in the field, not because there isn't a bug, but because the house of cards fell in just the same precise way, so that when the memory corruption occurred, nothing bad visibly happened. And when the house of cards falls the same way every time, you never discover the bug. You only discover it when you do that two-line change that shouldn't break anything. But now the house of cards falls like this, and now everything blows up. I really don't like these sorts of fixed seeds because they allow the ugly bugs to stay unfound unless you go ahead and do a complete massive retest. Which you now have to do every time, because you keep getting burnt. It sucks. But people want it, so okay, we'll go ahead and do a special mode for Liburandy that allows you to say, no, no, no, I intentionally want my randomness to suck. Okay, fine, we'll set an environment flag. More seriously, /dev/urandom is actually slower than LFSRs. So you do have a situation where it's 4 megabytes a second out of /dev/urandom and 260 megabytes a second out of your crappy bad randomness. And when you actually look at the... so I was interested. I believe in being historical. Let's look at the historical record and see why did these APIs get screwed up like this in the first place? And at least in the browser case, they realized they had a bad random number generator, but they also had benchmarks. And they really didn't want to lose at the benchmarks. It's like, well, we're not secure, but we're not being benchmarked for security. Fuck. So even at 4 megabytes a second, it's still like a million random numbers a second. It's generally fast enough for everything, but not always. So this brings up the question: should we be using a user-space cryptographically secure pseudorandom number generator?
Now, it doesn't mean that we don't use /dev/urandom; we still mine it as frequently as we might like. If it happens to get some new bits of randomness from the kernel, great. Let's use them, let's integrate them in. But like I said a few minutes ago, pretty much every variant of a CSPRNG is still secure. So one of the fastest hashes out there is what's called SipHash. It comes from our buddy DJB. If you just SipHash a seed with an incremented counter, this actually works. It's 20 times faster than /dev/urandom on Linux. It's competitive with the non-cryptographically secure pseudorandom number generators. I guess what I'm trying to tell you is there's no advantage to sucking. You can be fast and secure. It's okay. Now, there is a possible improvement that we could do, and that is the integration of time. Specifically, what's called CLOCK_MONOTONIC. Now, while you can't get, for whatever mysterious reasons, NSA, a good hardware random number generator, you can at least get a timestamp. And it turns out you just need bits that are different. We've got this problem with what is called forking, where you have a process, you make a copy of it, and the copy has the same seed and the same counter value. Now you have the same entropy happening twice, and it sucks. If you integrate time, it turns out empirically it's a pain in the butt to get two things to happen at exactly the same CPU nanosecond. It's not that it can't, it's that it won't. That's a good thing. So you could actually, on a per-query-to-the-random-interface basis, say give me this time-stamped view of the randomness. So my preferred CSPRNG looks something like SipHash of the secret, a counter, the previous output, and what I call the shifted time. You don't just use the time directly, you shift it by some amount so the attacker can't just say, well, I know according to NTP that this value is going to be almost zero. You force there to be some fixed offset.
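A sketch of that construction: hash(secret, counter, previous output, shifted time). SipHash isn't in Python's standard library, so keyed BLAKE2b stands in, and the "shifted time" is modeled as the monotonic clock plus a secret fixed offset; both substitutions are assumptions for illustration.

```python
# Minimal sketch of a fork-resistant, time-mixing user-space PRNG.
import hashlib
import os
import time

class TimedPRNG:
    def __init__(self):
        self.secret = os.urandom(32)                      # 256-bit key
        self.counter = 0
        self.prev = b"\x00" * 32
        self.offset = int.from_bytes(os.urandom(8), "big")  # "shift"

    def read32(self) -> bytes:
        # Mix counter, previous output, and offset monotonic time.
        t = (time.monotonic_ns() + self.offset) % 2**64
        msg = (self.counter.to_bytes(8, "big") + self.prev
               + t.to_bytes(8, "big"))
        digest = hashlib.blake2b(msg, key=self.secret,
                                 digest_size=32).digest()
        self.counter += 1
        self.prev = digest        # feed each output back into the next
        return digest
```

Because the clock is mixed in per query, two forked copies of this state diverge as soon as their nanosecond timestamps differ, which in practice is immediately.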
I'm not doing this yet in Liburandy, and here's why. Reasonable people can disagree about whether CSPRNGs should be in user space or kernel space, but we all hate LFSRs, so let's at least do something which causes those damn things to die. Another change is maybe we don't use SipHash. It turns out, because of quirks, you can go faster with BLAKE. You can go really faster with Skein. Both of them are alternate hashing systems. We've found things that are twice as fast. So really the right answer at the end of the day might not be do a user-space CSPRNG; it may be, fix up the really weird one inside of Linux. Which is obsessed with, you know, mixing stuff in multiple pools, etc., etc. The problems we don't have. So, that's kind of the soup to nuts on what's going on with entropy generation. But there's another problem, around representation. Apparently you are supposed to do something with D3BO7384D1NGJU42MA1$JH-89 and whatever the hell that squiggly stuff is. What the hell is that? Stop puking bits. We are lying to ourselves if we think users can do anything with that. They are lost. The problem is that computers actually need humans to recognize and to remember large amounts of bits, and it's hopeless, okay? We have hardware acceleration for human faces, and even then we can maybe get about 16 bits. We may be, if we're absolutely at the edge of our capability, able to differentiate 65,000 people. It's not that much, and computers want more, so much more: 24 to 128 bits. They want recognition. Have you ever seen this before? And they want repetition. Hey, give me that long-ass password. And the standard stuff doesn't work. We did not evolve to deal with hex or Base64 or Base58 or squiggly SSH randomart. It's just not how we're built. Now, I've looked at this problem before. In some work I call cryptomnemonics, I went ahead and took 512 male names, 1,024 female names, 8,192 last names, and I represented entropy by married couples.
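The arithmetic behind the married-couples scheme: 512 x 1,024 x 8,192 choices is 9 + 10 + 13 = exactly 32 bits per couple. A sketch with generated placeholder name lists standing in for the real curated ones:

```python
# Encode 32 bits of entropy as a "married couple": male name (9 bits),
# female name (10 bits), last name (13 bits). Placeholder strings
# stand in for real name lists.
MALE = [f"male{i}" for i in range(512)]
FEMALE = [f"female{i}" for i in range(1024)]
LAST = [f"last{i}" for i in range(8192)]

def encode32(value: int) -> str:
    assert 0 <= value < 2**32
    m = value & 0x1FF               # low 9 bits  -> male name
    f = (value >> 9) & 0x3FF        # next 10 bits -> female name
    l = (value >> 19) & 0x1FFF      # top 13 bits  -> last name
    return f"{MALE[m]} and {FEMALE[f]} {LAST[l]}"

def decode32(couple: str) -> int:
    m, _and, f, l = couple.split()
    return (MALE.index(m) | (FEMALE.index(f) << 9)
            | (LAST.index(l) << 19))
```

The point isn't the bit-slicing; it's that a couple's names are something a human can recognize on repeated exposure, where eight hex digits are not.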
I'm sorry, totally heteronormative, but actually really diverse. Julio and [indiscernible] and Manuel and Twila and Bessie. Definite range of people here. This actually works as a way to get yourself used to some pattern of bits. You do have to show it every single time you interact with something. You can't just show the entropy representation when there's a problem. It's like only seeing someone's face when there's a problem, and otherwise they're in a mask. It doesn't work that way. That's not how the brain works. It needs spaced repetition to remember. Now, can we do better? Humans do have memory capacity. It's just not for arbitrary bits. Like, Homer's epics are enormous. There's kilobytes of entropy in there. The deal is that we remember objects, we remember stories, we remember narratives far more than we remember arbitrary ones and zeros. So if you can represent ones and zeros as these sorts of narratives that humans are built to remember, you actually have something. Heck, maybe you could have a password and have it spell checked. So what we've been exploring is using triads of adjectives, nouns and verbs, having these be selected to be maximally distant from each other, and encoding whatever entropy someone needs to recognize or repeat in these stories. Now I haven't been doing this work alone, I have been working with my co-worker Ryan Castellucci over at White Ops. And it's his code. It's his work. He's the one who should be talking about it, so he's going to come on stage and we're going to talk about how he did this. [Applause] >> So Ryan, pick up a mic. >> Hi guys. How's it going? >> So, I just have to say I've known Ryan for many, many years. He recently started working with me at White Ops. I've never had a guest come up in the middle of my talk, and there's no one I would rather have be here. >> Thank you. >> So Ryan, tell me what we're looking at here. >> Okay. So we have up on the slide a randomly generated sample. >> It's like 5e, 4D.
It's like a bunch of bits. >> Right. There's a couple of algorithms that are combined together to generate the encoding we have there. As you can see, there's 4 groups of three words. Each one is an adjective, a noun and a verb. >> This is like macho, acid, answering. Rustic, cable, fetching. >> Right. And we have a known amount of entropy there. So what we can do is spell check for passwords — well, pass phrases. So unfortunately we don't have a live demo; I could not get the code to work fast enough for that. I will fix that soon. The code will be released. It is awesome. >> But we actually do sort of have a view of what's going on here. >> Yes. That is — it can fix far worse than that. But as you can see: transpositions, deleted spaces, replaced characters, added letters, deleted letters, random garbage symbols — it can fix all of this. >> And the words are out of order, even. >> Yes. It is also order insensitive, because people will commonly mix up the order of words when they're trying to remember something like this, and we can fix it, so we should. >> Interesting. So you don't have to remember if the acid is macho or rustic, you just have to know acid and macho and rustic in whatever order. >> Exactly. >> Okay. So let's talk about how you actually did this. >> Okay. It's kind of complicated. I have code implemented in JavaScript and Python. It's on GitHub, currently private. That will be changed soon. So we have two major components. There's a combinadic, or combinatorial number system, encoder. That is an obscure but really neat trick where you can take an arbitrary number and encode it as a series of non-repeating symbols. So, as the example shows, if you have multiple A's, you know, that's not going to work, because you can't use the same symbol more than once. >> This is like you have an encoding of possible letters from A to Z. You could have BADFGC.
>> You are picking N of them, where N might be 3 or 4 or 5 depending upon how large a range of numbers you want to encode. Ah, and this is fairly efficient. You can do this fast. You don't have to walk through all possible combinations. You can generate a specific numbered combination, and that's your encoding. >> And so you're actually able to take arbitrary bits of input and map them to this scheme that says it's these numbers, and because the numbers are in the combinatorial system they can show up in any order. >> They can show up in any order. Yes. >> OK. >> For decode we just have to sort them. >> All right. So talking about the decode, you're not having it be raw numbers, you're having it be words. >> Right. So there's another step here, which is our word mapper. Our word mapper has a dictionary file. In the implementation we were showing in the last slide we actually have three dictionaries: one for nouns, one for adjectives, one for verbs. And we actually interleave the three lists — X nouns, X verbs, X adjectives — but it gets a little messy. Anyway. So our dictionary has our words in it, and variant forms of the words, because, you know, it makes things easier for people. So for the verbs, for example, it doesn't matter if you say forbidding or forbidden or forbade or whatever, they're all forms of the same verb. We know that. We canonicalize them, and in encoding we just take the canonical form, which in this case would be forbidding. >> So we're basically taking all of the common errors of password input and we're saying those common errors no longer count. We're going to correct for them before we even let them get into the decoder. >> Exactly. And I've got some crazy Python code using something called NodeBox, which is a linguistic suite that we use to generate all of the verb forms. And then we have a recursive algorithm that goes through and maximizes the edit distance between words.
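The combinadic trick described above can be sketched in a few lines: the combinatorial number system writes any integer n as C(c_k, k) + … + C(c_1, 1) with strictly decreasing indices, which gives you k non-repeating symbols, and sorting before decode makes the scheme order-insensitive. This is a generic illustration of the number system, not Ryan's actual encoder; the function names are made up.

```python
from math import comb

def combinadic_encode(n: int, k: int) -> list[int]:
    """Encode integer n as k strictly decreasing symbol indices
    (combinatorial number system: n = C(c_k,k) + ... + C(c_1,1))."""
    out = []
    for i in range(k, 0, -1):
        c = i - 1
        while comb(c + 1, i) <= n:   # greedy: largest c with C(c, i) <= n
            c += 1
        out.append(c)
        n -= comb(c, i)
    return out

def combinadic_decode(indices: list[int]) -> int:
    """Order-insensitive decode: sort first, so users may hand the
    symbols back in any order."""
    idx = sorted(indices, reverse=True)
    k = len(idx)
    return sum(comb(c, k - j) for j, c in enumerate(idx))
```

Each index then selects a word from the dictionary; because the indices never repeat and decode sorts them, "acid macho rustic" and "rustic acid macho" recover the same number.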
If you're not familiar with edit distance — I don't remember the specific variant I'm using, but each character insertion, deletion, alteration or transposition of adjacent characters counts as an edit. So we want to have at least two edits between any words possible, so that a single typo cannot possibly result in a valid but wrong word. >> Nice. Basically computational linguistics as applied to security. Basically a path out of using leetspeak as a security technology. There's no more of this like, I used an at sign, I'm secure. >> We know exactly how well that works, and if you don't, go talk to the KoreLogic guys. They have a contest that I think might change your mind. >> Nice. So it seems pretty cool. You think this is going to let us have passwords generated server side? >> That's the idea. We generate — not necessarily server side, but computer generated pass phrases that there's a reasonable hope of people understanding and remembering. >> Yeah. It always seemed to me like victim shaming. We make the user generate the password so we can blame them when their password sucks; meanwhile we have no idea how to build them one. >> Yes. I remember a few months ago when we first started working on this, you know, Dan came up with some examples, and I still remember most of what we had. We had a flat monkey staring and we had a green house jumping. And there was another couple which I have forgotten, but the point is I have looked at these maybe twice ever and I still remember them. >> So we were talking last night about what the implications are for allowing more key stretching. How do you explain what key stretching is and why this makes it a more feasible thing? >> Oh yes, this is really cool. So key stretching is the idea where, in addition to the randomness in the pass phrase as a source of difficulty in cracking it, we also just make the hashing slow as hell. You've probably all heard of bcrypt or PBKDF2 or scrypt. Yes, thank you.
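The "at least two edits apart" rule above is easy to make concrete. The sketch below uses restricted Damerau-Levenshtein distance — Ryan says he doesn't remember which variant his code uses, so this is an assumption — plus a brute-force check that a wordlist is well separated.

```python
def edit_distance(a: str, b: str) -> int:
    """Restricted Damerau-Levenshtein: insertion, deletion, substitution,
    and transposition of adjacent characters each count as one edit."""
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        d[i][0] = i
    for j in range(len(b) + 1):
        d[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
            if i > 1 and j > 1 and a[i - 1] == b[j - 2] and a[i - 2] == b[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # transposition
    return d[len(a)][len(b)]

def well_separated(words: list[str], min_dist: int = 2) -> bool:
    """True if every pair of words is at least min_dist edits apart,
    so one typo cannot turn a valid word into a different valid word."""
    return all(edit_distance(w, v) >= min_dist
               for i, w in enumerate(words) for v in words[i + 1:])
```

With the two-edit guarantee, "machi" corrects unambiguously to "macho" because no other dictionary word is within one edit of it.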
And this is somewhat limited, because if you're spinning for five seconds and it takes you three tries to get your password right, you are going to just throw the computer through the window. But if we have spell correct, even if you make a few small typos it works the first time. It's fantastic, and people generally aren't going to mind nearly so much waiting for their log in to complete as they mind waiting to see if they typed their password correctly, and if not, having to re-enter it. I had a high security thing I built a while ago. It would spin the CPU for 60 seconds, and it was a threshold scheme, so we had like five of us having to type passwords, and when somebody mistyped it, it sucked. But if you knew it was going to be right, one minute wouldn't even be that bad for disk encryption. >> So we're basically dealing with a common failure mode of what would otherwise be a useful and effective security technology. >> Exactly. >> The more effectively a security technology works, the more we can use it. >> Exactly. >> Cool. All right Ryan. Thank you very much. I really appreciate you building this out. >> Thank you. >> [indiscernible] >> Do we have a drink? Someone bring a drink. All right. >> I will be back, I have somewhere to go. >> So, there's a major use case to be considered with a lot of this stuff. Speaking of work from a previous talk, a few years ago I did something called Phidelius, and Phidelius is basically a trick that allows you to take a pass phrase, any string of characters, and use that to seed any asymmetric system. So you can have a pass phrase for an SSH key, an SSL certificate. Basically what I did was say a pass phrase will seed a pseudo random number generator, and a random stream of bits is the input to all key generators. So you hook the two together, and you get what's called a password authenticated key exchange scheme. It always works. This was relatively obscure two years ago. This is becoming straight up mainstream now.
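The two ideas just described — slow the hashing down, then feed the stretched passphrase into a deterministic bit stream that drives key generation — compose naturally. This is a minimal sketch, not the actual Phidelius code: PBKDF2 from the standard library does the stretching, and a counter-mode keyed BLAKE2b expansion stands in for the PRNG that a real key generator would consume.

```python
import hashlib

def stretched_seed(passphrase: str, salt: bytes,
                   iterations: int = 1_000_000) -> bytes:
    """Key stretching: make every guess cost ~iterations of hashing.
    PBKDF2 is stdlib; hashlib.scrypt is the memory-hard alternative."""
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, iterations)

def keystream(seed: bytes, nbytes: int) -> bytes:
    """Phidelius-style trick: the stretched passphrase seeds a deterministic
    byte stream, and that stream is what the asymmetric key generator reads.
    Same passphrase in, same keys out, on any machine."""
    out = b""
    counter = 0
    while len(out) < nbytes:
        out += hashlib.blake2b(counter.to_bytes(8, "little"),
                               key=seed).digest()
        counter += 1
    return out[:nbytes]
```

Note the stream is fully determined by the passphrase and salt — that's the feature (no key file to lose) and also exactly why low-entropy passphrases are so dangerous here, as the next section on Brainwallet shows.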
There's a scheme called Brainwallet that allows you to have a word or phrase going to a Bitcoin address. There's a scheme called miniLock that has a word or phrase going to, you know, the rough equivalent of a PGP key. Not actually PGP, but still. These systems are incredibly vulnerable to having a low amount of true entropy, because by definition you're publishing your public key. It's almost like it's called a public key. So this has been a straight up expensive problem in Brainwallet. Tens of thousands of dollars worth of Bitcoin have been stolen. What we need is a way for people to remember more entropy. The idea is that storing all of these bits on a hard drive ends up being a screw up. People want something they can hold in their head. If we have these sorts of representation schemes that are self-correcting, people can remember more. Stuff will work better. So, the general idea: Brainwallet and miniLock need human-memorable entropy. Actual security needs more entropy. This story bit system increases memorable entropy. Now there are some interesting variations on the scheme. You can go ahead and have what we call split mode, where you have a normal password you use — it allows things to work, but it takes like two hours to crack a key. And then you have a secondary password that's really there to allow for key exchange, so moving a key from one location to another. That one can be set to take a few seconds, while the slow mode, the one that's really in your head, might take hours and hours. The general idea is that yeah, it might take you hours and hours, but the relative slowdown to the attacker trying to brute force your keys from the public key — that guy is screwed. Because he gets to run only one attempt every few hours, a couple of attempts a day. He can wait until the end of time, he's not going to get your money. Two more tricks that we can do. These aren't necessarily good ideas, but we're hackers.
Good ideas are, ahh, not exactly required here. The argument is that no matter how you slice it, at minimum it's easier to remember 24 bits than to remember 80 to 128 bits. So there has long been a desire, on the part of software developers presenting a key for memory, to have: here's the full fingerprint, which we understand you can't remember, and here is a shrunk down version of it. Here's the PGP key ID. Here's the RSA ID. Here's some smaller amount of bits. And the problem is, from the attacker's standpoint, they're like: great, I don't need to collide with the big thing, I just need to match these 24 bits. It will take me a little bit of math, but it won't be that bad. So here's the scheme that you can do. If the user can remember even a small pass phrase that is never submitted to the server, that pass phrase can be a sort of shrinker, or a bit limiter. So you take the full hash, you hash it with the pass phrase, you truncate the result. So the attacker still has to match all 128 bits. He doesn't know the pass phrase. He doesn't know which 24 bits he actually does need to collide with. So this goes ahead and creates a differential between the work effort of the bad guy and the work effort of the good guy. You want things to be easier for the good guy. There is a cost: your legitimate user has to remember something. But the idea is, 10 people who are trying to remember the same key each have their own personal recognition appearance. It's like they see the same face — for themselves, every time they look it's the same face, but the 10 people are seeing 10 different faces. Doesn't matter. They each need to see the same face for themselves. That makes sense. There's a way of doing this in the opposite direction called local stretching. There is a problem where you have some web server — you have some password server.
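The shrinker trick above is a one-liner in practice: hash the full fingerprint keyed by the never-transmitted passphrase, then truncate to something human-checkable. This is an illustrative sketch — the function name, 24-bit default, and hex output format are my choices, not a published scheme.

```python
import hashlib

def short_fingerprint(full_fingerprint: bytes, passphrase: str,
                      bits: int = 24) -> str:
    """Truncated, passphrase-keyed fingerprint. The user compares only
    `bits` bits, but an attacker who doesn't know the passphrase can't
    tell which truncation they'd need to collide with, so they must
    still match the full fingerprint."""
    key = hashlib.sha256(passphrase.encode()).digest()   # fit BLAKE2b's 64-byte key limit
    h = hashlib.blake2b(full_fingerprint, key=key).digest()
    value = int.from_bytes(h, "big") >> (len(h) * 8 - bits)
    return format(value, "06x")   # 24 bits -> 6 hex characters
```

Two users verifying the same key with different passphrases see different 6-character strings — each consistent for its own user — which is exactly the "10 people, 10 faces" property described above.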
It's not particularly doing any work to make it so that when somebody compromises the password store — because, you know, it happens a lot — the attacker can't go ahead and figure out, ah-ha, I got the password store, now I've got the plain text password. What you could do is have the stretching, the complexifier of the password, actually happen in the client. It's like, yeah, the user's remembering ABC123, but it's being expanded out a billion times before it's ever sent to the server. Where things get messy in having software basically generate the pass phrase — the one that is the actual password for a server — is that every freaking website has its own rules for what a password has to look like. Oh, it has to have symbols. It must not have symbols. It has to have 3 numbers, 2 numbers, 8 numbers. By the way, going from a scheme that has to have a letter to a scheme that has to have a number — you know, there are more letters than numbers. You kinda just made things worse. What you can do is kind of a cute trick, and that is you can examine the pattern of the pass phrase that was provided by the user. So the user says ABC123 with a symbol, and that could be interpreted as: requires 3 numbers and a symbol, and that is actually what is stretched or expanded out. So it's inferred discovery of password policies. Kind of a neat trick. So, now we've gone ahead and we've gone into a fair amount of detail on how we generate random numbers, and we've also talked about how we represent arbitrary data. Now let's talk about some [inaudible], okay? The other shoe kind of dropped. IE had more attacks in 2013 than Java, and you know how much we love making fun of Java. How is this possible? Microsoft has been working to secure IE for a decade. How is it that things have fallen so far? So, excuse me for one second. I have to get some water in me. Here's the deal, guys.
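The inferred-policy trick can be sketched as follows: read the shape of the user's sample password, stretch the password client-side, then emit a generated password with the same shape so it passes the site's unknown rules. Everything here is a hypothetical illustration — the class ordering is fixed and there's no shuffling or modulo-bias correction, which a real implementation would want.

```python
import hashlib

def infer_policy(sample: str) -> dict:
    """Inferred password-policy discovery: assume the site's rules are
    'something shaped like what the user just typed'."""
    return {
        "length":  len(sample),
        "upper":   sum(c.isupper() for c in sample),
        "digits":  sum(c.isdigit() for c in sample),
        "symbols": sum(not c.isalnum() for c in sample),
    }

def client_stretched_password(user_password: str, site: str) -> str:
    """Local stretching: expand the memorized password client-side, then
    emit a derived password matching the inferred shape, so the server
    never sees the weak original."""
    policy = infer_policy(user_password)
    seed = hashlib.pbkdf2_hmac("sha256", user_password.encode(),
                               site.encode(), 200_000)   # the slow part
    stream = int.from_bytes(hashlib.blake2b(seed).digest() * 4, "big")
    chars = []
    for alphabet, count in (("0123456789", policy["digits"]),
                            ("!@#$%^&*", policy["symbols"]),
                            ("ABCDEFGHIJKLMNOPQRSTUVWXYZ", policy["upper"])):
        for _ in range(count):
            stream, i = divmod(stream, len(alphabet))
            chars.append(alphabet[i])
    while len(chars) < policy["length"]:                  # pad with lowercase
        stream, i = divmod(stream, 26)
        chars.append("abcdefghijklmnopqrstuvwxyz"[i])
    return "".join(chars)
```

The derived password is deterministic per (password, site), so the user remembers one weak thing while each site receives a distinct stretched one.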
Anyone remember way, way back in the day when Microsoft was just like, you know, Internet Explorer is just Windows, like they can't be separated? Are you crazy? Everyone thought that they were lying. They were full of it. When you actually look at Internet Explorer, that damn thing is Windows: The Remix, featuring The Internet, okay? It really is Windows. You have this object model that was built back in the 90s called COM, and they're like, hell, let's go put that on the internet. It's like if you put memcpy on the internet. Like, hey, we've got this API here and it takes bytes from somewhere and puts them somewhere else. What? You want it secure now? We barely have this working. And don't worry, because of the nature of browsers, because they're all implementing the same standards, everybody ended up with something that looked pretty much exactly like COM. Like, for example, XPCOM from Firefox. It's not exactly subtle. I mean, there is literally what is called an interface description language built into the very standards of the web. This is how it would have to work. Yes, attackers loved attacking plugins, because it didn't matter if you were attacking IE or Firefox or Chrome, it was still Java: write once, own everywhere. You didn't have to customize it at all. You even got to escape all these weird little sandboxes that weren't worth messing around with. And so, you know, as has finally come out: my Tor deanonymization has a first name, it's Firefox 0day, as the grugq might sing. Vulnerabilities in the browsers have overtaken vulnerabilities in their plugins. And that's because we've been exposing the underlying object model to something that it was never really designed to be exposed to: the bad guys. Browsers in their normal operation are constantly allocating memory and associating a type with it. This pointer is to a table. This pointer is to an image. Now, if you're going to be constantly allocating memory, you've got to free it at some point, right?
You've got to reuse that memory. So what happens is a pointer starts out pointing to a table. And eventually the system says, the user's moved on to a new web page, I don't need that table anymore. So that pointer is freed. And then later it says, ahh, I've got space, let's go make an image. Go put an image there. Problem is, if you've looked at JavaScript, there's a million ways to have a pointer to a pointer to a pointer. This table has an image inside of it. This image has a CSS thing inside of it, which has a link back to another image. There's a billion ways for objects in HTML and JavaScript to point to one another. And if you ever get the system to screw up — to free memory while some pointer is still pointing to it — you now have two object contexts, one pointer. And that also works about as well as you'd hope. So that's use-after-free: two different contexts in one allocation. Use-after-free is something like 90 to 98% of the undiscovered vulnerabilities in browsers today. And it's a real problem and it's not going away. Except that it kind of is. Google and Microsoft are actually doing some pretty serious things to deal with use-after-free. The Google solution is a typed heap. So what they're saying is, we're going to keep all the tables together. We're going to keep all the images together, so when you go back and make a second allocation, everything is with its peers. You don't get some situation where you use a method of an image on a table, or a method of a table on an image, because everything is sticking together. And this actually makes it much, much harder to go ahead and exploit vulnerabilities. Microsoft as well is doing stuff. They're doing a lot of things. But the fundamental advantage of their approach is what's called non-deterministic freeing. They're making it so the attacker never quite knows, if they go ahead and pull their stunt, whether it's going to work. It might work. It could work. But they don't know it will.
This is great. This is attacking the unique needs of an exploit. This is counter-exploitation. Sure, you have probabilistic likelihood, but your requirement, your need, is guaranteed reliability. So Microsoft is playing that game. I've got a trick. The trick's called IronHeap, named by Rob Graham, and the idea is really simple. You can't use after free if you just don't free memory. Nope, just not going to do it. There's a little bit of a problem with this, you might know: we don't actually have an infinite amount of physical memory. But, you know, on 64 bit machines we have an awfully large amount of virtual memory. Ahhh. So the basic idea is we never reuse an allocation. Your virtual memory space is 64 bits long, and it is basically mapped: say, this chunk of the address space points to this page, and this chunk points to that page. You keep reusing physical memory, but the pointers — the stuff that's actually exposed to user space, the addresses that are actually exploitable by the various use-after-free techniques — are never reused, because you just have so much room. Look, we've only had 64 bit browsers for a short amount of time. This is the best possible use of 64 bit space. So the idea is that 0x1234123412341234, once it's freed, that's it. It never points again to an image or a CSS script or a JavaScript. That page stays unmapped forever. Exploit that. So, we've got to do some stuff to make this efficient. You can only have a finite number of active allocations. The reason this can work at all is because there's actually hardware acceleration. It's what's called a memory management unit, which very quickly lets a process say: hey, I'm accessing this virtual fake address; figure out really quickly which page of memory has the real content — and figure out really quickly that there is no page. You can't have an infinite number of these mappings, but you don't need an infinite number of these mappings.
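The "addresses only move forward" rule is easy to model. The toy class below is purely illustrative — real IronHeap would recycle physical pages underneath via the MMU, and a retired address would fault in hardware rather than raise an exception — but it captures why a stale pointer can never alias a new object.

```python
class IronHeapModel:
    """Toy model of the never-reuse-an-address idea: the allocator only
    bumps forward, and a freed address is retired forever, so dereferencing
    a dangling pointer always 'faults' instead of hitting a new object."""

    def __init__(self, base: int = 0x1000_0000_0000):
        self.next_addr = base
        self.live = {}        # address -> object currently mapped there
        self.retired = set()  # addresses that will never be valid again

    def alloc(self, obj, size: int = 4096) -> int:
        addr = self.next_addr
        self.next_addr += size        # bump only; never rewind, never reuse
        self.live[addr] = obj
        return addr

    def free(self, addr: int) -> None:
        del self.live[addr]
        self.retired.add(addr)        # this address is dead forever

    def deref(self, addr: int):
        if addr in self.retired:
            # the analogue of touching an unmapped page: a clean crash,
            # not a type-confused object
            raise MemoryError("use-after-free: address retired")
        return self.live[addr]
```

In the 64-bit address space this bump-only discipline runs for a very long time: at 4 KB per allocation there is room for trillions of allocations before the space is exhausted.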
You need to have like the top 10,000 or the top 15,000. Anything else can swap. Guess what? Machines swap. Maybe to disk, maybe to more RAM, but the idea is you handle the 0.01%-of-the-time case. The other trick you can of course do is what is called guard pages. That means you have unallocated space between allocations. This gets a little fat, because now you have to use 8 kilobytes every time, even if you have an 8 byte allocation, and that sort of sucks. But again, you can do this probabilistically. You don't always have to go ahead and stop an exploit from working, you just have to make it enough of a nightmare that the attacker never really knows if their stuff is going to work or not. That is cheating. Good. We get to cheat. We're defenders. Implementation wise, this can be done entirely in user space: you manage memory in user space, you handle the exception when a page is unmapped, you populate it in the exception handler. It can also be run in the kernel, because kernels are already managing page tables. I don't object to doing this in the kernel. I don't object to kernel modules. I'm willing to say the operating system needs to adapt to the new needs of software. Right now, to figure out what permissions are on a page in Linux, you parse a frickin' proc file out of a text file system. We can be more efficient than that. Downsides of IronHeap. For one, there's no code yet, just some proof of concepts. And two, it really does lean on 64 bit, and browsers — particularly Flash — aren't entirely fantastic at 64 bit yet. This is true and it matters. People like their freaking Flash. So what may work today is a piece of software that got really popular — I don't want to say popular, but really was being developed around 2007, 2008. A piece of code called DieHard. It's an interesting secure allocator from Emery Berger of UMass. It's an advanced ASLR implementation with out-of-band heap metadata.
It's a long way of saying that when you allocate memory, it goes to hard-to-predict pointers — you don't really know where it's going to be, and you don't have obvious things for you to corrupt. It defers frees, very similar to the Microsoft allocators, so you don't know when the memory is actually freed. It is eventually — but you know, Internet Explorer actually has a command that says run the garbage collector now. Like, that's not so good. The big thing about DieHard is that it's available today. There's code for Windows. There's code for Mac. There's code for Linux. Probably worth exploring for Tails. There's something different about this stuff. When we have normal ASLR for, like, loading libraries, or when you have what's called NX — which means you can't execute code that you've thrown onto the stack somewhere or the heap somewhere — a lot of other defenses assume the corruption has occurred and say: now that it's occurred, let's make it really tough to figure out what to do, what the end game is that leads to arbitrary code execution. Yes, you've corrupted, but you don't have a path. And this class of defense is different, because you're actually preventing the initial corrupting moment. If the free never completes, the use-after-free can never start. There is nothing going on in infosec that is killing more 0day than use-after-free killers. You cannot imagine how many undiscovered vulns are in this space. I still remember the first time I wrote a fuzzer. It found this stuff. Pow. Come to work every morning and it's like, oh, look at all these — like, dozens. Can I just say, by the way, whenever people who actually write fuzzers hear things like "and they used four 0days" — it's like, dude, I wrote four 0days at lunch! Really, you get the pattern down, and quantity. So here's the plan, okay? IE's got a story around use-after-free. Chrome's got a story around use-after-free. Firefox has got some 0day. Firefox needs a story around memory hardening.
So it's gonna take some time, and it's going to be hard to convince them to give up their beautiful memory allocator jemalloc, but we're going to put together an actual strong secure allocator for production Firefox in the field today. We're going to start by building this for the Tor Browser configurations, for the Tails environment. We're going to get it to be indistinguishable, performance-wise, from real native Firefox. It's not going to be me alone. I'm doing this as a White Ops project, and I've actually managed to recruit Dr. Emery Berger, the creator of DieHard. We're going to go ahead and we're going to get Firefox a heck of a lot safer. That's what we're up to. [Applause] Dr. Berger actually came on board about 80 minutes ago, so I'm really excited to be announcing this. I'm like, dude, you want to join in on this? He's like, hell yeah! So that's what we're going to do about browsers. Now I'm going to talk about DDoS. We're almost there. A little bit left. Everyone having fun here? I know I'm nerding out like crazy, but. [Applause] >> All right. So, we're going to talk about DDoS here, because the bad guys are filling up our pipes. Series of tubes getting clogged here. We had a 100 gigabit flood in 2013. We've had like 114 in 2014, okay? Like, the bad guys are making a mess of things. I had assumed that DDoS had just turned into botnets. You know, you've got millions of desktops, you fire them all at a web site, they all have full stacks, there's millions of machines — like, life sucks. It turns out the biggest and ugliest of the DDoSes are actually something else. They're not using desktop machines. They're using DNS servers. Those are my protocols, damn it. Using NTP servers. They're lying about their source address. They're pinging some machine on the internet. They're causing those machines to reply to some poor target, and it is filling everything up. And I want to take a minute to explain why this variation is quite so damaging. See, here's the deal.
I am a terrible artist, but I've tried to draw the internet. So what we've got here is some source, and then, you know, if you're going to go down one network path you'll take the left fork, and for the other one you'll take the right fork. It spreads out, and eventually, as you get closer and closer to the destination, there are fewer and fewer paths, and finally you're at your destination. Does that make sense? Depending on where you go, you have a fan out and then a fan in. Now if you are flooding a single destination, all your traffic takes pretty much one route — down, down, up, up, up, down — and that's the one route 99.9% of your traffic is going to take. So the most bandwidth you can consume is the least bandwidth of any of those hops. But now let's say we're going to reflect off of machines all around the internet. What happens is that your traffic fans out in all possible directions, hits all of these various name servers, each of which has its own fastest route back to some poor network. And I don't know if you can see it, but the entire damn network is red. Boom. And this is what's going on. And it's ugly and it's not going away. So what are we going to do? Going to lose our toy? I don't want to lose my toy. I like my toy. I like my kitties on Roombas, damn it. So what are we going to do? Ideally there's this thing called BCP38. Ideally there's a technology called uRPF. Ideally networks couldn't lie, so you can't be in the middle of San Francisco and be like, hey, I'm some server in London and I want this DNS reply. Tee-hee-hee. Yeah. I'm not saying that this is a bad idea. I'm just saying there's a heck of a lot of networks out there that don't give a crap about filtering routes. The technology — I'm not saying it doesn't scale, but it totally doesn't scale. Effectively the technology works by dropping traffic. You know what ISPs don't like doing? Dropping traffic. It fails once and they're like, I am never turning this back on again.
I'm not saying we shouldn't strive to be BCP38 compliant. Some crazy paper came out a while ago. They found at least 2,300 networks that don't have it. You only get the real advantage of BCP38 when it's almost entirely universal. Now, it's been a tremendous success for cable modems, for DSL lines, even for a bunch of corporate environments. But the really big pipes that are really able to do huge reflection attacks, they're not getting secured any time soon. At least not if we can't bust them a hell of a lot quicker. So I want to bust them. Here's what we're going to do. We're going to bring back an old idea called stochastic tracing. I did not come up with this idea; maybe I've refined it a bit. I first heard this idea about 12 or 13 years ago. And the idea is you've got a router, and for maybe one out of a million packets it says: hi, I'm this router, and I routed some traffic to you. It doesn't do anything for your standard normal flows, but if you've got a router being made to move billions and billions of packets, you are going to end up with a nice stream of hundreds and hundreds of tracers. And these things actually incrementally improve the internet over time, because there are more and more traces, and it's harder for an attacker to know whether or not their spoofed traffic is going to be quickly traced back to them. As it happens, routers already have most of the technology necessary to do this. It's very simple to configure a router with what is called a GRE stream, to say: please send me one out of every million packets. That can go to an individual daemon. We don't need to patch the router, for God's sake. It just goes to a daemon. The daemon can then go ahead and send this trace packet. And what does it actually mean to send a trace? Because, by the way, I don't want random people on the internet knowing all the traffic I'm receiving. I don't even want them knowing my DDoSes. So the idea is the traces go to the destination of the flows, and maybe the source.
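The arithmetic behind "one in a million packets still gives the victim plenty of tracers" is worth making explicit. A quick back-of-the-envelope sketch (the function names are mine):

```python
import math

def expected_tracers(packets: float, sample_rate: float = 1e-6) -> float:
    """With 1-in-a-million sampling, how many 'hi, I routed this to you'
    tracer packets does the flood victim expect from one router?"""
    return packets * sample_rate

def p_zero_tracers(packets: float, sample_rate: float = 1e-6) -> float:
    """Poisson approximation: the chance a flooding path through this
    router emits no tracer at all and stays invisible."""
    return math.exp(-packets * sample_rate)
```

A 5-billion-packet flood crossing a sampling router yields about 5,000 tracers, and the probability of the path staying completely dark is e^-5000 — effectively zero — while a normal low-volume flow almost never triggers a tracer at all. That asymmetry is the whole point of the scheme.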
There are only two IPs that are really supposed to be seeing the traffic, and that's the source and destination IP. So if you're spoofing traffic, I don't think you have an expectation of privacy, for the network to hide that you were lying about being the guy in London. What does the payload actually look like? That's a good question. Do we encapsulate this in ICMP? Do we make an HTTP POST? I don't know. All I know is, if you're on the internet, you're already receiving crap. I know this because I'm sending you crap. And you haven't crashed yet, I think. There's an alternate option, and this of course comes back to my own history in DNS. Sometimes it's a pain in the butt to hijack, say, all the trace traffic that's coming in over ICMP to an IP, or HTTP to an IP. That might actually be hard. There's a space called reverse DNS: 4.3.2.1.in-addr.arpa. [indiscernible] can say my DNS name is blahblah.foo.com with what's called a PTR record. You can actually host more than just PTR records inside of reverse DNS. So you can host something that actually says: yeah, I know you're trying to do tracing for 1.2.3.4, here's the server that accepts those trace packets — and it's still trust associated with the IP. It's very nice. Obviously we want to sign trace payloads, otherwise you can spoof a whole bunch of things and be like, yeah, I'm totally Verizon, moving all this traffic over Verizon. So we've got to prevent that, and we can do that with Ed25519 keys. So that can work. The long-term vision is to reduce the time between a DDoS and tracing the involved networks. We've got to make this stuff happen faster. Ultimately what we need is to get to the point where DDoS is automatically suppressed, where there can be firewall rules that are automatically applied. Now, this is rough, because there are a lot of hops that are going to see enough to be able to request firewalling. So I don't know exactly how this works.
But what I do know is that these floods are growing fast enough and nasty enough that we may actually get the greatly feared congestion collapse. There may actually be entire countries, entire regions of the globe, that just have too much traffic for anything to work. And I think Paul would basically kill me if I didn't say: by the way, you know, we can patch name servers so they don't participate in this anymore; it's called RRL. But unfortunately that requires so many hosts to upgrade, versus this kind of stuff, which is just incrementally improving. So let's talk about practically the last thing I wanted to discuss in this epically long talk today. I don't know why I said I wanted to do a two-hour talk. Am I trying to kill myself here? All right. I can't just not talk about the whole NSA thing. Can't just leave it to Bruce Schneier. I love Bruce. I'm worried. I'm worried about the long-term reaction to the NSA revelations. See, there are two ways that you can win in technology. You can have the better technology, or you can have FUD. Oh, sure, you could go with that competitor. They're cheaper, faster, nicer, more powerful, but ooga booga, nation state! This sounds ridiculous, but do you realize there really are proposals to go ahead and keep all traffic within a country within that country? This is real: sure, BGP says this route is faster, better, easier, but the law says you have to keep it within our borders, where we can monitor it all. Thank you very much. The internet, again, was not the first time we tried to make the internet. It was the first one that worked at large scale, because you didn't have this sort of political interference. This was a network made by nerds who had one thing they wanted, which was: make this work. Make this work as fast as possible. And a lot of these political actors have a different goal in mind. You know, we didn't want the Chinese internet or the Iranian internet before Snowden, and I hope we don't want that now.
But I look out there and I look at the reactions to the NSA, and especially their manipulation of crypto standards, and there really is only one thing I can think: You maniacs, you finally did it! You blew it all up! Damn you to hell! [Applause] Let me tell you the honest truth about cryptography. And if you don't know, the NSA pushed and got a broken crypto standard into - well, they wanted to have everyone use it, but really the only ones screwed are the Feds. Um, oops! There's a reason we were asking the NSA's advice about cryptographic vulnerabilities. Most crypto functions do not need the NSA's help to suck. They get that for free. It takes us like 10 years to figure out that a cipher or a hash or whatever might actually be safe. And the NSA has actually been pretty good at saying something is wrong here. This thing in DES, it's not good. This thing in SHA-0, it's not good. Excuse me, my throat's a little dry. You know, sucks. Wouldn't it be nice if we had like a department of the government that was dedicated to defense? Like a Department of Defense? They could have like a conference, like a DEF CON. What's happening is that even good advice is being treated with deep suspicion. This is like: hey, yeah, we standardized on this Keccak algorithm, and wow, SHA-3 is slower than SHA-256 in performance. And you know, this is not a controversial opinion. Zooko of Tahoe-LAFS is like, hey, this thing is too slow. The community's like: NIST, NIST is evil. NIST is doing more evil. They've got the NSA pulling their strings. And what's the end result? We are totally replacing NIST with djb. With some guy. Now, I want to be clear, I'm an enormous djb fan. I've been advocating Curve25519 for TLS for literally like half a decade. But fewer trusted resources in a time of great need? Not actually helpful. There's that great quote: Linus scales. Linus does not scale. I have a graph, and this graph explains the problem very clearly.
The NSA has two missions. djb, pretty sure, only has one. This is what's going on here, okay? Like, sometimes one of these guys finds a bug and says, cool. And it's not djb. So we're all kind of waiting for the other shoe to drop. Not implementers, though. If you've noticed, pretty much everything new that comes out just uses djb's stuff. Like, his stuff has won. Everything new is, you know, 25519, or badass, or libsodium. His stuff has won the day with implementers. There's what's called epistemology: how do you know what you know? Academics aren't really allowed to know things without some degree of evidence. This is kind of a barrier to a lot of FUD and B.S. This is kind of the heart of science. We do know that Dual EC DRBG is bad. We have hard evidence, we have math, and so on. But there's another shoe, and that shoe is what's called the NIST P curves. There's this great technology, it's called elliptic curves. Elliptic curves require parameters. There are two major sets of parameters that came out of NIST: the P curves and the K curves. We know exactly how the K curves were constructed. You take this value that comes from a mathematical function. You put it in what is called a nothing-up-your-sleeve generator, to show there's nothing up your sleeve. And now you have your parameters, and everything is good. And then there's the P curves, where that thing that's going into the nothing-up-your-sleeve generator is C49E3608... Well, you know, every shirt has two sleeves. I'm not saying we know that the P curves are compromised. However, if you don't plan on the P curves being compromised, you're an idiot and you should get fired. And in fact, that's pretty much what all implementers have realized. One of probably the most interesting things about Bitcoin is that a long time ago they said, yeah, this P curve stuff is garbage, let's go with the K curves. So this is something that everyone's sort of waiting for. It's not like we know it. But we know it.
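The nothing-up-your-sleeve distinction is easy to illustrate. A K-curve-style construction derives its parameters from a value anyone can recompute, by hashing a seed that has a public justification; the P curves instead feed in an unexplained constant. This sketch (the seed strings and the SHA-256-based derivation are illustrative, not the actual NIST procedure) shows why a hash of a transparent seed leaves the designer no room to hide anything:

```python
import hashlib

def nothing_up_my_sleeve(seed: str) -> int:
    """Derive a parameter by hashing a publicly justified seed.

    Anyone can rerun this and confirm the designer had no freedom
    to choose the output; that is the whole point of the trick.
    """
    digest = hashlib.sha256(seed.encode()).digest()
    return int.from_bytes(digest, "big")

# With a transparent seed like "digits of pi", the parameter is
# auditable. With an unexplained hex blob as the seed, every sleeve
# is suspect: a designer could grind candidate seeds until the output
# had a weakness only they knew about.
```

The derivation is deterministic, so the audit is trivial: rerun the hash, compare the numbers. It's the provenance of the seed, not the hash, that carries the trust.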
The larger issue: and it's not like crypto worked particularly well before the NSA revelations, so, okay. The hard problems in crypto actually aren't the math. They're not actually even the implementations, although GPG and djb's code are good. Key management is the hard part, because this is what actually touches users and organizations. We keep modeling cryptography as math plus implementation, but there's a complexity that goes unnoticed, and that's operations, okay? Every project in the real world - who here works in IT? Okay? Tell me I'm wrong when I say every project needs to be measured only in terms of how many goddamn meetings you need to do in order to complete it. That is the metric. When is the last time you saw meeting count measured in anything to do with cryptography? When is the last time you saw crypto work in the field? Not unrelated things. We have to learn to respect usability. We have to learn that this is the operational requirement. Over at Black Hat, one of the most significant announcements in some time happened. Somehow Alex Stamos, one of the founders of iSEC Partners, got root at Yahoo. I don't know how he did it, but he was like, you know what, I'm going to do end-to-end PGP for Yahoo fricking Mail. Sweet! Because you know what, Google might understand user experience, but Yahoo really, really understands it. So, summary of what we have talked about today, and we're almost done here. Don't let an attacker touch your storage. Let it store, let it execute, but don't let it do both. Actively gather true entropy. Death to cryptographically non-secure RNGs. Please, for the love of God, represent entropy in a way other than bit vomit. Browsers become vastly more secure when use-after-free is prevented, and we are going to fix this in Firefox. The NSA crypto fallout continues; it hasn't gone away. There's been a wholesale movement to djb, and while I'm a fan of the guy, he's one guy. He's a professor at UIC.
Yahoo's PGP news is fantastic, one of the most significant things of the year, maybe of a couple of years. And, just 'cause I can, just one more thing. I totally get to pull this. It's Apple related. Apple went ahead and put into their latest release notes for iOS 8: Safari now blocks ads from automatically redirecting to the App Store without user interaction. If you still see the previous behavior or find legitimate redirection to the App Store to be broken in some way, please file a bug. Yes! Okay, I am tired of browsing something on my phone and it's like, I know you're trying to browse this website, but what I really think you want to do is pay money to Kim Kardashian. No! That is not what I want to do, fuck those guys. Apple wants help; let's give it to them. Now, I talked about this on Twitter before and I said, hey, I'll give you 250 bucks. Screw that, I've got a company. I'll give 5 fricking grand to the first person to go give Apple some bugs they can fix. I want to see this dead off the mobile internet, okay? I've become a believer in bug bounties. This is a complete 180 on my old position. I used to think that bug bounties would lead to crappy bugs and people being idiots and demanding money anyway, and that it would just be cheaper to pay them off. You can't imagine how crappy the bugs that come in from the outside world sometimes are. People have no idea how stuff works and they think they're geniuses. But I was wrong. I'm telling you this. I have seen the results first hand. If you want richer bugs from the outside world, maybe you should make bug finders richer. Turns out that when you actually have a sample of good bugs that you pay for, you can now be like, yeah, we paid for the good bug; your thing's crap. So, we're not just doing 250 bucks. We're going to give $5,000 to the first finder, $2,500 to the second, $1,000 to maybe the next five. If we get more, we might keep throwing more money at this.
I don't know how many bugs there are, but we believe this sort of behavior is toxic. It destroys the mobile web. It is some of the worst stuff we see in advertising, and I even run a company that fixes advertising. This is still offensive. And I think hackers can fix things, and I'm willing to put my money where my mouth is. So, thank you all. This has been a blast.