We have got a great talk to wrap up the first day. Justin is going to talk about secure messaging, and this is a pretty important topic. I think everybody in this room generally understands the need for this, and I think this is going to make the case for the people that we know at home, the muggles that we know. I have got a couple of secure messaging apps on my phone, and right now they do a great job at secure messaging and giving me a list of all the hackers that I know and none of the muggles that I know, so hopefully this will help us get that address book a little bit bigger, right? So let's give Justin a big hand. [Applause]. [CHEERS].

>> My name is Justin Engler. I am with NCC Group, and I'm here to talk about secure messaging for normal people. This is not a talk for crypto geeks. The idea here is to give people who don't already know about this topic something to start with, so they can spin up their knowledge, look at all the different apps that are available, and make their own decisions about which ones to use. Before we really dive into this, I would like to take an informal poll: how many people in here are crypto geeks? Okay. The door is over there. Any journalists? A couple. Lawyers? A couple. Normal people who don't fit any of those categories? Oh, quite a few. All right. Good.

So the goal of this talk is to lay out the foundations of what a crypto app or secure messaging app does without getting into the really heavy stuff that will scare away the newbies. We're not going to cover any research, there's not going to be any math, we're not going to do any cryptography or cryptanalysis or talk about specific applications (I'll get to that in a second), and we're not going to talk about any crypto in really, really fine detail. So this is my slide to try to scare the crypto geeks out the door. What we are going to talk about is really basic crypto stuff, often oversimplified; I'm going for about 80% right just so it doesn't get too crazy. That's the short list of things we're going to cover in this talk: the different types of threats that will go after the messages you send, how you can defend yourself, how those threats will counter the defenses that you put in, and then, at the end, a list of things that crypto apps say they do that don't really do what they sound like.

My job with NCC is application penetration testing, so my job is to break applications, and as it turns out I have broken quite a few crypto messaging apps, both the kind that are advertised as secure and also messaging apps that are part of some other, you know, larger platform. I have so many customers that all do this stuff that I can't talk about any of the apps specifically. There are a couple of times where I'll talk about a piece of software, or about larger things like OSes or platforms, but the actual messaging apps, I'm not going to say anything about them. If you ask a question about one, I'm not going to answer. So that's where we're at with that.

We have got one more piece before we dive into the deep stuff. I'm going to say the word "government" a lot. I don't necessarily mean this government or that government. If I have a specific government in mind, I will name it. So whenever I say "government," assume a government of some type.
It's important to address the standard "I'm not doing anything wrong, so why do I have to hide anything?" argument. There are a lot of people in the world who live under a system of government where they are being censored or oppressed, and these kinds of apps are useful for people in those places to do what they need to do to try to get themselves a better life. So even if you think that you don't need these things, other people need these things. Furthermore, in the U.S. there are even things you are legally allowed to keep secret from the government. For example, attorney-client privilege is one where the courts pretty much say you're supposed to keep this stuff away from them, so that's another good counterargument to the whole "nothing to hide" thing. All right.

So we're going to talk about messages. For the purpose of this talk, a message is just when two people, in this case Alice and Bob, want to send some sort of data from one person to the other. We're not going to focus on messages between two computers or system updates or anything like that. We're really talking about Person A wants to communicate with Person B, and until we get a little deeper, the actual type of network that's involved doesn't matter and the devices being used don't matter. I don't care if it's a desktop, a phone, a landline phone, or a postcard going through the mail. They all follow this general pattern: if you don't encrypt anything, then there are eavesdroppers, and in most cases they can read everything that you say passively. What that means is they don't have to do any extra work. They just sit and record everything that goes by, pick it up, and then analyze it. If there's no encryption, that means they get both the message content and the metadata, and we'll talk a little bit about the difference between those two. Most of this talk will be about the content, and then later we'll have a separate section about metadata.

So the first question is: which app should you use? I don't know which app you should use, because I don't know who you are. What we really need to do is think about why you want to use these apps and who you think is out to read your stuff, and then use that to determine what kinds of features you need in an app so you can make a good choice. These are examples of people who might need to use a secure messaging app. I fall into the first category: I have things like financial data that are important, and I also just think it's wrong for people to be listening in on my stuff, so I try to encrypt what I can. Businesses obviously have a huge need to protect things like financial data, business plans, trade secrets, that kind of thing. Activists will want to protect what they're doing from whatever government might move against them, and that might not even be the government where they live. You can certainly imagine cases where an activist lives in Country A but is doing something that Country B doesn't like, and Country B acts against that activist even though they don't live there. Harassment victims are a category that was actually added fairly recently. I talked to someone who told me that she used encryption because she had a lot of online harassers, and she was worried that if she didn't encrypt things they would be able to get ahold of that data and use it against her, so that's why she used it.
Journalists, if they're dealing with sources who have important information, will want to protect their communications with those sources, and lawyers, as we talked about, have good reason to protect their communication with their clients.

So once we know who you are, then we can talk about the people who are out to listen to your stuff. We divide our threats along two axes. An opportunistic attacker is interested in collecting as much as they can about everybody; they don't have a particular person in mind. A targeted attacker is obviously the opposite: they know they are after Justin, so they are going to look at that person specifically. The other axis is resources. A low-resource attacker might be something like a single hacker or a small group, whereas a high-resource attacker might be a large company or a government, things like that. These different types of attackers have different means available, and as we go through the different security methods and the countermeasures against them, we'll try to highlight which attackers they work against.

I have to cover just a tiny bit of crypto before we can cover anything else, but this shouldn't be too painful. For the rest of this talk, if I say something is encrypted, what I mean is that it's impossible to read or modify if you don't have the key. In real life that's not necessarily true; there are all kinds of other things that can go wrong with something that is supposedly encrypted, but we're not going to cover any of that in this talk. We are still going to cover why things can go wrong if you send an insecure message.

Public key and private key is kind of a difficult concept because the naming isn't great. You can think of a public key as the blueprints for a lock. I send you the blueprints for a lock; you can build the lock, lock something with it, and send it back to me, and I have the only key that can open it. No one else can open it. In this way, everyone would be able to send me something that no one else could read except for me. Signatures are kind of the inverse of that, and they also use public and private keys. I can sign something, and then you can use my public key to verify that I'm the one who actually signed that document, or whatever it is. You can think of signatures this way: the public key is like a signature sample that somebody wrote in a book, and you're comparing the signature on the new package, or on the check that was written, to that existing signature to see if they match. Except it's all done with math, so for the purposes of this talk it can't be done wrong.

Fingerprints: people who are privacy sensitive start learning about cryptography, and then people start asking them for their fingerprints and they get freaked out. When we're talking about fingerprints here, what we mean is something that serves as a kind of shortened version of something else. Usually we use this because keys are really long, so we'll take a fingerprint of the key, and then we can share that fingerprint with someone and they can tell whether they got the right key or not. Lastly, the word "trust." That doesn't mean I trust you to drive my car or pick up my laundry or anything like that. Trust in this case means I am confident in your identity and I'm also willing to let you vouch for someone else's identity. This one's tricky because I might accidentally say the word "trust" and it won't be clear which sense I mean, so if it's not clear, somebody raise your hand and ask what I meant and I'll tell you. Okay.
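To make the signing and fingerprint ideas concrete, here is a minimal sketch in Python using the `cryptography` package. It isn't from any app discussed in the talk; the message and the shortened fingerprint length are just illustrative.

```python
# Sign with a private key, verify with the public key, and take a short
# "fingerprint" (hash) of the public key so humans can compare it.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat
from cryptography.exceptions import InvalidSignature
import hashlib

private_key = Ed25519PrivateKey.generate()   # only the owner has this
public_key = private_key.public_key()        # anyone can have this

message = b"Meet at the usual place at noon."
signature = private_key.sign(message)        # "sign" with the private key

try:
    public_key.verify(signature, message)    # anyone can check the signature
    print("signature matches")
except InvalidSignature:
    print("signature does NOT match")

# Keys are long, so share a short fingerprint of the key instead.
raw = public_key.public_bytes(Encoding.Raw, PublicFormat.Raw)
print("key fingerprint:", hashlib.sha256(raw).hexdigest()[:32])
```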
So, Transport Layer Security, also known as TLS. The old version is SSL, and you'll often hear people use the terms interchangeably; for this talk, they're the same. The problem with the very first step of sending a message to someone else is that you need to know how to get that message to them. The easiest way for that to happen is to have some server somewhere that both sides know about. I can send a message to the server, and then either the server will send it on to the right person, or the other person will connect to the server too and get the message. Transport layer security is one way we can try to secure this type of message as it goes across. You can see that the eavesdroppers there can't listen to this specific kind of traffic. The little lock in your browser, that's TLS. So a passive attack doesn't work against TLS, because the traffic is encrypted between the server and the party that's sending the message: no eavesdropping.

However, there's a naive way to do TLS, and a lot of people did it this way a long time ago; it's not as bad as it used to be. If you say "yes, I'll accept an encrypted connection" and you don't bother to find out who you connected to, then the bad guy, instead of just passively eavesdropping, can pretend to be the server you intended to talk to. It's encrypted, so you think it's fine. On the other side, the attacker makes another encrypted connection to the real server and sends your data along, but now she has seen it, because it was only encrypted to the attacker, and she can read the traffic, modify it, whatever. This is harder to do than a passive attack, because you have to be there actively man-in-the-middling stuff to make it work, but it's not that hard. In fact, it actually scales pretty well. If any of you work at businesses that are fairly large, almost all of them do this: on the corporate network, they'll actually man-in-the-middle your traffic, because they have already installed a certificate that says "oh yeah, you can trust this server." You don't talk to the real server; you talk to the middle server, and it checks the security and passes it along. Governments can do this too. Again, harder than a passive attack, but it scales well.

So to solve this problem, you need to verify that the server you're on is the server you actually wanted to talk to. TLS has a thing called certificate authorities that handles this problem, and all your browsers do this by default already. If we were to go back to one of these diagrams, it is likely that at this point Bob would be getting a warning in his browser saying "we don't think this connection is secure," and then Bob would probably just click through it. ( LAUGHTER )

>> If you're using an app on your phone, hopefully the authors of that app made it so there is no way for the user to bypass that: if it's not a secure connection, it just stops. So let's explain how that works. When you make a TLS connection to a server, that server sends you back a certificate. The certificate is essentially just a list of identifying information about what the server is, and it's signed by a certificate authority. Your browser or your operating system has a list of all the certificate authorities it trusts, so if this certificate was signed by one of them, then you know that a certificate authority that you trust is vouching for the identity of the server.
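Here is a minimal sketch of what "checking who you connected to" looks like in code, using Python's standard ssl module. The hostname example.com is just a stand-in for a messaging server; the point is that the default settings verify the certificate chain and the hostname and fail closed if anything is wrong.

```python
# TLS from a client, done the non-naive way: the default context checks
# that the certificate chains to a trusted CA and matches the hostname.
import socket
import ssl

context = ssl.create_default_context()   # trusted CA list + hostname checking on

with socket.create_connection(("example.com", 443)) as sock:
    # If the certificate is bad, wrap_socket raises and we never send any data.
    with context.wrap_socket(sock, server_hostname="example.com") as tls:
        print("negotiated:", tls.version())
        print("server certificate subject:", tls.getpeercert()["subject"])
```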
The problem with the CA system is that there are a whole bunch of certificate authorities, and all of them can vouch for anyone. I looked at Firefox today and there were 90 different CAs in there, everything from private entities in the U.S., essentially businesses that do this as a business, all the way up to the Hong Kong post office. There are a couple of other ones that are clearly... so the weird thing is that it's not likely these types of attacks would be used against you while you're doing online banking, but if a government is more interested in you specifically, they might find a way to forge a certificate for you and inject it into that stream to mount a man-in-the-middle attack against you.

There is a way around that too. It's called certificate pinning. On the client side, instead of trusting any certificate as long as it was signed by somebody, you say "I already know which certificate I'm expecting." Maybe I have talked to the server before, or maybe this server is part of some app that I'm already using, and that app knows which server it's supposed to talk to, so we'll just mark it, and we'll know that this certificate or this public key is the one we're supposed to see. Then if something else comes up, it's just the same as if the man-in-the-middle guy didn't have any kind of signature, and it just fails. This is great, because now, instead of having to trust all 90 of those CAs, you have shifted your trust risk. Now, let's say you're on Android and you get this crypto messaging app from the Google Play store: maybe Google could have modified it and sent it to you, and the same with Apple if it came through their App Store. But that's the only party you have to trust now, just Apple or Google and the app developer, instead of all those other CAs and everyone else who has access.

So back to TLS again. Even if we pretend the app is doing TLS totally correctly, pinning, validating, and everything is going well, there's still a huge problem with TLS that makes it totally insecure for a lot of secure messaging applications, and the problem is this: we're encrypting between Bob and the server and between Alice and the server, but in between, while it's on the server, it's not encrypted. It's totally in the open, and that means whoever runs the server, whoever can bring pressure to bear against the people who run the server, or whoever can hack that server can still read the clear text of the communications. Also, say someone runs a messaging service because they can sell you targeted ads; this is another way to do that. Even though the communication is encrypted to the server, they would still be able to read all the stuff you're talking about and then send you ads based on those things. Most instant messaging or e-mail that has some security on it stops here, because so many providers are interested in targeted advertising. If something is actually advertised as a secure messaging thing, it usually goes a little further.

So let's talk about the next step in the process. What we'd really like to see is, instead of the encryption going between Alice and the server and between Bob and the server, we have encryption between Alice and Bob, so now all the server sees is garbage as it goes through and it can't read anything. The server is still in the middle.
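To make the end-to-end idea concrete, here is a minimal sketch in Python (using the `cryptography` package) where Alice and Bob share a key the server never sees. How they safely agree on that key is exactly the key-exchange problem discussed next; here the key is simply handed to both sides.

```python
# End-to-end: the server only relays bytes it cannot read.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

shared_key = AESGCM.generate_key(bit_length=256)   # known to Alice and Bob only

# Alice's side
nonce = os.urandom(12)
ciphertext = AESGCM(shared_key).encrypt(nonce, b"the actual message", None)

# What the server sees, stores, and relays: opaque bytes, no content.
relayed = (nonce, ciphertext)

# Bob's side
plaintext = AESGCM(shared_key).decrypt(relayed[0], relayed[1], None)
print(plaintext)   # b'the actual message'
```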
The server just can't read the text, because all it sees is encrypted data, and it just has to pass it along. So how do we make this happen? Before, we had the CA system, and we could download an app that already had a certificate baked in; that's how we got keys for TLS. But now Alice and Bob need to have their own keys, and they need a way to exchange those keys so they can encrypt messages to each other. The easiest way to do that is to ask the server. Alice wants to send a message to Bob. She doesn't have Bob's key. She says "hey, server," the server gives back the key, and then she starts up the encrypted conversation with Bob and sends the message. Anybody see the problem here? What if the server just says "this is definitely Bob's key," but it's not, and gives back a key that the server knows? Then on the other side it sends the wrong one back again, and now the server is the endpoint of the encryption instead of Bob, and all the stuff can be read again. This is definitely a tougher attack than the other man-in-the-middle, because now, instead of just some random person in the middle, it pretty much has to be that server, or someone who has hacked that server, that does it. But it can still be done.

To prevent this one, in a similar fashion to how we talked about verifying that the server you're talking to is really the server you wanted, now Alice needs a way to determine whether that key is really Bob's or not. This process is called key validation, and the idea is to prove the ownership of the key. Ideally, whatever app you're using to send the messages will make you do this only once. Once Alice has sent one message to Bob and done the validation scheme, it's fine, and you don't have to redo the tedious process until some event happens that causes you to have to rekey.

There are a couple of different ways to make this happen. The simplest one is TOFU, trust on first use: the first key Alice gets back that claims to be Bob's, she just assumes is Bob's key. This is really simple; Alice doesn't have to do any work. If any of you use SSH, it looks like this model. The bad news is that if the adversary who's trying to eavesdrop on you is already there for the first connection, then you just stored their key instead of the one you wanted. The other, probably bigger, problem is that if Bob drops his phone in the toilet and later Alice tries to send a message and gets a wrong key back, there's no way to solve the problem now. She sees there is a key mismatch; she could ask him over the messaging thing, "hey, did you drop your phone in the toilet?" and the eavesdropper says "yes, I dropped my phone in the toilet." You're kind of stuck at this point.

So here's something a little bit better: out-of-band validation. Here is the fingerprint we were talking about before. Keys are really long, so instead we take a fingerprint of the key and share it over some medium besides whatever messaging app we're trying to secure right now. You could do that in person. You could do it over the phone. You could do it over SMS. You could use some sort of social media, post your public key somewhere, anything you want that gets the fingerprint across without using the channel that's not secure yet, because it's not secure yet. One of the nice things is that it doesn't have to happen during the communication itself. You could do it some other time, set up the keys, and then later you'll know that the later communication is secure. If you do in-person verification, that is pretty good.
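Here is a minimal sketch of what TOFU plus out-of-band fingerprint checking can look like. The known_keys store, the contact names, and the prompts are hypothetical, not taken from any real app.

```python
# TOFU with an out-of-band check: store the fingerprint on first contact,
# warn loudly if it ever changes.
import hashlib

known_keys = {}   # contact name -> fingerprint we verified before

def fingerprint(public_key_bytes: bytes) -> str:
    return hashlib.sha256(public_key_bytes).hexdigest()[:32]

def validate_key(contact: str, key_from_server: bytes) -> bool:
    fp = fingerprint(key_from_server)
    if contact not in known_keys:
        # First contact: show the fingerprint so the user can read it to the
        # other person in person, over the phone, via SMS, any *other* channel.
        print(f"New key for {contact}: {fp}")
        print("Check this with them out of band before trusting it.")
        known_keys[contact] = fp
        return True
    if known_keys[contact] != fp:
        # Key changed: maybe Bob dropped his phone in the toilet, maybe
        # someone is in the middle. Re-verify out of band before sending.
        print(f"WARNING: {contact}'s key changed! Do not send until re-verified.")
        return False
    return True
```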
If Alice and Bob already know each other, then it's pretty easy for them to just show each other their keys, or maybe your app has a way to import keys directly when you're face to face. That's really tough for an attacker to get in the middle of. Also, the fact that it can be ad hoc is nice. If it were part of a protocol, the attacker might be able to see when it's going to happen and get in the middle, and so on. If I just randomly SMS my buddy and say "hey, here is my key," the attacker would have to be waiting for that, be able to recognize that my SMS has a public key in it, and then intercept it, change it, and send it on. That's hard.

The bad news is that you're limited to the security of that second channel. For example, there are a lot of secure messaging apps that do things like: here is a number, now read it over the phone to the other person, and if the connection is secure, the thing on their screen will match what you just read and we're good. But that assumes that whoever your adversary is isn't able to fake you reading numbers, and that sounds hard, but there has been some research in the past year that makes it sound doable. In addition, if you're talking about voice over the internet, people are used to weird, choppy audio, so there's a good chance it might be believable even if the attacker has to synthesize a fake voice for you. The other problem, obviously, is that you have to have that second channel already there. If someone wants to talk to me over the internet and I have never talked to them before and don't know them at all, it's really hard to find some other way to validate their key.

Another way to do this is to rely on the trust of others to build your own trust. Alice wants to send a message to Bob. Alice doesn't know Bob's key. But Alice knows Carol, Carol knows Dave, and Dave knows Bob. So by chaining those keys along, as long as those people can all verify each other, Alice can build a trusted connection to Bob by getting the key through them. There are a couple of different ways transitive trust can work. The web of trust is the most common one; it's what you see in PGP. The web of trust is very convenient, but essentially there's a public graph that shows who knows whom, so you can automatically look it up, which means that if someone's interested they could build this graph of everyone who knows whom and start to do analysis based on it. Furthermore, let's go back and look at this graph. Frank, up there in the corner, doesn't know anybody. So if Frank wants to send a message to Bob, there's no way for the web of trust to help him out, because he has no connections, so he'll have to use some other method, like TOFU, to bootstrap his way into the web of trust. Crypto parties are the usual way to do that: you go to a crypto party, meet a bunch of people, hopefully those people know a bunch of people, and now you're hooked into the web. The other, slightly more private, way to do this is trusted introduction. We use the same graph we just did, but instead of it being automated and looked up on a server, it happens organically: Alice asks Carol, who asks Dave, who gives the information back, so it's all ad hoc. That means there's no server that already has the map of who knows whom.
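Here is a minimal sketch of the transitive-trust idea: Alice searching for a chain of people she trusts that ends at Bob. The graph below is the made-up one from the example; in a real web of trust it would come from signed keys.

```python
# Breadth-first search for a chain of introductions from one person to another.
from collections import deque

trusts = {
    "Alice": ["Carol"],
    "Carol": ["Dave"],
    "Dave": ["Bob"],
    "Frank": [],          # Frank knows nobody, so no path can start from him
}

def trust_path(start: str, target: str):
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for nxt in trusts.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(trust_path("Alice", "Bob"))   # ['Alice', 'Carol', 'Dave', 'Bob']
print(trust_path("Frank", "Bob"))   # None: Frank has to bootstrap some other way
```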
Speaking of metadata, let's talk a little bit about metadata. Everything we have talked about so far was pretty much message content: what did I actually write to the other person? Metadata is all the stuff that you can learn about a message without knowing the content: who is the sender, who is the recipient, what time was it sent, how big was the communication, and often what app was used to communicate. This stuff alone is enough to get you in trouble. If you are a high-level government employee and metadata shows you communicating with a whistleblower journalist, that alone is enough to get you in trouble, even if they can't read a word of what you said. So this stuff is important. And it's even worse if they see that you're also using an encrypted app to do the communication, right?

Metadata can be picked up in a few different places. That server in the middle that we have been showing connections through could easily be logging all this traffic. Even if it's all end-to-end encrypted, the server still knows who sent messages and who they went to and all that stuff. It probably knows the IP addresses they came from, so if you did this from your house or your phone, it might be able to tie that to an actual identity in the real world. A lot of messaging apps require a phone number, which makes it even more difficult to be anonymous; you can go buy a phone with cash and whatever, but a phone number still makes it much easier to track you. And finally, a lot of these apps will ask you to upload your contacts; some will do it automatically, some will ask you first. The idea is that it pulls in all your regular contacts so you can easily use this new app to send messages to all the people you know, but the server has now gotten this whole social graph of the people you know, even if you never send them a message. Furthermore, even if the server software you're using is open source, you don't have any way to prove that the thing they're actually running is not doing all this stuff. Even if they say they're not logging, blah, blah, blah, you don't know what's actually running on the server side, so they could easily be logging when they're not supposed to be. Also, they might say that they're going to take your contact list, take all the phone numbers you know, and hash them so no one can figure them out. I'm waiting for the groan from the crypto geeks who stayed: the space of possible phone numbers is small enough that you could easily brute-force that hash, so no, that's not going to be enough. The details are outside the scope of this talk.

Okay. So even if the server operators are the good guys, we still have to worry about everybody else trying to collect metadata. The ISP could be doing it. If a government has taps on various lines, it could be doing it. Again, they're going to have the IP addresses of whatever connects; they might have phone numbers or other phone identifiers and that kind of stuff, and they could definitely force the servers involved to hand things over. But they can also often infer what's going on. Even if you assume the server is not colluding with these government attackers who are trying to steal your stuff, you end up with this situation: Alice sends a message to Bob at midnight. From the government eavesdropper's point of view, Alice sent a message to the server at midnight, and the messaging server sent a message to Bob at midnight or maybe slightly after midnight. By doing those kinds of tricks, they can infer who is talking to whom even if they don't have access to the server.
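Going back to the hashed-phone-number claim for a second, here is a minimal sketch of why hashing alone doesn't help much: the space of phone numbers is small enough to walk through. The number and the prefix below are made up for illustration.

```python
# Brute-forcing a hashed phone number: try every number in a block and compare.
import hashlib

def hash_number(number: str) -> str:
    return hashlib.sha256(number.encode()).hexdigest()

uploaded_hash = hash_number("+17025550123")   # what the app sent to the server

for n in range(7_025_550_000, 7_025_560_000):  # one prefix: only 10,000 tries
    candidate = f"+1{n}"
    if hash_number(candidate) == uploaded_hash:
        print("recovered number:", candidate)
        break
```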
Another way to do that kind of inference is with size. You might be able to tell that Alice and Bob are communicating because you saw Bob send a packet of a particular size, and then the server happened to send a packet of that particular size out to Alice. Now you can assume that's who was communicating, and not that Bob was talking to Frank or Alice was talking to Carol or any of those things. To do that inference, you can look at connections, timing, size, and the existence of traffic at all; this is a traffic confirmation attack. If the app allows anyone to send you a message, let's pretend the government has a van parked outside and they want to see if some alias, some fake name, is really you. They send a message of a certain size at a certain time to that alias, and then they watch your WiFi from the government van to see if something of the same size comes through. If it does, they can link those two things together.

This is probably an okay time to talk about Tor. Tor will protect you from the first thing only. Instead of the government knowing that you have connected to this secure messaging server, they know you have connected to Tor. There's really deep stuff about how much of the Tor network an adversary has to control or watch to break that, and it's possible to do in some cases. Not to mention the fact that in some cases just using Tor is evidence enough. There was a kid who called in a bomb threat to Stanford because he didn't want to take his exams; I think it was Stanford, it might have been an East Coast school. He did this over the internet, used Tor, and thought he was safe because Tor was protecting his IP address so no one could see it. But he sent it from his campus dorm room, so the IT department said "oh, well, look, there are only like five people using Tor on this whole campus, let's ask all of them," brought each one of them into a room, and said "you, you were the bomb threat person, right?" So just the fact that you're using some of these things is often evidence enough, until we get to the point where everyone uses this stuff ubiquitously all the time and we can give those kinds of people cover. Not bomb threat people necessarily, but people who are using it for legitimate reasons.

Another thing to talk about, if we're going to talk about secure messaging, is what happens when the device that has the messaging app gets taken by your adversary, whoever that may be. If there were any logs in the app that show who you were talking to and what you were talking about, they're stored, and the adversary gets all of that. If you had a contact list in the app, the adversary gets that, and they can start using it to build metadata. They also most likely get your keys, and if they get your keys, that means they can impersonate you to all of your friends. There is another interesting wrinkle to what happens when your adversary steals your keys. We talked before about how a passive attacker can't read encrypted data but can still record it; they can just sit there and record all this encrypted garbage. Later on, they find out who you are and they take your phone. If they have all that recorded data, they might now be able to use the keys off your phone and decrypt all the stuff they have been storing all this time.
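Going back to the traffic-confirmation idea for a moment, here is a minimal sketch of matching messages by timing and size alone. The observations are made up; the point is that no keys or content are needed, the encrypted blobs just have to line up.

```python
# Correlate traffic seen near Alice with traffic seen near Bob by time and size.
events_near_alice = [(0.00, 1184), (12.40, 2312), (95.10, 1184)]   # (seconds, bytes)
events_near_bob   = [(0.35, 1184), (47.00, 512),  (95.60, 1184)]

def correlate(sent, received, max_delay=1.0):
    matches = []
    for t1, size1 in sent:
        for t2, size2 in received:
            if size1 == size2 and 0 <= t2 - t1 <= max_delay:
                matches.append((t1, t2, size1))
    return matches

# Two events line up in both time and size: strong evidence that Alice and Bob
# are talking to each other, even though everything was encrypted.
print(correlate(events_near_alice, events_near_bob))
```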
The way to prevent that record-it-now, decrypt-it-later problem is to have forward secrecy. There is a lot of crypto mumbo jumbo involved, but the idea is that you stack a temporary key on top of the key you're already using, and since the attacker was passive and not active, they weren't able to man-in-the-middle the exchange and they won't be able to steal that temporary key later. It's sometimes called perfect forward secrecy; for this talk, it's the same thing.

So besides all the things we just talked about, which are all important, you need to think about things like: does my app use forward secrecy? Does my app use end-to-end encryption? Does my app do some sort of key validation? Those things are all very important, but they're only a piece of the puzzle. We kind of waved away the crypto stuff and said "yeah, if we say it's encrypted, it's secure," right? In the real world, that's not how it works. There are a bunch of things that could go wrong. The app could have other, non-crypto-related vulnerabilities where the attacker can take over the app, and when messages come in, the attacker can read them as the app decrypts them. So if you really want to be confident in the app you're using, you need to have somebody audit the app. Since I know that all of you are normal people and not crypto auditors, you probably can't do it yourself. You have a couple of options: you could use an open source app, and then anyone can audit it and tell you what they found. If you're using something that's closed source, then you're going to have to have somebody else audit it and you look at the results. If someone audited it but you can't see the results of the audit, then that audit wasn't of much use to you; it could have said "everything is broken" and all you see is "it was audited," right? So you need some assurance not only that it was tested by somebody, but also that either it did well on the test, or it did poorly, things got fixed, and there was a retest that came back fine.

A lot of crypto geeks will tell you that it has to be open source or you cannot have a secure app. I am probably going to get beat up later, but I'm going to say that's not true. Just because anyone could audit an app doesn't mean that anyone actually will. There are plenty of things that are open source that no one has time to look at. And because we're talking to normal people here, it's not like I can just tell you "audit the app yourself," so what matters is that you're using an app that has been audited.

Even if you assume the app you're using is good, then you have to think about the OS. Is the OS you're using open source? Because if not, all the backdoor concerns a crypto geek will raise about a closed source app now apply to the OS too. The iPhone has a little bit of open source in it; Android has more, but both of them have a whole bunch of closed source that you can't audit, so you can't even say "I'm using Android, therefore I'm open source." The OS is important. But even if you had something with no crazy binaries that no one could audit, you still have the firmware on the phone, which is going to be closed source. The things that run the radio on the phone are closed source, and updates to them can be automatically received and applied over the phone network. So if you are thinking about attackers that have a lot of power, they could push something straight into that firmware, and even though the rest of your phone is fine, it becomes irrelevant. The same goes for the hardware. So just having an open source messaging program isn't enough by itself.
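Going back to forward secrecy for a moment, here is a minimal sketch of the "temporary key stacked on top" idea using ephemeral Diffie-Hellman (X25519) from the Python `cryptography` package. Real protocols do a lot more; in particular, authenticating the ephemeral keys with long-term identity keys is assumed here, not shown.

```python
# Ephemeral keys are made per conversation and thrown away, so stealing a
# phone later doesn't decrypt traffic that was recorded earlier.
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes

# Each side makes a throwaway key pair for this session only.
alice_ephemeral = X25519PrivateKey.generate()
bob_ephemeral = X25519PrivateKey.generate()

# They swap public halves and each derive the same session key.
alice_secret = alice_ephemeral.exchange(bob_ephemeral.public_key())
bob_secret = bob_ephemeral.exchange(alice_ephemeral.public_key())

def session_key(shared_secret: bytes) -> bytes:
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"session").derive(shared_secret)

assert session_key(alice_secret) == session_key(bob_secret)

# After the conversation, both sides delete the ephemeral private keys.
# A recording of the encrypted traffic plus a phone stolen next month
# no longer yields the session key.
del alice_ephemeral, bob_ephemeral
```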
On top of all that, we have another problem. Let's say that somebody you trust has audited an app, whether it's open source or closed source, and based on whatever they said, you're happy that this is an app you want to use. So now: where did you get the app? You didn't build it yourself, and even if you did build it yourself, you didn't read all the source code and make sure it matched what was audited; at least, I have never met anyone who does that. You downloaded it from some website somewhere, or you got it from an app store, and every one of those sources is a way to attack you, because they could have added code with all kinds of bad stuff in it that was never audited. This is a really hard problem to solve, and there is not a good solution yet. What you need is something called a deterministic build. Then the auditor can put in the report, "here is the hash (I guess I didn't talk about hashes; it's like a fingerprint) of the build that I audited," and later, when you get the app from the website or the app store or wherever, you can verify that you get the same fingerprint the auditor got, and you know exactly what you're getting. We're not really there yet. We'll probably never be able to do that on iOS because of the way the system works. On Android we're almost there; you can do it manually, and maybe we'll have an automated way to do it eventually, which would be nice.

All right. Almost done. There is a whole bunch of things that crypto apps tell you they do that are not what they sound like. Auto-delete: there are a bunch of apps that have things like "you send this message, it has a time limit, and after that no one can read it anymore." The problem is that the person on the other side isn't necessarily using the same client you are. They could have a modified client, or a third-party client, that still receives these messages but doesn't follow the rules about when it's supposed to delete them. Once you send a message to someone, it's theirs. Worst case, they just take a photo of their phone's screen, right? Related to this, any apps that say they notify you about or prevent screenshots: same story. Someone could be using a third-party version of the app that doesn't do those things and still receives the messages, so they can take all the screenshots they want and you'll never know.

One-time pads. This is a little bit of a deep topic, but essentially a one-time pad is an unbreakable form of cryptography. That does not mean that an app claiming to use one is unbreakable. What makes a one-time pad good is that you have a long sequence of random data and you use it to encode everything you're going to send, but you're never going to have a long enough pad for all the things you'll ever want to send. So at some point you have to get more one-time pad data to the other person, and whatever channel you use to get it there, you have now collapsed all your security down to however good that channel is, because you can't use a one-time pad to send more one-time pad.

Hardware crypto: there are a few devices out there where you plug the thing into your phone and it does magic crypto, and the phone can't read it, because you don't trust your phone; it might have been hacked. The problem is that the phone could instead just turn on its microphone and listen to what's being said, even though this other thing is sending encrypted data.
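Going back to deterministic builds, here is a minimal sketch of the verification step. The file name and the published hash are placeholders; the point is just "hash what you downloaded and compare it to what the auditor said they audited."

```python
# Compare the SHA-256 of a downloaded package against the hash from an audit report.
import hashlib

AUDITED_SHA256 = "replace-with-the-hash-from-the-audit-report"

def file_sha256(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

downloaded = file_sha256("secure-messenger.apk")   # hypothetical file name
if downloaded == AUDITED_SHA256:
    print("This is byte-for-byte the build the auditor looked at.")
else:
    print("Mismatch: this is NOT the audited build. Don't install it.")
```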
There are a lot of apps that say your message will only be received by people within one mile of you, or whatever; there are a bunch of different ones that do that. It's based on what the client reports, so if I were at home in Washington State, I could have my client say "I'm at DEF CON" and I would start getting all the messages of people who are at DEF CON. There's nothing that enforces it. Mesh networking: instead of using a server connection, you're connecting to other people nearby and sending data that way. These aren't any more secure than any other app. They still need all the same crypto stuff on top of that mesh network, because any adversary nearby who's listening can pick that traffic up just like anybody else can. Military grade is a fun one. There are a lot of things that advertise themselves as military grade. This usually means a specific type of crypto algorithm, but it doesn't address any of the stuff we talked about in this talk at all. So even if it's "military grade," all the things we just talked about could still be totally wrong. A good way to think about military grade is: this car is safe because it has a bulletproof windshield. It doesn't tell you anything about the rest of the car. And generally, if someone's using a secret magic crypto method that no one has heard of before, it has probably never been tested, which means it probably won't hold up. That's generally how it works: you want to use things that are well understood and broadly used. Multiple devices is a tough problem. If you have an iPad and a computer and an iPhone and an Android device, and you want someone to send a message to you from their crypto app and have you receive it on any of those devices, that's a pretty tricky problem, because now either the devices have to sign each other's keys, or you have to have multiple identities, or the server has to manage it all, and then the server can add new devices whenever it wants. It turns out to be a really hard problem.

All right. So even with an app that does everything right and solves all the things we talked about, you're still not going to be totally effective against all the different types of adversaries. The low-resource people, you can mostly stop. The high-resource opportunistic people, you can stop from doing bulk message collection, so they can't necessarily read all of the data, but metadata is probably still on the table and very difficult to handle. And as for targeted, high-resource attackers, you're never going to win against them just by choosing the correct app. You'd have to do things like go to spy school and learn tradecraft and make sure they never steal your phone, and, you know, they could buy 0-days that work against your phone and use them. You're not going to win against this. The choice of your crypto app is not going to solve the problem of a really powerful entity coming after you specifically.

So what can you do? You need to understand what you're trying to secure and who you're trying to secure it from. You need to understand the features that the apps are offering. You need to decide if the app actually does the things it says it's going to do. And you need to find a way to get that app in a secure manner.
I can't tell you what the best thing for you to use is, because that's something you have to decide for yourself. Key validation is probably the biggest thing; if you can only take away one thing from this talk, it's key validation. If you're using an app where you can't figure out how to do key validation, assume that it isn't doing key validation and treat it accordingly. EFF took a lot of flak for this, but I think it's great for learning what the different apps can do; hopefully they're going to update it soon, and it's a great starting point for looking at the secure apps that are out there. So that's all we have got. Thanks to Cara for the diagrams of the hacker guy, and to Tom, my crypto guru back in the corner. If you have questions about deep math stuff, I'm not going to answer them, but he can. The white paper that covers most of this is on your DEF CON CD. It will also be on the website, and we'll be putting up an updated one, as well as the slides, probably next week. [Applause].

>> All right. So I have got one question. On your phone, what is it that you use for secure messaging? >> I can't answer that. I said at the very beginning I can't talk about specific apps. I can't do it. >> Is there anything that you won't use that people might be tempted to use? >> If you are worried about real attackers, you need to use something, at a bare minimum, that does key validation. That's the best I can tell you, and there's a good list of different apps that are at least popular in this crowd that can all do key validation. I wish they would all get along so we wouldn't have to have five different apps, but you can't win. >> All right. Well, that was very good. Let's give him a hand. [Applause]