>>'Kay, welcome everyone. We're gonna get started here, but I'm gonna welcome up Radia Perlman for her talk. Thank you. [applause]
>>Thank you. [cheering] [applause] [laughing] Okay, so this talk is actually sort of not as funny as my usual talks, but if I manage to [crowd aws] get it done quickly enough, I have some funny slides at the end. At any rate, I'm going to talk about how to build systems where some of the components don't just crash but start misbehaving. I'll give a few examples [clears throat], the caveat being that I'm not gonna give you a cookbook where, if you do A, B, and C, you'll have a system that's resilient even if you have malicious participants. I'm giving you three different examples to show you the variety of problems and the variety of solutions. There is no cookbook; I just want people to be aware of this issue. So the term for kind of not crashing but behaving badly is a Byzantine failure, from a classic computer science problem known as the Byzantine generals problem. All sorts of things can be subverted by a very small number of malicious participants, and there's this talk that you should Google; you all have the slides, so you don't have to copy down the URL.
It was about trying to crowdsource some sort of puzzle. There were thousands of participants, and ultimately there was one bad guy who managed to completely wreck it; they had to give up, and that is astonishing. So read that paper, it's really cool. But then there are things that work when all my intuition says there's no way they possibly could. Like Wikipedia [giggles]. It ought to be just full of people ranting about whatever, but instead it's remarkably concise, well written, and accurate. And eBay too: you want to buy something, you search, you find it from some vendor you never heard of in a country you never heard of, you send them money, and you get the widget that you bought. It's amazing. So I'll talk about three different examples. The first example I'm going to talk about is trust models for PKI, and I'll explain what that means. What's a PKI? It's a way for me to find out what your key is. I want to talk about trust models where dishonest or confused CAs can't damage the entire world. First I'm going to talk about the three models that are currently deployed and why none of them will work, and then I'll talk about how I think it should work, which is actually quite simple, and I have no idea why they don't do it.
So, a quick review of public keys. There's this thing called a CA that signs a certificate, which is a little message saying Alice's public key is this number, and signs it. When Alice wants to talk to Bob, she sends her certificate, Bob now knows her public key, Bob sends his certificate, and now they can authenticate and encrypt and do all that good stuff. What people tend to think about: academics worry about the math, the provable security of the algorithms they use, and standards bodies worry about the format of the certificates. Both of those things are important, but people don't really think about the trust model. It's voila! Here's a format for certificates. And no, that's not the end of it. So I've given these various models sort of cute names. The first model I'll talk about I call the Monopoly Model, where you choose one organization, universally beloved and trusted by absolutely everybody. You embed their public key in absolutely everything, everybody has to get certificates from them, and it's nice and simple. So Alice is configured to trust monopolis.org's public key, Bob sends a certificate signed by monopolis.org, and everything's fine. But what's wrong with it? Well, there is no such thing as a universally trusted organization.
There's also the problem of monopoly pricing, because the more widely deployed it gets, the harder it is to convert, so you're willing to pay more money rather than try to reconfigure everything. And that one organization can impersonate anybody. The next model is kind of what's in your browsers. I call it the Oligarchy Model, where you're configured with 100 or so trusted public keys, and a server you talk to can present a certificate signed by any of those, so it does eliminate monopoly pricing. So Alice is configured to trust any of a large set of public keys for CAs, and Bob sends a certificate signed by any of them. What's wrong with it? Well, it's less secure, because now there are hundreds of organizations, and all you have to do is find an employee who can be bribed or threatened or confused into issuing a key. Now, an important enhancement is the ability to have chains of certificates. Bob does not need a certificate signed by something Alice immediately knows; he can present a certificate chain. So Alice trusts X1, and Bob gives three certificates: one where X1 says that alpha is X2's key, another where X2 says beta is X3's key, and another where X3 says delta is Bob's key.
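The chain walk just described can be sketched in a few lines. This is a minimal structural model under my own assumptions, using the talk's names (X1, X2, X3 and keys alpha, beta, delta); a field comparison stands in for real signature verification, and real validators also check expiry and revocation:

```python
# Toy model of certificate-chain validation (structural only; a real
# system verifies digital signatures, validity dates, and revocation).

def verify_chain(trusted, chain, target_name):
    """Walk the chain; each cert must be issued under a key we already trust."""
    name, key = trusted  # e.g. ("X1", the key Alice is configured with)
    for cert in chain:
        if cert["issuer"] != name or cert["issuer_key"] != key:
            return None  # cert not signed by the key we currently trust
        name, key = cert["subject"], cert["subject_key"]
    return key if name == target_name else None

chain = [
    {"issuer": "X1", "issuer_key": "k1",    "subject": "X2",  "subject_key": "alpha"},
    {"issuer": "X2", "issuer_key": "alpha", "subject": "X3",  "subject_key": "beta"},
    {"issuer": "X3", "issuer_key": "beta",  "subject": "Bob", "subject_key": "delta"},
]
print(verify_chain(("X1", "k1"), chain, "Bob"))  # delta
```

If any link is signed by a key the verifier doesn't yet trust, the walk fails and no key is returned.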
With these three certificates, Alice can find his key. The next model I'll talk about I call Anarchy. Nobody tells you who to trust. You just meet people, you believe they're trustworthy, you get their public key on a business card or something, and you configure it into your machine. And every time you have a gathering of nerds, like at IETF meetings (maybe you have it here), you have a PGP key-signing party with some sort of ritual where you say who you are and what your key is. Anyone can sign certificates, and then there are public databases where, if you've signed certificates, you can donate them into the database. If you need to search for a key, you can look through these public databases to piece together a chain to the name you want. So for instance, if Alice is configured to trust those two keys, there might be a certificate in there somewhere that says alpha is Bob's key, and maybe a different certificate somewhere in this web of whatever that says beta is Bob's key. Somehow Alice has to piece together a chain through this enormous database, and then how can she know whether it's actually trustworthy?
So it's the kind of thing that you can deploy, and it seems great as long as it's in a small community of all trustworthy people. But once you have bad guys who are going to pollute the database with bad stuff, and also just the sheer size of it, with billions of people each signing 10 certificates, it won't work. Also, more or less anyone can impersonate anybody; it only takes a few really bad guys. So now I'm going to get to how I think it should work. There's an important concept, which is that instead of saying a CA is trustworthy or it's not, you say that a CA is trusted for certain things. The policy that makes sense is: the name by which you know me implies whom you trust to certify my key. So if you know me as radia.perlman@emc.com, you would trust a CA associated with EMC to validate that key. If you know me as roadrunner279 on whatever social network site, you'd trust them to certify the key for roadrunner279. The fact that these identities are all the same carbon-based life form is totally irrelevant. [smacks lips] In order to make this work we need a hierarchical namespace, and ta-da, we have it: it's DNS. So each node in the namespace is going to be represented by a CA.
So here you have a namespace, and there's a CA associated with each node. What people tend to think of is: everyone's configured with a root key, the root certifies a.com's key, and a.com certifies its children. But that's not gonna be quite good enough, so I'm going to change it a little bit to what I think is good enough. What's wrong with this model? You still have a monopoly at the root, and the root can impersonate everybody. So I want to change that with just a little thing. The model that I recommend is that for each arc in the namespace, you don't just have the parent certify the key of the child, but vice versa: the child certifies the key of the parent. That means you don't have to be configured with the root. You can be configured with any key and be able to navigate anywhere you want. You could start there, or you could even start with your own key. And if you start at x.a.com, when you're looking for names in that little subtree there are fewer CAs to trust. It doesn't matter if the root is completely compromised, because you're not gonna go through the root in order to get to guys near you in the namespace.
Another enhancement is cross-certificates, where any node in the namespace can certify any other key, and we're going to need it for two reasons that I'll show you in pictures. One is so that you don't have to wait for the whole PKI for the whole world to be connected. Instead you could create a little PKI in organization a.com, a little PKI in organization xyz.com, they cross-certify, and then anyone in the a.com namespace can reach anybody in xyz.com without needing the whole thing. The other reason you might want to cross-link is to add security: you can bypass portions of the hierarchy that you don't trust. So if someone down at the bottom of that red arrow cross-certifies to xyz.com, then if you don't trust the root you can get to xyz.com without having to go all the way up. Everybody in that little subtree can get to the names in xyz.com without having to go through the root. So the navigation rules: you start somewhere, and you go up as much as necessary to either get to a common ancestor of the name, or to a cross-link that gets you to a common ancestor. Now it looks kind of like I've just done the anarchy thing, because anyone can certify anybody else, but the rule is you don't follow a cross-link to explore where it leads.
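The navigation rule just stated can be sketched as a small search. This is my own illustrative model with invented names and cross-links (q.x.a.com, s.xyz.com), not the talk's exact example: climb only toward the root, and take a cross-link only when it lands at an ancestor of the target.

```python
# Sketch of trust-path navigation: go up only as far as a common
# ancestor of the target name, or a cross-link that lands at one.
# Cross-links are never followed just to explore where they lead.

def ancestors(name):
    """'x.a.com' -> ['x.a.com', 'a.com', 'com', ''] ('' is the root)."""
    parts = name.split(".") if name else []
    return [".".join(parts[i:]) for i in range(len(parts))] + [""]

def trust_path(start, target, cross_links):
    """Chain of CAs from start to an ancestor of target, or None."""
    target_anc = set(ancestors(target))
    path = []
    for node in ancestors(start):          # climb toward the root
        path.append(node)
        if node in target_anc:             # reached a common ancestor
            return path
        linked = cross_links.get(node)
        if linked in target_anc:           # cross-link to target's ancestor
            return path + [linked]
    return None

# Bypassing a distrusted root: x.a.com cross-certifies xyz.com directly.
links = {"x.a.com": "xyz.com"}
print(trust_path("q.x.a.com", "s.xyz.com", links))
# ['q.x.a.com', 'x.a.com', 'xyz.com'] -- the root is never touched
```

With no cross-link, the same lookup would have to climb all the way to the root, which is exactly the dependence the cross-certificate removes.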
The only reason you would follow a cross-link is if it gets you to an ancestor of the name that you want to look up. The advantage of this is that if you deployed it in your organization, the security that's presumably most important to you, authentication between resources in your own namespace, never leaves your organization: the trust paths are all CAs controlled by you. The whole rest of the world could be compromised and it won't affect you. And malicious CAs can be bypassed and damage contained. Okay, so the next example I'll talk about is network routing. It happens to be what my thesis was about: how to build a network that works even if some of the switches are malicious. A traditional switch looks at a packet, and it has some sort of forwarding table which tells it which direction to send the packet. Now, how is the forwarding table computed? Well, often through a distributed algorithm where all the switches exchange information and from that decide how to compute their own tables. An example of a way of doing it is called link state routing. Here I have a picture of a network. There are 7 nodes; there's a link between [lip smacking] B and C with a number 2, which means that they are connected and the cost of the link is two.
So that's just a simple little network. Each one of these nodes is responsible for creating what I call a link state packet that says: I am A, I have a neighbor B at a cost of 6, and D at a cost of 2. This piece of information is then forwarded to everybody else, so everybody will have that database at the bottom of these 1, 2, 3, 4, 5, 6, 7 link state packets. That gives you complete information about the graph, and you can compute paths and all that. So that's a common routing algorithm, and the reason I'm telling you this is that I'm basing the resilient routing on it. So what can a malicious switch do? It can give false information in the routing protocol. It can claim to be connected to someone it's not. It could flood the network with garbage data. It could forward data in random directions, resetting the hop counts so things look new. Or it could do everything perfectly, but it doesn't like you, so it throws away your packets. There are all sorts of traditional approaches that people have thought about. One is a reputation system where you try to decide who the bad guys are. I think that would be really hard, especially if somebody doesn't like you but is performing well for everybody else; how would that work?
People also think, okay, if you have bad guys you have to do some sort of troubleshooting to find them, but no: if a bad switch notices the troubleshooting going on, it can start behaving well. Also, in some of the routing protocols they try to do things to enforce correctness, like if A claims it's connected to B, you won't believe the link unless B also claims to be connected to A. And then there's secure BGP, where you take this [giggles] unbelievably fragile, configuration-intensive, inefficient protocol and add signing for every single hop. But even so, what can these things do? At most they can make sure that the routing protocol is behaving properly, but you don't care whether the routing protocol is working; you care whether the data packets get delivered. Anyhow, these things have nothing to do with what I did for the thesis. I proposed this as a thesis topic, and I wanted to guarantee that A and B can talk provided there's at least one honest path connecting them, no matter how malicious every switch other than that path was. They agreed: yes, that's an important problem, it's difficult, and it would be worth a thesis. And then I thought about it and said, oh, how embarrassing, I know how to do it.
There's a protocol called flooding, which is that whenever you receive a packet you send it to everyone except whoever you got it from, and presumably you have a hop count so things don't loop forever. And this works: packets from A to B will get there, as long as there's one non-faulty path, if there's infinite bandwidth. And whoops, there aren't infinite resources. So it just became a resource allocation problem. The finite resources in the switches are the amount of compute power, the memory, and the bandwidth. Compute we wave away: we say you can engineer a switch with enough compute power to deal with the speed of the incoming links. So then it's just memory in the switches and bandwidth on the links, and what you do is reserve a buffer for each source, and for bandwidth you round-robin through all the packets, so every source has a chance to be forwarded on every link. The source has to sign the packets so that someone can't inject packets and use up your buffer, and you also have to put a sequence number on them so that someone won't inject your old packets and starve out your new packets. So this is all extremely simple, and I'm not sure whether I could have gotten a thesis right there. [laughs] Obviously it's sort of a little too simple, but it's actually very good for flooding.
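The per-source reservation scheme just described can be sketched as a tiny switch model. This is my own illustration, not the thesis code: HMAC with a shared secret stands in for the per-source digital signatures so the example runs self-contained, and the buffer is one slot per source.

```python
# Sketch of robust flooding resources: one reserved buffer slot per
# source, newest sequence number wins, round-robin across sources.
# HMAC stands in for per-source digital signatures (illustration only).
import hmac, hashlib

KEYS = {"A": b"a-key", "B": b"b-key"}  # hypothetical per-source secrets

def sign(src, seq, payload):
    msg = f"{src}:{seq}:{payload}".encode()
    return hmac.new(KEYS[src], msg, hashlib.sha256).hexdigest()

class Switch:
    def __init__(self):
        self.buffers = {}  # one reserved slot per source: src -> (seq, payload)

    def receive(self, src, seq, payload, sig):
        msg = f"{src}:{seq}:{payload}".encode()
        expected = hmac.new(KEYS[src], msg, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(sig, expected):
            return  # a forged packet can't consume the source's buffer
        old = self.buffers.get(src)
        if old is None or seq > old[0]:
            self.buffers[src] = (seq, payload)  # replays can't starve new data

    def round_robin(self):
        """One packet per source, so no source can hog a link."""
        return [(src, p) for src, (s, p) in sorted(self.buffers.items())]

sw = Switch()
sw.receive("A", 1, "old", sign("A", 1, "old"))
sw.receive("A", 2, "new", sign("A", 2, "new"))
sw.receive("A", 1, "old", sign("A", 1, "old"))  # replayed old packet: ignored
sw.receive("B", 7, "hi", "forged-signature")    # bad signature: rejected
sw.receive("B", 7, "hi", sign("B", 7, "hi"))
print(sw.round_robin())  # [('A', 'new'), ('B', 'hi')]
```

The two checks mirror the talk exactly: the signature protects the buffer reservation, and the sequence number stops old packets from starving new ones.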
If you want to send something that gets to everybody, it's useful for that, and it turns out that for actual link state routing this is pretty much how they do it. I sort of noticed that if you just remove all the cryptography from it, it's actually a perfectly reasonable way of flooding routing information. There are two things that you would want to use flooding for. One is so that you don't have to configure everybody with everybody else's public key. Instead you have a trusted node, everybody knows its public key, and what it floods is: hey, this is everybody else's public key. So it reduces it from an N-squared problem. The other thing you use flooding for is link state packets. So we now know how to do robust flooding in a way that's actually reasonably practical. But it would not be very efficient to send every data packet to everybody. You want to unicast instead, and so what you do is use link state routing, having everybody send these link state packets. The problem, though, is that now everybody has a link state database and can compute paths, but just because a path should be there based on that link state information doesn't mean it actually will work, because somebody on the path may not like you and may throw away your packets.
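The path computation from the flooded link state database can be sketched as follows. Only the A-B cost 6, A-D cost 2, and B-C cost 2 links come from the talk's figure; the rest of this 7-node topology is invented so the example is complete, and Dijkstra's algorithm stands in for whatever computation each node runs:

```python
# Each node floods a link state packet: (node, {neighbor: cost}).
# Everybody ends up with the same database and computes its own paths.
import heapq

lsp_db = {  # links A-B=6, A-D=2, B-C=2 from the talk; others invented
    "A": {"B": 6, "D": 2},
    "B": {"A": 6, "C": 2, "E": 1},
    "C": {"B": 2, "F": 3},
    "D": {"A": 2, "E": 2},
    "E": {"B": 1, "D": 2, "G": 4},
    "F": {"C": 3, "G": 1},
    "G": {"E": 4, "F": 1},
}

def shortest_paths(db, source):
    """Dijkstra over the flooded database of link state packets."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for nbr, cost in db[node].items():
            if d + cost < dist.get(nbr, float("inf")):
                dist[nbr] = d + cost
                heapq.heappush(heap, (d + cost, nbr))
    return dist

print(shortest_paths(lsp_db, "A"))
```

Note that A's cheapest route to B here is A-D-E-B at cost 5, not the direct cost-6 link, which is exactly the kind of decision the shared database enables; and, as the talk says, none of this guarantees a computed path will actually deliver packets.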
Also, in traditional network routing, I forward the packet to you and then I have to trust you to make a good decision, and I just thought it would be too hard to make sure everybody had the same decision. So I just said, okay, let the source have [heh] its fate in its own hands. The source chooses the path, digitally signs something saying this is the path I want, and the routers along the path remember that and reserve resources for that flow. I'm not gonna go into all the details; all these things you can find papers about, but this is just to give you a feeling, a flavor, for different problems. So then people say, well, how can you choose a good path? I dunno, this is where I wave my hands. If you have had a path to some destination and it worked, then you feel good about those routers. If you have a path to someone and it doesn't work because your packets aren't getting through, then you're suspicious of the guys on the path. You don't know which one might be the bad guy, so you just try to find a different path, and hopefully you'll eventually succeed. [clears throat] So it's not terribly scalable, since every path requires state, and it requires the source seeing the entire path, and the way the internet works is you partition it.
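The source-routed setup just described might look something like this. This is a speculative sketch under my own assumptions, not the thesis protocol: HMAC with a shared key stands in for the source's digital signature, and "reserving resources" is reduced to remembering the next hop per flow.

```python
# Sketch of source routing with a signed path: the source signs the
# whole path, and each switch on it verifies the signature before
# installing state (next hop + reserved resources) for the flow.
import hmac, hashlib

SOURCE_KEY = b"alice-key"  # hypothetical; stands in for a signing key

def sign_path(path):
    return hmac.new(SOURCE_KEY, "/".join(path).encode(),
                    hashlib.sha256).hexdigest()

class Router:
    def __init__(self, name):
        self.name, self.flows = name, {}

    def setup(self, flow_id, path, sig):
        expected = hmac.new(SOURCE_KEY, "/".join(path).encode(),
                            hashlib.sha256).hexdigest()
        if not hmac.compare_digest(sig, expected):
            return False  # tampered path: install nothing
        if self.name in path:
            nxt = path.index(self.name) + 1
            # remember the next hop for this flow (resource reservation)
            self.flows[flow_id] = path[nxt] if nxt < len(path) else None
        return True

routers = {n: Router(n) for n in ["R1", "R2", "R3"]}
path = ["A", "R1", "R2", "R3", "B"]
sig = sign_path(path)
assert all(r.setup("flow-1", path, sig) for r in routers.values())
print(routers["R2"].flows)  # {'flow-1': 'R3'}
```

Because no router can alter the signed path without the signature failing, the source really does hold its fate in its own hands.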
You partition it hierarchically, so that nobody outside of a region needs to know the details inside the region. There's a more [coughs] recent paper that I co-wrote on how to do this all in a much more scalable way. Okay, so now, oh, how am I doing on slides? I might actually have a chance for the funny ones. Okay [laughs], so the third topic is kind of a problem that I was looking at, which is how to make data disappear. You want the data to be resiliently there unless you want it gone. So there's a trade-off: you need to make a lot of copies to make sure you don't accidentally lose it, but if you have a lot of copies, it's hard to make sure that it absolutely disappears. There's a paper about that as well. We'll do it based on expiration time: when you create a file, in the metadata you're gonna say what the expiration date is, and then after that, no matter how many copies the cloud has made of it, once it expires it should be impossible to recover. Even though backups still exist, you know, on mag tape disconnected from the network, if you find one of those it won't help. So obviously what we're gonna do is encrypt it and then throw away the keys.
But to avoid prematurely losing the keys, you're gonna need to make lots of copies of the keys, and once you make lots of copies of something, you can never be guaranteed that you'll get rid of all the copies. So how are we going to solve this? The first concept, to make it a lot more scalable, is you decide that all files with the same expiration date can be encrypted with the same key. That means if you have a granularity of a day, rather than a microsecond, for expiring, and go up to 30 years, that's about 10,000 keys. So imagine that the cloud or the file system keeps a special piece of hardware that it doesn't make copies of, with 10,000 keys, one for each expiration date. The file will have the expiration date in its metadata, which tells you what key it's encrypted with, and once the date expires, you forget that key. It's nice and simple. And for the cryptographers who would whine, oh, you're encrypting everything with the same key: no, you do an extra level of [laughs] indirection, where you encrypt the file with a random key and you encrypt that key with the per-expiration-date key. So there are only about 10,000 keys. But the question is, how do you make a backup of the master keys? Because I said I didn't want you to make a lot of copies of them.
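The indirection just described can be sketched in a few lines. This is my own toy model, with invented dates: XOR against a hash-derived keystream stands in for a real cipher (do not use it as one); the point is only that deleting one per-date master key makes every copy of the file unreadable.

```python
# Sketch of keys-per-expiration-date: each file gets a random key, that
# key is wrapped under the master key for the file's expiration date,
# and expiring a date means forgetting exactly one master key.
import os, hashlib

def xor_cipher(key, data):
    """Stand-in cipher: XOR with a hash-derived keystream. NOT real crypto."""
    stream = hashlib.sha256(key).digest()
    while len(stream) < len(data):
        stream += hashlib.sha256(stream).digest()
    return bytes(a ^ b for a, b in zip(data, stream))

date_keys = {"2031-01-07": os.urandom(16)}  # one master key per day

def store(plaintext, expires):
    file_key = os.urandom(16)  # random per-file key (the indirection)
    return {
        "expires": expires,
        "wrapped_key": xor_cipher(date_keys[expires], file_key),
        "ciphertext": xor_cipher(file_key, plaintext),
    }

def read(blob):
    master = date_keys.get(blob["expires"])
    if master is None:
        raise KeyError("expired: the master key for this date is gone")
    file_key = xor_cipher(master, blob["wrapped_key"])
    return xor_cipher(file_key, blob["ciphertext"])

blob = store(b"secret memo", "2031-01-07")
print(read(blob))             # b'secret memo'
del date_keys["2031-01-07"]   # the date expires: forget one key,
# and every copy of the blob, on any backup tape, is now unreadable
```

The cloud can copy `blob` as many times as it likes; only `date_keys` lives in the one piece of hardware that is never copied.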
It would be pretty bad if that got lost. So imagine a service, and I call it an ephemerizer 'cause it makes things ephemeral (a word that I made up). An ephemerizer will advertise public keys with expiration dates, it'll throw away each private key when it expires, and until it expires it will decrypt things for you. So the ephemerizer publicly posts: here are 10,000 keys and the expiration dates that go with them. The way the storage system uses it, you're not gonna talk to ephemerizers unless you lose that database of master keys. That's the only time you'll talk to it. So to create a backup of the master keys, you take your January 7th key, encrypt it with the ephemerizer's January 7th key, do this for each of your 10,000 keys, and that's what your backup looks like. [lip smacks] And you'll encrypt it one more time, but that's the basic idea. So first I'm going to talk about how wonderful it is, and then I'll apologize, because I know if you think about it for 20 seconds you'll be annoyed, but then I'll explain why you shouldn't be annoyed. You only talk to the ephemerizer if your hardware with the master keys fails, so maybe once every 3 or 4 years you'll talk to it. The ephemerizer can support hundreds of millions of customers without even knowing they're the customers.
The only time you talk to it is when you need it to help you recover the backup of your keys. And it's really scalable: 10,000 keys works no matter how many customers there are. So now, yes, you'll probably be a bit [giggles] annoyed, because haven't I just pushed the problem onto the ephemerizer? It has to reliably keep each private key until it expires, and then it has to throw it away. There are two ways the ephemerizer can fail: one is that it forgets a private key prematurely, and the other is that it doesn't forget the private key when it's supposed to. Let's only think about one of them at a time; we can solve both. Let's worry about prematurely losing the key. I claim I can design it so the ephemerizer can be flaky. I want an ephemerizer never to make copies of its keys, so it will generate the key pairs in tamper-proof hardware and do the private-key operations there. There's only one copy of the private key. Then you'd say: oh gee, what happens if it fails? Then all of its hundreds of millions of customers will have an unpleasant surprise if they ever need their backups restored. So how many copies of the private key do you think the ephemerizer should keep in order for you to feel safe? Let's pick a number and say 20.
Well, the trick is that instead of using a really stable ephemerizer that keeps 20 copies of its keys, it's easier to design your system so that there are zero copies of your key than it is to limit it to exactly 20. So instead of one ephemerizer with 20 copies of the keys, you can get exactly the same robustness by using 20 independent ephemerizers with independent keys. The independent ephemerizers can be different organizations, on different continents, in different countries; they don't know about each other and they don't coordinate with each other. So when you're making a backup of your master keys, instead of just encrypting S1 with the first ephemerizer's key for that date, you also encrypt it with the second ephemerizer's key, and so on. If you use 20 ephemerizers, you take what was a trivial amount of storage, multiply it by 20, and you still have a trivial amount. So what if an ephemerizer doesn't destroy a private key when it should? Well, then you can use a quorum scheme: instead of taking exactly the key and encrypting it 20 times, you break it into 20 pieces such that it requires, say, 3 of the ephemerizers to reconstruct it. So it's very simple, with almost zero performance overhead over a traditional system.
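One standard way to get that "any 3 of 20" behavior is Shamir secret sharing; this sketch is my own construction (the field prime and the API are my choices, not from the talk). Any k shares reconstruct the secret; fewer than k reveal nothing.

```python
import random

# Arbitrary field prime (2**127 - 1 is a Mersenne prime); secrets must be < P.
P = 2**127 - 1

def make_shares(secret, k, n):
    """Shamir k-of-n: sample a random degree-(k-1) polynomial with
    f(0) = secret and hand out the points f(1) .. f(n)."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):   # Horner evaluation mod P
            acc = (acc * x + c) % P
        return acc
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 from any k shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret
```

You'd split each master key into 20 shares, one per ephemerizer; recovery needs any 3 of them, while a couple of ephemerizers that fail to delete on time learn nothing on their own.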
Okay, because we're going to be short on time, I just want to say one other thing, which is that there's this really cool protocol for asking the ephemerizer to decrypt. I want the ephemerizer to be a relatively untrusted third party. What you'd like to do is send him the encrypted master key for January 7th and say, please decrypt it, and he applies his private key; but then he'll see your master key for that date, and that's not very good. So instead I do what I call blind decryption: there's a way of taking this encrypted bundle, which is your January 7th key encrypted with his key, encrypting it another time, and telling him to use his private key on this bundle of bits, which removes his public-key layer but leaves it still encrypted, so he gains no information from it. This is a super lightweight protocol. It's less computation than a single SSL connection, and it's very few bits, just the size of an RSA modulus. Anyway, I purposely put the details in the slides, so in case you actually care, you'll have them.
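Blind decryption can be sketched with RSA blinding, which is one way to realize what she describes (my own toy illustration: textbook RSA with small parameters and no padding; a production system would use proper key sizes and padding).

```python
import math
import random

# Toy RSA key (two Mersenne primes; real keys are 2048+ bits with padding).
p, q = 2**31 - 1, 2**61 - 1
n, phi = p * q, (p - 1) * (q - 1)
e = 65537
d = pow(e, -1, phi)   # the ephemerizer's private exponent

def encrypt(m):
    # Done by the backup owner long ago, with the public key (n, e).
    return pow(m, e, n)

def blind(c):
    """Owner re-encrypts the ciphertext under a random blinding factor r,
    so the ephemerizer never sees the plaintext."""
    r = random.randrange(2, n - 1)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n - 1)
    return (c * pow(r, e, n)) % n, r      # Enc(m) * Enc(r) = Enc(m*r)

def ephemerizer_decrypt(blinded):
    # The ephemerizer applies its private key but learns only m*r mod n.
    return pow(blinded, d, n)

def unblind(response, r):
    return (response * pow(r, -1, n)) % n

m = 123456789                  # stand-in for a wrapped master key
blinded, r = blind(encrypt(m))
assert ephemerizer_decrypt(blinded) != m   # server never sees the plaintext
assert unblind(ephemerizer_decrypt(blinded), r) == m
```

The whole exchange is one RSA-modulus-sized blob each way and one private-key operation on the server, which matches the "lighter than an SSL handshake" claim.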
So the general philosophy is this: you can achieve robustness by using lots of flaky components, and the failures are truly independent because they're different organizations, with independent clocks and so forth. Now, in contrast: when I would try to tell people about this, they would say, "Oh, I don't want to decide the expiration date when I create the file; I want to pick a file and delete it." And I was saying, but that's ugly. Look at this, isn't this cute and pretty and scalable? Eventually I got tired of explaining why you couldn't do that, and I figured out how to do it, and it's actually scalable and all that, but it's a horrible idea because it has sort of an interesting failure model, so I'll tell you about that. So the concept is that every file will have its own key. If you have a million files in your file system, there'll be a data structure with a million keys in it, and every file's metadata will have a key ID that points to one of those keys. That's nice and simple. When you want to do an assured delete of a file, you remove that key. When you make a new file, you add a key to that structure. So this is all nice and straightforward, but how do you make a backup of that? Now, with the first use of ephemerizers, the ephemerizer doesn't need to know about you at all.
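A minimal model of the per-file-key idea (names and the toy XOR cipher are mine; the cipher stands in for real authenticated encryption):

```python
import os

class PerFileKeyStore:
    """Every file has its own key; assured delete = drop that key.
    (Toy XOR cipher standing in for real authenticated encryption.)"""
    def __init__(self):
        self.key_table = {}  # key_id -> key; this is what must be backed up
        self.files = {}      # name -> (key_id, ciphertext)

    @staticmethod
    def _xor(key, data):
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

    def put(self, name, plaintext):
        key_id, key = os.urandom(8).hex(), os.urandom(32)
        self.key_table[key_id] = key
        self.files[name] = (key_id, self._xor(key, plaintext))

    def get(self, name):
        key_id, ct = self.files[name]
        return self._xor(self.key_table[key_id], ct)  # KeyError if deleted

    def delete(self, name):
        key_id, _ = self.files.pop(name)
        del self.key_table[key_id]  # stray ciphertext copies are now unreadable
```

The key table itself is now the precious, frequently changing thing, and backing it up is exactly the problem the next part of the talk tackles.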
The first scheme is time-based, and everybody uses the same keys, the same public keys at the ephemerizer. Here, the ephemerizer is going to know two keys for every customer: the current public key and the previous public key, and every week or so you'll tell it to forget the previous key and give you a new one. So: two keys for each customer. What you do is, every time you change the key table, you make a snapshot locally, encrypted with the current public key. Meanwhile, in remote storage, you've widely replicated some previous snapshot from a week ago, encrypted with the previous public key. Once you've replicated the current snapshot widely enough that you feel safe, that there are enough copies, you tell the ephemerizer to delete the previous public key, and then all of the old snapshots are gone. So what's wrong with this? Well, suppose you change the public keys every week, so it takes about a week for a file to truly go away. What happens if there's a bug in the file system that's corrupting some of those keys for files that you really care about but don't look at very often? The problem is that you may not notice for months. Once you notice, you can't say, "Oh, I wonder what happened?"
"It must've been when we fired Fred, he maliciously did something," or "when we installed this patch," or something. You can't back up. You can only back up a week, and that's really scary. Whereas with the time-based scheme it's much safer, because as long as everything was working properly when your file was stored and you made enough replicas, no subsequent compromise of the file system can hurt you. But here, everything was great, and six months later there's a bug in the file system and your file is gone. So anyhow, those are three very different problems with different solutions; this just gives you a taste of each of them. And I have time! Oh, I was really rushing because I desperately wanted to get to the other slides, so let's see if I can figure out how to do this while people are... okay. Okay. Alright, so now I'm going to talk about how people build systems that cause the human to misbehave. And it's not that the human is malicious; it's just that people design systems really badly. There's a quote at the end, which I actually wrote in my book, and I hope everyone memorizes it and takes it to heart. So [sighs]: training users to type your username and password.
Of course we're not supposed to share our password with anybody, but Outlook pops up these boxes every once in a while asking me to type my password, and it's not because I did anything different; some server somewhere went down. I'm sure most people are just trained so that when you see a pop-up like that, you type your password in, and any malicious code can pop up these things. There's no way to tell the difference. This next one is kind of interesting: aiding the social-engineering criminals. There are these people who make phone calls all the time saying, "We're from Microsoft, your machine is infected," and I was curious enough to play along with them, and I'm great at playing helpless [laughs], clueless, whatever. I said, "What machine?" "All of them." "Really? How many do I have?" [laughs] [crowd laughter] [applause] So first I said, "How do I know you're from Microsoft?" He gave me a telephone number; I called the telephone number, and he answers and says, "Microsoft IT." [laughs] [crowd laughter] So he said, "I'll show you that your machine is infected." Apparently there's this thing called Event Viewer.
I'm a human; I have no idea what Event Viewer is there for [chuckles]. So he showed me how to bring it up, and it's full of all these warnings and errors; it looks terrible. And of course the warnings and errors are things no human could possibly know anything about, but you open this thing and it's incredibly alarming, so obviously there's something horribly wrong with my machine. So the next thing, of course, he says: "Well, I will fix it for you. Allow me access to your machine remotely." Most people would just go ahead and do this, but I did know enough not to. At that point I said, "Oh, look at the time, I have to pick up my kids at daycare." He said, "No, no, this is important." I said, "Yeah, so are my kids." And he said, "No, no, this is more important." [laughs] [crowd laughter] More important than my kids? Yes it is. [laughs] So anyway. It's common to have to trade off usability versus security, so you would expect to see some sort of graph like this: the more usable it is, the less secure; the more secure it is, the less usable. But what our industry has managed to succeed at is coming in at that point.
[laughs] [crowd laughter] Alright: just minimally secure, minimally usable. [inhales] You know, every site has different rules for usernames and passwords: it has to be at least N characters, or no more than X characters; it must have special characters; it must not have special characters. And there was this great thing on the web, I wish I knew who did it, but it's anonymous, I guess. "Sorry, but your password must contain an uppercase letter, a number, a haiku, a gang sign, a hieroglyph, and the blood of a virgin." [laughs] [crowd laughter] And recently I had to set a password and I got the message, "Your password does not meet our length, complexity, or history rules." It didn't even tell me what the rules are. [laughs] Also, if you forget your password, there should be a way to say, "Tell me what your rules are," and then you might remember what it is; but no, it doesn't let you do that. You have to reset your password, and it won't let you set it to what you had just been using before, which would have been perfectly secure had you not forgotten it. Anyway. And then security questions. Who comes up with these? This was an actual set that I encountered once. Father's middle name: my father doesn't have a middle name. Second-grade teacher's name:
I couldn't remember my second-grade teacher's name when I was in second grade. [crowd laughter] [chuckles] Veterinarian's name: I don't have a pet. Favorite sports team: what's a sport? [crowd laughter] [laughs] And my middle name. Well, luckily I do have a middle name; it's Joy. So I typed Joy, J-O-Y, and it said, "not enough letters." [crowd laughter] [laughs] And then there are all these annoying rules that add nothing to security. "You must change your password at least every N days": that doesn't make it more secure. These sorts of rules actually lower security. So I had a friend in IT and I said, "Why do you do this? It doesn't add to security. Do you like torturing users?" And he said, "Yes, of course, that's the best part of the job, but that's not why we do it. It's because there are these documents with best practices, and if you follow them to the letter, you have a much better defense than to say, well, I looked at those rules and these don't make sense." So I do not want to hear "we need better user training." People say, "Oh, the stupid users, they click on suspicious links." What's a link? What's a suspicious link? [chuckles] So here is the paragraph that I wrote that really kind of tells it right; everybody memorize it.
"Humans are incapable of securely storing really high-quality cryptographic keys, and they have unacceptable speed and accuracy when performing cryptographic operations. They are also large, expensive to maintain, difficult to manage, and they pollute the environment. It is astonishing that these devices continue to be manufactured and deployed, but they are sufficiently pervasive that we must design our systems around their limitations." [laughs] [crowd laughter] So thank you. [applause] So, surprisingly, I have like three minutes. Somebody could ask a question if they wanted, and I'll be around the whole time, so please say hello and [chuckles] ask questions and whatever. Oh, a question, yes? >>Quick question for you: the cross-trust you mentioned at the very beginning; I do know that was set up through different parts of the US government a while ago, so that is being done. On the ephemeral trust, wouldn't it still be a risk for bad actors, especially plural, to start compromising a key and then putting bad information in the network? >>Yeah, I mean, if all of your ephemerizers were bad, then you're in trouble, but hopefully there are enough different ones and so forth. And yeah, the cross thing is very frustrating to me.
My friend was involved in DNSSEC early on, and he got them to put in the up certificate and the cross certificate, and then he stopped going, and nobody could figure out why those were there. So DNSSEC is just up-down. But yeah, people need to leave, so just come up and talk to me. Thank you all. [applause]