Okay, welcome everyone. We're going to get started here. I'm going to welcome up Radia Perlman for her talk.

Thank you. Okay, so this talk is actually not as funny as my usual talks, but if I manage to get done quickly enough, I have some funny slides at the end. At any rate, I'm going to talk about how to build systems where some of the components don't just crash, but start misbehaving. I'll give a few examples. The reason is that I'm not going to give you a cookbook where, if you do A, B, and C, you'll have a system that's resilient even with malicious participants. I'm giving you three different examples to show you the variety of problems and the variety of solutions. There is no cookbook; I just want people to be aware of this issue.

The term for not crashing but behaving badly is a Byzantine failure, out of an old computer science problem known as the Byzantine generals problem. All sorts of things can be subverted by a very small number of malicious participants. And there's a talk that you should Google. You all have the slides, so if you're interested you won't have to copy down the URL. It was about crowdsourcing some sort of puzzle: there were thousands of participants, and ultimately one bad guy managed to make them give up completely. That is astonishing, so read that paper; it's really cool.

But then there are things that work even though all my intuition says there's no way they possibly could. Like Wikipedia: it ought to be full of people ranting about whatever, but instead it's remarkably concise, well written, and accurate. And eBay too. You want to buy something, you search, and you find it from some vendor you never heard of in a country you never heard of. You send the money and you get the widget that you bought. It's amazing.

So I'll talk about three different examples. The first one is trust models for PKI, and I'll explain what that means. What's a PKI? It's a way for me to find out what your key is. I want to talk about trust models where dishonest or confused CAs can't damage the entire world. First I'm going to talk about the three models that are currently deployed and why none of them will work, and then I'll talk about how I think it should work, which is actually quite simple, and I have no idea why they don't do it.

So, a quick review of public keys. There's this thing called a CA that signs a certificate, which is a little message saying, you know, Alice's public key is this number here: the CA puts in the name and the number and signs it. So when Alice wants to talk to Bob, she sends her certificate, and Bob now knows her public key. And Bob sends his certificate, and now they can authenticate and encrypt and do all that good stuff.

So, what people tend to think about: academics worry about the math, the provable security of the algorithms they use, and standards bodies worry about the format of the certificates.
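To make that review concrete, here is a minimal sketch in Python of a CA signing a certificate and a relying party verifying it, using the pyca/cryptography package. The one-line "name|key" encoding is invented for illustration; real certificates are X.509.

    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    def raw(pub):
        # Raw 32-byte encoding of an Ed25519 public key.
        return pub.public_bytes(serialization.Encoding.Raw,
                                serialization.PublicFormat.Raw)

    def issue_cert(ca_priv, name, subject_pub):
        # The CA signs a little message: "this name's key is this number".
        # (Toy format; assumes the name contains no "|".)
        tbs = name.encode() + b"|" + raw(subject_pub)
        return tbs, ca_priv.sign(tbs)

    def verify_cert(ca_pub, cert):
        tbs, sig = cert
        ca_pub.verify(sig, tbs)        # raises InvalidSignature if tampered
        name, _, key = tbs.partition(b"|")
        return name.decode(), key

    ca = Ed25519PrivateKey.generate()
    alice = Ed25519PrivateKey.generate()
    cert = issue_cert(ca, "alice", alice.public_key())
    # Bob is configured with the CA's public key; the certificate tells him
    # Alice's key, which they can then use to authenticate and encrypt.
    print(verify_cert(ca.public_key(), cert))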
Both of those things are important, but people don't really think about the trust model. It's: voila, here's a format for certificates. And no, that's not the end of it. So I've given these various models sort of cute names.

The first model I call the monopoly model. You choose one organization, universally beloved and trusted by absolutely everybody, you embed their public key in absolutely everything, and everybody has to get certificates from them. It's nice and simple. Alice is configured to trust monopolist.org's public key, Bob sends a certificate signed by monopolist.org, and everything's fine. But what's wrong with it? Well, there is no such thing as a universally trusted organization. There's also the problem of monopoly pricing: the more widely deployed it gets, the harder it is to convert, so you're willing to pay more money than it would cost to reconfigure everything. And that one organization can impersonate anybody.

The next model is kind of what's in your browsers, and I call it the oligarchy model. You're configured with a hundred or so trusted public keys, and a server you talk to can present a certificate signed by any of those. It does eliminate monopoly pricing. So Alice is configured to trust any of a large set of CA public keys, and Bob sends a certificate signed by any of them. And what's wrong with it? Well, it's less secure, because now there are hundreds of organizations, and all you have to do is find one employee who can be bribed or threatened or confused into issuing a certificate.

Now, an important enhancement is the ability to have chains of certificates. Bob does not need a certificate signed by something Alice immediately knows; he can present a chain. Say Alice trusts X1, and Bob gives three certificates: one where X1 says alpha is X2's key, another where X2 says beta is X3's key, and a third where X3 says delta is Bob's key. With these three certificates, Alice can find his key.

The next model I'll talk about I call anarchy. Nobody tells you whom to trust. You just meet people, you believe they're trustworthy, you get their public key on a business card or something, and you configure it in. And every time you have a gathering of nerds, like I know at IETF meetings, maybe you have it here, you have a PGP key signing party with some sort of ritual where you say who you are and what your key is. Anyone can sign certificates, and there are public databases where, if you've signed certificates, you can donate them, and if you need to search for a key you can look through these public databases to piece together a chain to the name you want. So for instance, if Alice is configured to trust those two keys, there might be a certificate in there somewhere that says alpha is Bob's key, and maybe a different certificate somewhere in this web that says beta is Bob's key. Somehow Alice has to piece together a chain through this enormous database, and then how can she know whether it's actually trustworthy? So it's the kind of thing you can deploy, and it seems great as long as it's in a small community of all trustworthy people.
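Whether the chain comes from an oligarchy hierarchy or is pieced together out of a web-of-trust database, verifying it is the same mechanical loop: each certificate must check out under the key established by the one before it. A minimal sketch, reusing the invented toy format from the sketch above:

    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey, Ed25519PublicKey)

    def issue(ca_priv, name, subject_pub):
        tbs = name.encode() + b"|" + subject_pub.public_bytes(
            serialization.Encoding.Raw, serialization.PublicFormat.Raw)
        return tbs, ca_priv.sign(tbs)

    def verify_chain(trusted_pub, chain, target_name):
        # Walk the chain; each link is checked with the key from the previous one.
        current, name = trusted_pub, b""
        for tbs, sig in chain:
            current.verify(sig, tbs)       # raises InvalidSignature if forged
            name, _, key = tbs.partition(b"|")
            current = Ed25519PublicKey.from_public_bytes(key)
        if name != target_name.encode():
            raise ValueError("chain does not end at the requested name")
        return current                     # the target's verified public key

    x1, x2, x3, bob = (Ed25519PrivateKey.generate() for _ in range(4))
    chain = [issue(x1, "X2", x2.public_key()),    # X1 says alpha is X2's key
             issue(x2, "X3", x3.public_key()),    # X2 says beta is X3's key
             issue(x3, "Bob", bob.public_key())]  # X3 says delta is Bob's key
    # Alice trusts only X1, yet establishes Bob's key from the three certificates:
    verify_chain(x1.public_key(), chain, "Bob")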
But once you have bad guys who are going to pollute the database with bad stuff, it breaks down. And also just the sheer size of it: with billions of people, each one signing ten certificates, the sheer size of it won't work. And more or less anyone can impersonate anybody; it only takes a few really bad guys.

So now I'm going to get to how I think it should work. There's an important concept, which is that instead of saying a CA is trustworthy or it's not, you say a CA is trusted for certain things. And the policy that makes sense is: the name by which you know me implies whom you trust to certify my key. So if you know me as radiaperlman@emc.com, you would trust a CA associated with EMC to certify my key. If you know me as roadrunner279 at some social network site, you'd trust them to certify the key for roadrunner279. The fact that these identities are all the same carbon-based life form is totally irrelevant.

So in order to make this work we need a hierarchical namespace, and ta-da, we have it: DNS. Each node in the namespace is going to be represented by a CA. So here you have a namespace, and there's a CA associated with each node. And what people tend to think of is: everyone's configured with the root key, the root certifies a.com's key, and a.com certifies its children. But that's not going to be quite good enough, so I'm going to change it a little bit to be what I think is good enough. What's wrong with this model? You still have a monopoly at the root, and the root can impersonate everybody.

So I want to change that with just one little thing. The model that I recommend is that for each arc in the namespace, it's not just the parent that certifies the key of the child, but vice versa: the child also certifies the key of the parent. So that means you don't have to be configured with the root. Here's the whole point of the thing: you can be configured with any key, the CA for your own part of the namespace, say, or even your own key, and be able to navigate anywhere you want. Oh, and if you start at x.a.com, then when you're looking up names in that little subtree, there are fewer CAs to trust. It doesn't matter if the root is completely compromised, because you're not going to go through the root in order to get to guys near you in the namespace.

But another enhancement is cross certification, where any node in the namespace can certify the key of any other node. And we're going to need it for two reasons that I'll show you in pictures. One is so that you don't have to wait for the whole PKI for the whole world to be connected. Instead you could create a little PKI in organization A.com and a little PKI in organization XYZ.com; they cross certify, and then anyone in the A.com namespace can reach anybody in XYZ.com without needing the whole thing. And the other reason you might want to cross link is to add security: you can bypass portions of the hierarchy that you don't trust. So if someone down at the bottom of that red arrow cross certifies to XYZ.com, then even if you don't trust the root, you can get to XYZ.com without having to go all the way up.
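The navigation rule this leads to (go up only as far as needed, and take a cross link only when it lands at an ancestor of the target) is easy to sketch. A minimal sketch in Python; the tree, the node names, and the single cross link are all invented for illustration, and in a real system each hop would be backed by a parent, child, or cross certificate:

    class CA:
        def __init__(self, name, parent=None):
            self.name, self.parent, self.cross = name, parent, []

        def ancestors(self):
            # The chain: self, parent, grandparent, ..., root.
            node, chain = self, []
            while node is not None:
                chain.append(node)
                node = node.parent
            return chain

    def trust_path(start, target):
        # Go up from `start` until reaching an ancestor of `target`, or a cross
        # link that lands at one. Cross links are never followed "to explore".
        target_anc = target.ancestors()
        up = []
        for node in start.ancestors():
            up.append(node)
            top = next((c for c in [node] + node.cross if c in target_anc), None)
            if top is not None:
                if top is not node:
                    up.append(top)                      # the cross-link hop
                down = target_anc[:target_anc.index(top)][::-1]
                return [n.name for n in up + down]
        return None                                     # refuse rather than explore

    root = CA("root")
    acom, xyz = CA("a.com", root), CA("xyz.com", root)
    x, y = CA("x.a.com", acom), CA("y.xyz.com", xyz)
    acom.cross.append(xyz)    # A.com and XYZ.com cross certify
    print(trust_path(x, y))   # up to a.com, across, down: the root is never used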
So everybody in that little subtree there can get to the names in XYZ.com without having to go through the root. So the navigation rule is: you start somewhere, and you go up as much as necessary to either get to a common ancestor of the name, or to a cross link that gets you to an ancestor of the name. Now it looks kind of like I've just done the anarchy thing, because you've got cross links where anyone can certify anybody else. But the rule is: you don't follow a cross link to explore where it leads. The only reason you would follow a cross link is if it gets you to an ancestor of the name that you want to look up.

So the advantage of this is that if you deploy it in your organization, the security of presumably the most important stuff to you, which is authentication among your own resources, never leaves your organization: the trust paths go only through CAs controlled by you. It doesn't matter if the whole rest of the world is compromised; it won't affect you. And malicious CAs can be bypassed and the damage contained.

Okay. So the next example I'll talk about is network routing. It happens to be what my thesis was: how to build a network that keeps working even if some of the switches are malicious. So, a traditional switch looks at a packet, and it has some sort of forwarding table which tells it which direction to send the packet. Now, how is the forwarding table computed? Well, often through a distributed algorithm where all the switches exchange information and from that compute their own tables.

So, an example of a way of doing it is called link state routing. Here I have a picture of a network: there are seven nodes, and there's a link between B and C with the number two, which means they are connected and the cost of the link is two. So that's just a simple little network. And each one of these nodes is responsible for creating what's called a link state packet, which says: I am A, I have a neighbor B at a cost of six, and D at a cost of two. And this piece of information is then forwarded to everybody else. So everybody will have that database at the bottom, those seven link state packets, which gives you complete information about the graph, and you can compute paths and all that. So that's a common routing algorithm, and the reason I'm telling you this is that I'm basing the resilient routing on it.

So, what can a malicious switch do? It can give false information in the routing protocol. It can claim to be connected to someone it's not. It could flood the network with garbage data. It could forward data in random directions, resetting the hop counts so things look new. Or it could do everything perfectly, but it doesn't like you, so it throws away your packets.
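Since the resilient design that follows builds on link state routing, here is a minimal Python sketch of the ordinary version just described. The A-B, A-D, and B-C costs come from the talk; the fourth node and the C-D link are invented to close a small graph (the picture in the talk has seven nodes):

    import heapq

    # The flooded database: each link state packet says "I am X; my neighbors
    # and costs are ...". Every node ends up holding the same database.
    lsp_db = {
        "A": {"B": 6, "D": 2},
        "B": {"A": 6, "C": 2},
        "C": {"B": 2, "D": 5},
        "D": {"A": 2, "C": 5},
    }

    def shortest_paths(db, source):
        # Dijkstra over the database; returns distance and first hop per node.
        # The first hops are exactly the forwarding table.
        dist, first_hop = {source: 0}, {}
        pq = [(0, source, None)]
        while pq:
            d, node, hop = heapq.heappop(pq)
            if d > dist.get(node, float("inf")):
                continue                    # stale queue entry
            if hop is not None:
                first_hop[node] = hop
            for nbr, cost in db.get(node, {}).items():
                nd = d + cost
                if nd < dist.get(nbr, float("inf")):
                    dist[nbr] = nd
                    heapq.heappush(pq, (nd, nbr, hop if hop else nbr))
        return dist, first_hop

    print(shortest_paths(lsp_db, "A"))      # e.g. A reaches C via D at cost 7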
So there are all sorts of traditional approaches that people have thought about. One is a reputation system, where you try to decide who the bad guys are. I think that would be really hard, especially if somebody doesn't like you but is performing well for everybody else: how would that work? And then people say, well, okay, if you have bad guys, you have to do some sort of troubleshooting to find them. But no: if a bad switch notices the troubleshooting going on, it can start behaving well. Also, some of the routing protocols try to do things to enforce correctness. Like, if A claims it's connected to B, you won't believe the link unless B also claims to be connected to A. And then there's secure BGP, where you take this unbelievably fragile, configuration-intensive, inefficient protocol and add signing for every single hop. But even if these things work, what can they do? At most they can make sure that the routing protocol is behaving properly. And you don't care if the routing protocol is working; you care if the data packets get delivered. So anyhow, these things have nothing to do with what I did for the thesis.

So I proposed this as a thesis topic: I wanted to guarantee that A and B can talk, provided there's at least one honest path connecting them, no matter how malicious every switch other than those on that path is. And they agreed: oh yeah, that's an important problem, it's difficult, and yeah, that would be worth a thesis. And then I thought about it and said, oh, how embarrassing, I know how to do it.

There's a protocol called flooding, which is that whenever you receive a packet, you send it to everyone except whoever you got it from, and presumably you have a hop count so things don't loop forever. And this works: packets from A to B will get there, as long as there's one non-faulty path, if there's infinite bandwidth. And whoops, there aren't infinite resources. So it just became a resource allocation problem.

The finite resources in the switches are the amount of compute power, the memory, and the bandwidth. Well, compute we wave away: we say you can engineer a switch that has enough compute power to deal with the speed of the incoming links. So then it's just memory in the switches and bandwidth on the links. And what you do is you just reserve a buffer for each source, and for bandwidth you round-robin through all the sources, so every source has a chance to be forwarded on every link. And the source will have to sign its packets so that someone can't inject packets and use up your buffer, and you also have to put a sequence number on them so that someone can't replay your old packets and starve out your new packets.

So this is all extremely simple, and I'm not sure whether I could have gotten a thesis right there, because it's sort of a little too simple. But it's actually very good for flooding: if you want to send something that gets to everybody, it's useful. And it turns out that for actual link state routing, this is pretty much how they do it. I sort of noticed that if you just remove all the cryptography from it, it's a perfectly reasonable way of flooding routing information.

There are two things you would want to use this robust flooding for. One is so that you don't have to configure everybody with everybody else's public key. Instead you have sort of a trusted node, and everybody knows its public key, and what it floods is: hey, this is everybody else's public key. So it reduces it from an N-squared configuration problem. And the other thing you use the flooding for is the link state packets.
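A minimal sketch of that resource allocation: one reserved buffer slot per source, signature and sequence-number checks on arrival, and round-robin service whenever a link can take a packet. The class names are invented and the signature check is stubbed out; this shows the fairness idea, not a complete switch:

    from dataclasses import dataclass
    from itertools import cycle

    @dataclass
    class Packet:
        source: str
        seq: int          # monotonically increasing, set by the source
        payload: bytes
        sig: bytes        # source's signature over (source, seq, payload)

    class RobustFloodSwitch:
        # One buffer per source plus round-robin transmission: no source,
        # honest or malicious, can starve another of memory or bandwidth.
        def __init__(self, known_sources):
            self.buf = {s: None for s in known_sources}  # one reserved slot each
            self.rr = cycle(known_sources)               # round-robin pointer

        def signature_ok(self, pkt):
            return True   # stub: verify pkt.sig against the source's public key

        def receive(self, pkt):
            if pkt.source not in self.buf or not self.signature_ok(pkt):
                return    # unknown source or forgery: it can't use our memory
            held = self.buf[pkt.source]
            if held is not None and pkt.seq <= held.seq:
                return    # a replayed old packet can't evict newer data
            self.buf[pkt.source] = pkt

        def next_to_send(self):
            # Called whenever an outgoing link is free: serve sources in turn.
            for _ in range(len(self.buf)):
                s = next(self.rr)
                if self.buf[s] is not None:
                    return self.buf[s]
            return None

    sw = RobustFloodSwitch(["A", "B", "C"])
    sw.receive(Packet("A", seq=1, payload=b"LSP from A", sig=b""))
    print(sw.next_to_send())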
So we now know how to do robust flooding in a way that's actually reasonably practical. But it would not be very efficient to send every data packet to everybody; for data you want unicast instead. So what you do is use link state routing, with everybody flooding these link state packets robustly. The problem, though, is that even though everybody now has a link state database and can compute paths, just because a path should be there based on that link state information doesn't mean it actually will work, because somebody on the path may not like you and may throw away your packets.

Also, in traditional network routing, I forward the packet to you and then I have to trust you to make a good decision, and I thought it would be too hard to make sure everybody made the same decision. So I just said: okay, let the source have its fate in its own hands. The source chooses a path, digitally signs something saying "this is the path that I want," and the routers along the path remember that and reserve resources for that flow. I'm not going to go into all the details; you can find papers about all these things. This is just to give you a feeling, a flavor, for different problems.

And then people say, well, how can you choose a good path? I don't know; this is where I waved my hands. If you have had a path to some destination and it worked, then you feel good about those routers. If you have a path to someone and it doesn't work, because you're not getting acknowledgments, then you're suspicious of the guys on the path. You don't know which one might be the bad guy, so you just try to find a different path, and hopefully you'll eventually succeed. So it's not terribly scalable, since every path requires state and it requires the source seeing the entire path, and the way the Internet works is you partition it hierarchically so that nobody outside of a region needs to know the details inside of the region. So there's a sort of more recent paper that I co-wrote on how to do this all in a much more scalable way.

So, okay. So now, oh, how am I doing on slides? I might actually have a chance for the funny ones. Okay. So the third example is assured delete: making data in cloud storage reliably go away once it's supposed to be gone. One way to do it is based on expiration time.
So when you create a file, you're going to say in the metadata what the expiration date is, and after that, no matter how many copies the cloud has made of it, once it expires it should be impossible to recover. Even though backups still exist, you know, on mag tape disconnected from the network, if you find one of those it won't help. So obviously what we're going to do is encrypt the data and then throw away the keys. But to avoid prematurely losing the keys, you're going to need to make lots of copies of the keys, and once you make lots of copies of something, you can never be guaranteed that you'll get rid of all the copies. So how are we going to solve this?

The first concept, to make it a lot more scalable, is that you decide that all files with the same expiration date can be encrypted with the same key. So that means that if you have a granularity of a day, rather than a microsecond, for expiring, and you support expiration dates up to 30 years out, that's about 10,000 keys. So imagine that the cloud or the file system keeps a special piece of hardware, which it does not make copies of, with those 10,000 keys, one for each expiration date. And a file will have its expiration date in the metadata, which tells you what key it's encrypted with. Once the date expires, you forget that key. So it's nice and simple. And for the cryptographers who would wince, saying, oh, you're encrypting everything with the same key: no, you do an extra level of indirection, where you encrypt the file with a random key and encrypt that key with the per-expiration-date key. But still, there are only about 10,000 master keys.

But the question is, how do you make a backup of the master keys? Because I said I didn't want you to make a lot of copies of those, and it would be pretty bad if they got lost. So imagine a service, and I call it an ephemerizer because it makes things ephemeral; you know, a word that I made up. An ephemerizer advertises public keys with expiration dates, it throws away each private key when it expires, and until it expires it will use the private key to decrypt things for you. So the ephemerizer publicly posts: these are 10,000 keys, and here are the expiration dates that go with them. And the way the storage system uses it: you're not going to talk to ephemerizers unless you lose that database of master keys. That's the only time you'll talk to one. So to create a backup of the master keys, you take your January 7th key, encrypt it with the ephemerizer's January 7th key, and do this for each of your 10,000 keys; that's what your backup looks like. (And you'll encrypt it one more time, but that's the basic idea.)

So first I'm going to talk about how wonderful this is, and then I'll apologize, because I know if you think about it for 20 seconds you'll be annoyed, and then I'll explain why you shouldn't be annoyed. You only talk to the ephemerizer if your hardware with the master keys fails. So maybe once every three or four years you'll talk to the ephemerizer. And the ephemerizer can support hundreds of millions of customers without even knowing they're its customers.
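A minimal sketch of the two-level scheme: per-expiration-date master keys and a random per-file key wrapped under the right date key. Fernet from the pyca/cryptography package stands in for whatever the special hardware would really do, and all the class and field names are invented:

    from datetime import date
    from cryptography.fernet import Fernet

    class DateKeyedStore:
        # Forgetting one date's master key makes every file expiring that
        # day unrecoverable, no matter how many copies of the data exist.
        def __init__(self):
            self.master = {}    # date -> master key; lives only in the HSM

        def store(self, plaintext, expires):
            mk = self.master.setdefault(expires, Fernet.generate_key())
            file_key = Fernet.generate_key()          # per-file random key
            return {"expires": expires,
                    "wrapped_key": Fernet(mk).encrypt(file_key),
                    "data": Fernet(file_key).encrypt(plaintext)}

        def read(self, rec):
            mk = self.master[rec["expires"]]          # KeyError once expired
            file_key = Fernet(mk).decrypt(rec["wrapped_key"])
            return Fernet(file_key).decrypt(rec["data"])

        def expire(self, today):
            for d in [d for d in self.master if d <= today]:
                del self.master[d]                    # the assured delete

    store = DateKeyedStore()
    rec = store.store(b"quarterly report", date(2031, 1, 7))
    print(store.read(rec))
    store.expire(date(2031, 1, 8))
    # store.read(rec) now raises KeyError: the file is cryptographically gone.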
The only time you talk to it is when you need it to help you recover the backup of your keys. And it's really scalable: 10,000 keys works no matter how many customers there are.

So now, yes, you'll probably be a bit annoyed, because haven't I just pushed the problem onto the ephemerizer? It has to reliably keep each private key until it expires, and then it has to throw it away. So there are two ways the ephemerizer can fail. One is that it forgets a private key prematurely, and the other is that it doesn't forget a private key when it's supposed to. Only think about one of them at a time; we can solve both of them.

So let's worry about prematurely losing the key. I'm going to design it so that the ephemerizer is allowed to be flaky. I want an ephemerizer never to make copies of its keys: it generates the key pairs in tamper-proof hardware and does the decryption there, so there's only one copy of each private key. And then you'd say, oh gee, what happens if it fails? Then all of its hundreds of millions of customers will have an unpleasant surprise if they ever need their backups restored.

So how many copies of its keys do you think the ephemerizer should keep in order for you to feel safe? Let's pick a number and say 20. Well, the trick is that it's easier to design a system where there are zero extra copies of a key than to limit it to exactly 20. So instead of having one really stable ephemerizer with 20 copies of its keys, you can get exactly the same robustness, actually a little better, by using 20 independent ephemerizers with independent keys. The independent ephemerizers can be different organizations, on different continents, in different countries; they don't know about each other and they don't coordinate with each other. So when you're making a backup of your master keys, instead of just encrypting each one with the first ephemerizer's key for that date, you also encrypt it with the second ephemerizer's key, and so on. So if you use 20 ephemerizers, you take what was a trivial amount of storage and multiply it by 20, and you still have a trivial amount.

So what if an ephemerizer doesn't destroy the private key when it should? Well, then you could use a quorum scheme. Instead of taking exactly the key and encrypting it 20 times, you break it into 20 pieces such that it requires, say, three of the ephemerizers to reconstruct it. So it's very simple, with almost zero performance overhead over a traditional system.

So now, because we're going to be short on time, I just want to say one other thing, which is that there's this really cool protocol for asking the ephemerizer to decrypt. I want the ephemerizer to be a relatively untrusted third party. So what you'd like to do is send it the encrypted master key for January 7th and say, please decrypt it, and it applies its private key. But then it would see your master key for that date, and that's not very good. So instead, I do what I call blind decryption: there's a way of taking this encrypted bundle, which is your January 7th key encrypted with the ephemerizer's key, encrypting it another time, and telling the ephemerizer to use its private key on that bundle of bits. That gets rid of the ephemerizer's public-key encryption, but the result is still encrypted with your blinding, so the ephemerizer gains no information from it.
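The talk doesn't spell out the math, so the sketch below uses classic RSA blinding, which produces exactly this behavior; whether it matches the protocol on the slides is an assumption. Toy-sized numbers, never to be used for real:

    import random

    # Toy ephemerizer RSA key for one expiration date.
    p, q, e = 1009, 1013, 65537
    n = p * q
    d = pow(e, -1, (p - 1) * (q - 1))   # the private exponent, kept in the HSM

    m = 424242                          # the customer's master key, as a number
    c = pow(m, e, n)                    # the stored, encrypted backup of it

    # Client side: blind the ciphertext before sending it.
    r = random.randrange(2, n)
    while True:
        try:
            r_inv = pow(r, -1, n)       # need gcd(r, n) == 1
            break
        except ValueError:
            r = random.randrange(2, n)
    blinded = (c * pow(r, e, n)) % n    # c' = c * r^e mod n

    # Ephemerizer side: applies d, sees only random-looking bits.
    response = pow(blinded, d, n)       # (m^e * r^e)^d = m * r (mod n)

    # Client side: strip the blinding factor.
    recovered = (response * r_inv) % n
    assert recovered == m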
So this is a super lightweight protocol. It's less work than a single SSL connection in terms of computation, and it's very few bits, just the size of an RSA modulus. And I purposely put the details in the slides, so in case you actually care, you'll have them.

So the general philosophy is this: you can achieve robustness by using lots of flaky components, where the failures are truly independent because they're different organizations, with independent clocks, and so forth.

Now, in contrast: when I would try to tell people about this, they would say, oh, I don't know, I don't want to decide the expiration date when I create the file; I want to, like, pick a file and delete it. And I was saying, oh, but that's ugly. Look at this: isn't this cute and pretty and scalable? And eventually I got tired of explaining why you couldn't do that, and I figured out how to do it. And it's actually scalable and all that, but it's a horrible idea, because it has sort of an interesting failure mode. So I'll tell you about that.

So the concept is that every file will have its own key. If you have a million files in your file system, there'll be a data structure with a million keys in it, and every file, in its metadata, will have a key ID that points to one of those keys. So that's nice and simple. When you want to do an assured delete of a file, you remove that key. When you make a new file, you add a key. So this is all nice and straightforward, but how do you make a backup of that?

Now, with the first use of ephemerizers, the ephemerizer doesn't need to know about you at all. It's time-based, and everybody uses the same public keys of the ephemerizer. Here, the ephemerizer is going to know two keys for every customer: the current public key and the previous public key. And every week or so, you'll tell it to forget the previous key and give you a new one. So, two keys per customer. And what you do is, every time you change this key table, you make a snapshot locally, encrypted with the current public key. Meanwhile, in remote storage, you've widely replicated some previous snapshot from a week ago, encrypted with the previous public key. Once you've replicated the current snapshot widely enough that you feel safe there are enough copies, you tell the ephemerizer to forget the previous key, and then all of the old snapshots are gone.

So what's wrong with this? Well, suppose you change the keys every week, so it takes about a week for a file to truly go away. What happens if there's a bug in the file system that's corrupting some of those keys, for files that you really care about but don't look at very often? The problem is that you may not notice for months. And once you notice, you can't say, I wonder what happened; oh, it must have been when we fired Fred and he maliciously did something, or when we installed this patch, and then back up to before that. You can only back up a week. And that's really scary. Whereas with the time-based thing, it's much safer, because as long as everything was working properly when you got your file stored and made enough replicas, no subsequent compromise of the file system can hurt you. But here, everything was great, and six months later there's a bug in the file system, and your file is gone.
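For concreteness, a minimal sketch of the per-file-key variant and its one-week rollback window. Everything here is invented scaffolding; the Ephemerizer class just models a service holding exactly two sealing keys per customer:

    import os
    from cryptography.fernet import Fernet

    class Ephemerizer:
        # Knows two keys per customer: current and previous.
        def __init__(self):
            self.current, self.previous = Fernet.generate_key(), None

        def rotate(self):
            # Forget the previous key: every snapshot sealed under it is now
            # permanently undecryptable, wherever copies of it may live.
            self.previous, self.current = self.current, Fernet.generate_key()

    class PerFileKeyStore:
        def __init__(self, eph):
            self.eph = eph
            self.ftable = {}              # file id -> that file's key
            self.local_snapshot = None    # sealed under eph.current
            self.remote_snapshots = []    # widely replicated older snapshots

        def _snapshot(self):
            blob = repr(sorted(self.ftable.items())).encode()  # toy serialization
            self.local_snapshot = Fernet(self.eph.current).encrypt(blob)

        def create(self, fid):
            self.ftable[fid] = os.urandom(16)
            self._snapshot()

        def assured_delete(self, fid):
            del self.ftable[fid]          # final only after the next rotation
            self._snapshot()

        def weekly_rotation(self):
            # The current snapshot is replicated widely enough; rotating makes
            # every older snapshot (which may still hold deleted keys) useless.
            self.remote_snapshots.append(self.local_snapshot)
            self.eph.rotate()

    # The scary failure mode: if a bug silently corrupts ftable entries you
    # never read, then one rotation later no snapshot predates the corruption.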
So, anyhow, those are three very different problems with different solutions, and this just gives you a taste of each of them. And I have time! I was really rushing through because I desperately wanted to get to the other slides. So let's see if I can figure out how to do this while people are... Okay. All right.

So now I'm going to talk about how people build systems that cause the human to misbehave. And it's not that the human is malicious; it's that people design systems really badly. And there's a quote at the end, which I actually wrote in my book, and I hope everyone memorizes it and takes it to heart.

So: training users to type your username and password. Of course, we're not supposed to share our password with anybody, but Outlook pops up these boxes every once in a while asking me to type my password. And it's not because I did anything different; some server somewhere went down. And I'm sure most people are just trained: when you see a pop-up like that, you type your password in. And any malicious code can pop up these things. There's no way to tell the difference.

This one is kind of interesting: abetting the social engineering criminals. So, there are these people that make phone calls all the time saying, we're from Microsoft, your machine is infected. And I was curious enough to play along with them. And I'm great at playing the helpless, clueless user. I said, what machine? All of them. Really? How many do I have? Well, first I said, how do I know you're from Microsoft? He gave me a telephone number. I called the telephone number, and he answers and says, "Microsoft IT." And so he said, I'll show you that your machine is infected. So, apparently, there's this thing called Event Viewer. I'm a human; I have no idea what Event Viewer is there for. And so he showed me how to bring it up, and it's full of all these warnings and errors. It looks terrible. And of course, the warnings and errors are things that no human could possibly know what they're about, but you open this thing and it's incredibly alarming. So obviously there's something horribly wrong with my machine. So the next thing, of course, he says: well, I will fix it for you; allow me access to your machine remotely. And most people would just go ahead and do this. But of course, I knew enough not to do that. At that point, I said, oh, look at the time, I have to pick up my kids at daycare. He said, no, no, this is important. I said, yeah, so are my kids. And he said, no, no, this is more important. More important than my kids? Yes, it is. So, anyway.

So, it's common to have to trade off usability versus security, so you would expect to see some sort of graph like this: the more usable it is, the less secure, and the more secure it is, the less usable. But what our industry has managed to succeed at is coming in at that point that's just minimally secure and minimally usable. You know, every site has different rules for usernames and passwords. It has to be at least N characters, or no more than X characters. It must have special characters; it must not have special characters. And there was this great thing on the web that I wish I knew who did, but it's anonymous, I guess.
"Sorry, but your password must contain an uppercase letter, a number, a haiku, a gang sign, a hieroglyph, and the blood of a virgin." And recently, I had to set a password, and I got the message: your password does not meet our length, complexity, or history rules. It didn't even tell me what the rules were. And also, if you forget your password, there should be a way to say: tell me what your rules are. Then you might remember what it is. It doesn't let you do that. You have to reset your password, and it won't let you set it to what you had just been using, which would have been perfectly secure had you not forgotten it. Anyway.

And then security questions. Who comes up with these? This was an actual set that I encountered once. Father's middle name: my father doesn't have a middle name. Second grade teacher's name: I couldn't remember my second grade teacher's name when I was in second grade. Veterinarian's name: I don't have a pet. Favorite sports team: what's a sport? And my middle name. Well, luckily, I do have a middle name. It's Joy. So I typed Joy, J-O-Y, and it said: not enough letters.

And then there are all these annoying rules that add nothing to security. You must change your password at least every N days: that doesn't make it more secure. These sorts of rules actually lower security. So I had a friend in IT, and I said, why do you do this? You know it doesn't add to security. Do you just like torturing users? And he said, yes, of course, that's the best part of the job. But that's not why we do it. It's because there are these documents with best practices, and if you follow them to the letter, you have a much better defense than saying, well, I looked at those rules and these ones don't make sense.

So I do not want to hear "we need better user training." People say, oh, the stupid users, they click on suspicious links. Like, what's a link? What's a suspicious link? So here is the paragraph that I wrote that really tells it right; everybody memorize it. "Humans are incapable of securely storing high-quality cryptographic keys, and they have unacceptable speed and accuracy when performing cryptographic operations. They are also large, expensive to maintain, difficult to manage, and they pollute the environment. It is astonishing that these devices continue to be manufactured and deployed, but they are sufficiently pervasive that we must design our systems around their limitations." So thank you.

So, surprisingly, I have like three minutes. I suppose somebody could ask a question if they wanted. And I'll be around the whole time, so please say hello, ask questions, whatever. There are no goons here. Oh, a question. Yes.

Quick question for you. The cross trust you mentioned at the very beginning: I do know that that was set up through different parts of the U.S. government a while ago, so that is being done. On the ephemerizer trust, I don't see why that wouldn't still be a risk for bad actors, especially plural, to start compromising a key and then putting bad information in the network.

Yeah. I mean, if all of your ephemerizers were bad, then you're in trouble. But hopefully there are enough different ones and so forth. And yeah, the cross thing, it's very frustrating to me. My friend was involved in DNSSEC early on.
And he got them to put in the up certificate and the cross certificate. And then he stopped going, and nobody could figure out what they were for. So DNSSEC is just up-down. But yeah, we're going to have to stop; people need to leave. So just come up and talk to me. Thank you all.