Alright everybody, for Abusing Bleeding Edge Web Standards for AppSec Glory we have Bryant Zadegan and Ryan Lester. Please give them a big hand. [Applause] >> Alright, so everyone please take a moment and visit our website. [cough] I know it looks scary, but trust me, it's part of a hands-on component of our session. >> So >> So >> So actually, hold on for a second, Ryan. Why did you go with this site? >> Well, the Internet was all out of domains; I swear this one was the last one they had. >> So, something that we noticed in the course of setting this up: if you're using an Android phone (actually, if you're using any phone at all, congratulations on your bravery, we applaud you), but if you use an Android phone with the configuration on this domain, for whatever reason you might get a cert error. All other phones seem not to have a problem with it, so there you go. In case you decide not to take the risk, this is what you'll see. So, moving on. [Laughter] [Applause] >> That was a much better reception than Black Hat; they didn't get the joke at all. [Laughter] >> So welcome to Abusing Bleeding Edge Web Standards for AppSec Glory. My name is Bryant, and >> I'm Ryan >> and that's Ryan, and we're not used to >> Oh, I'm sorry, I forgot to >> Right, we're not used to these mics; we're still adjusting from lapel mics. >> If you didn't hear, I'm Ryan. >> So, just some background on me: I do a lot of appsec-related stuff. I mentor security startups, sometimes.
I also "mentor" others, mentor in air quotes, in application security on occasion. Once upon a time I paid a friend a dollar to make Steve Ballmer dance on stage, but just once. >> And I'm the CEO of an end-to-end encrypted communication startup called Cyph, which is actually the origin of a lot of the research that went into this talk. I'm also more or less the chief architect and primary developer of Cyph. Before Cyph I was a software engineer at a rocket factory called SpaceX, and at one point I was sued by Napster for alleged trademark infringement. >> Are you allowed to talk about that? >> Not in detail, no. >> OK, we'll just leave it at that then. >> Settlement lawsuit stuff. >> Fair enough. So, in talking about bleeding edge web standards, I want to clarify some of the key words here. You'll notice the first word in the title of the talk is "abusing." Most people hear the word "abusing" and assume hacking in a very breakery way, but we're speaking not just in terms of the classical hacking that you and I might be used to, but also hacking from a developer standpoint: coming up with novel solutions to problems that are completely unanticipated, very much developer hacking, as well as the classical hacking that many breakers in this room might be used to. >> So we're going to focus on three web standards that you hopefully know about, but I'll do a show of hands as we go along just to see where we are.
First of all, we have Subresource Integrity. A quick show of hands: who in the room who is familiar with browser-side standards might know what SRI is? >> OK, not bad. So we're going to talk specifically about something we're calling SRI fallback. You can probably anticipate what that means; we'll go through it as we walk through it. We're also going to touch on Content Security Policy. Show of hands? Nice, OK, that's good. >> So we're going to talk about something that we're calling CSP meta hardening. We're not really going to teach what Content Security Policy is, but we will talk about how you can use it in some edge cases that a number of startups might be encountering, things like that. And this last one is probably going to be about half of the talk: we're focusing on HTTP Public Key Pinning. >> Show of hands. Awesome. Now, the thing we're going to focus on here is something we're calling HPKP suicide. So why? >> Here's the thing: new standards are being drafted left and right, and if anyone has been keeping track of the creation of web standards, browser-side standards, things like that, you'll notice that the pace has seen a bit of an uptick. There are quite a few standards that most people don't actually know about. Has anyone heard of CAA? [Silent pause] >> There's a hand back there. >> There are like three hands in the whole room for a live standard in production that's currently being widely used.
OK, so now that we've established that: it's not just the standards being created at a rapid pace that are causing unforeseen complications; it's also that the implementations, which are happening just as rapidly, can be a bit screwy because of the pace at which they're being developed. So when you start messing around with standards, and especially obscure specs in those standards and obscure use cases, you can probably find extremely novel ways of using these standards, or extremely novel ways of breaking them. >> In the course of preparing this talk, Ryan and I hit what, two bounties on Chrome, completely by accident. We didn't write a fuzzer or anything; we just scored a quick 2500 bucks in the course of making this talk. So, diving into SRI, now we're actually going to get into the meat of it. A lot of this stuff should be pretty easy to use; it's once we get to HPKP that things become a bit risky for everybody. >> Yeah, sure. So SRI, Subresource Integrity, is one of the standards Bryant was just discussing. It's just a way for you to assure the integrity of resources hosted outside of your zone of trust. In the example here we've got jQuery loaded from their CDN. We'd also be using a fallback source, if the spec actually provided something like that.
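The slide's example isn't captured in the transcript; a minimal sketch of the idea, jQuery loaded from a CDN with an SRI check, looks something like this (the version and digest below are placeholders, not a real release hash):

```html
<!-- Sketch only: the integrity value must be the real base64 digest of the
     exact file being served, and crossorigin is required for cross-origin SRI. -->
<script src="https://code.jquery.com/jquery-3.1.0.min.js"
        integrity="sha384-PLACEHOLDERxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
        crossorigin="anonymous"></script>
```

If the fetched file doesn't hash to the stated digest, the browser refuses to execute it.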
The spec mentions the possibility and gives general guidance on how you might implement a fallback yourself, but it doesn't give you a direct way to use one out of the box, so we decided to implement it for you. [Cough] >> We have a script called fallback-src. That's what we call it, right? Anyway, it's a fallback source script: you just add this x-sri-fallback attribute to any of your scripts or style sheets, and in the event that the primary source fails to validate, the new one will be injected and validated against the same hashes. So we've got a >> Quick >> Sorry >> I think we're actually skipping this demo, are we? >> Oh yeah, we're skipping this one for the sake of time, but it's there if you want to see it. And that's the source code. >> At the very end, on the very last slide, we're going to have one link that aggregates all the links we're putting on screen today, so if you don't want to worry about catching the pictures in time, wait until the last slide, take a picture of that, and you'll get everything. But while we're here, we'll very quickly show the quick two grand that we knocked off Chrome while we were in the midst of creating the talk. And yeah, I mentioned very early on that in the course of testing novel web standards that were just introduced, or very recently introduced, where maybe there wasn't enough time for testing, whichever,
you can very quickly score some quick and easy cash. The one we're going to talk about here is a case where, if somebody managed to hack around with SRI in a same-origin use case in an older version of Chrome, you actually could have gotten a script to run the second time around, and I'll let Ryan take on the pre-scripted demo. >> Sure. Like Bryant said, we found this by accident. This was supposed to be the demo we were going to use for an early version of this talk, just demoing SRI. >> By the way, for people who think we can't read it: we're going to zoom in on the key parts as we go through. >> Right, so we've got just two buttons here: one that injects the script with a valid hash, one that does an invalid one. Click on the invalid hash and you can see there's an SRI error there as expected, so loading the script failed. Then you click the button a second time, and it works when it shouldn't. >> So, if you wanted a quick use case for how you could have exploited that flaw: if you happened to compromise a site, especially one with infinite page loads that would be constantly reloading the same script, or maybe the same XSS payload, that would be one potential area where you could exploit it. Google ended up marking that one as a high and giving a quick two thousand dollar payout.
So >> Now that we've talked about SRI and shown off the script, how you can implement a fallback so you have some way of loading backup content in the event that the main resource you're loading offsite can't load, let's move on to CSP and how you can combine some interesting properties in CSP to do novel things. So, we've got something we're calling CSP meta hardening. What this is: you're starting from a semi-strict header, meaning a header that doesn't have all the rules you want defined and has a lot of leeway to do things that might otherwise be considered dangerous, and what this allows you to do is load trusted complex logic first. That being said, because this trick relies on meta headers, there are some directives that don't work: frame-ancestors, report-uri, and sandbox won't work in meta headers. There are others as well, but we'll get into them in a bit. We do have a demo for this, also pre-scripted, but again, if you visit the URLs you can see these demos in practice and see how they actually work: just right-click, open dev tools, watch how it works in the console, watch what elements are introduced, things like that. I'll let Ryan take this one on. >> Sure. It's the same general format as the UI of the previous demo. We've got three buttons here, and it shows the current Content Security Policy. Running inline code should work, because we have unsafe-inline, and you can see it did. Non-inline code, code from the current origin, also works. And then we have a third button to harden CSP via a meta element: when you click that, it actually injects into the DOM an http-equiv meta header with a more restricted CSP. You can see unsafe-inline is no longer in the CSP, and running inline code now generates a CSP error. >> And we'll get into some considerations as you're setting this up for your own sites, and we'll also talk about potential use cases. First of all, if you want to use this for your own sites: let's say you've put a lot of development effort into an MVP. You're your typical Silicon Valley shop, security maybe wasn't at the front of your mind, and you want to implement this; let's say you're doing your typical single-page app. These are some considerations to make this work perfectly; otherwise there are some attacks that would defeat your entire use of CSP in the first place.
So when we say static content only: I hope everybody in this room is familiar with reflected XSS. If the initial response from the web server contains content reflected from the user's request, then that content will execute during your relaxed Content Security Policy, which defeats the point. There is another potential attack we can go through after the talk; for the sake of time, we'll just say: blindly trust us when we say include this header, the XSS protection header in blocking mode, in the headers you're sending to the browser. You definitely want this, because there are some novel attacks that would work if you don't include it but implement meta hardening in the first place. So, in terms of use cases: I mentioned that if you have a semi-recent application, you've put a lot of time into implementing it, and you haven't really taken security into account, think about your typical really complex single-page applications. Anything like Google Apps, and I'm just winging it at this point based on the UIs, but any application that does a lot of preloading of content, like Angular-based apps and things of the sort, is where this can work really well for you. That being said, it should be used as a stopgap to getting full CSP on your site.
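The "blindly trust us" header mentioned above is the browser XSS filter in blocking mode, sent alongside your other response headers:

```http
X-XSS-Protection: 1; mode=block
```

With `mode=block`, the browser stops rendering the page entirely when its reflected-XSS filter triggers, rather than trying to sanitize it in place.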
Reason being: so long as you have relaxed CSP headers, you run the risk of anybody getting content you haven't authorized to execute before you harden your headers with the meta tags. So this is the idea: your application's static content loads first, and while you have the relaxed headers it's all content that you trust; your application does all of its preloading into the DOM and gets set up. Once it's done being set up, at that point you inject the meta headers into the DOM, the browser takes up those rules and hardens your Content Security Policy, and then you can start taking in dynamic content, be it from other web servers or what have you. You can only do this one way: you cannot relax policies this way, you can only strengthen them, so you cannot introduce meta headers that then relax Content Security Policy down the road. So. Now we get to the meat of the talk. This is where all the cool stuff happens, so anybody looking for really breakery stuff: we've got a really good chunk at the end that might excite some of you. Let's talk about HPKP. For those of you who might not be familiar with HPKP, we have it color-coded in red, because if you do it wrong, you will brick your site, and you will brick it for up to sixty days for users visiting it, assuming you accidentally set it up incorrectly.
So we have a sample pin set here for you. If you were to implement HPKP with it, it would be in report-only mode, so it wouldn't break anything if you end up doing it wrong, and it has the max-age configured sensibly; you just need to replace the placeholder hashes with the actual hashes for the keys you are serving up. My recommendation if you implement this: serve up the hashes for all the keys you are using across all domains and subdomains; that's what the includeSubDomains keyword is there for. This is probably the safest way to get started, and once you've had it in place for a while, you can drop off the report-only part, which is essentially hard mode: if something goes wrong at that point, that's when people get locked out. So when we talk about HPKP suicide, what do we mean? Here's the thing: in a nutshell, you're deliberately self-bricking your users. That's what we're talking about here. In the spec itself, nobody ever actually considered the possibility of deliberately bricking your users to enable new functionality, and that's what we're going to talk about over the next twentyish minutes. So we've got some ideas. There's the possibility of enabling in-browser code signing, but we'll explain why we scratched this out in a moment. We'll follow that up with a solution that works almost as well by controlling content changes and also hardening SRI.
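A report-only pin set along the lines described might look like this (the pin values are placeholders to be replaced with your own key hashes; max-age of 5184000 seconds is the 60-day window mentioned earlier):

```http
Public-Key-Pins-Report-Only: pin-sha256="PrimaryKeyHashPlaceholderBase64Value="; pin-sha256="BackupKeyHashPlaceholderBase64Value="; max-age=5184000; includeSubDomains; report-uri="https://example.com/hpkp-report"
```

Once you're confident in the pin set, switching the header name to `Public-Key-Pins` is the "hard mode" step where mistakes start locking users out.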
We'll also talk about nuanced web content blocking: if you're familiar with your typical web content gateways, your nanny filters, things like that, the kind you'd find in high schools or corporate environments, HPKP can be used to further that work. The Black Hat audience was very, very receptive to this; it was really interesting. I'm pretty sure half this room would probably hate it. You can also use this to track users, which will probably scare you, and we'll also talk about how you can use this to be total jerks in ways that we shouldn't really put in print. >> And before we continue, I wanted to give a shout-out to Jann Horn and Cure53. He was actually the one who put us onto this idea during the course of an audit of Cyph last year, and to DigiCert as well, who worked with us on making this idea possible in Cyph before Let's Encrypt existed to make it easier to implement. So here we've got an HPKP-suicide-based local content pinning scheme, which intentionally self-bricks your own website to pin an AppCache or service worker persistently in the browser, with the same level of security guarantee that HPKP provides. First, the user just visits your website like normal, and it sets an AppCache or a service worker. Then, on the back end, the server deletes its own TLS private key, generates an entirely new key pair, requests a new certificate from its CA, and of course changes the HPKP headers to compensate for that.
And then the next time the user hits that site, the TLS handshake will fail, and to the browser it will be as if the server is literally offline, forcing it to fall back on the cached service worker. >> So let me put that in human terms, because this is perfectly understandable for anyone familiar with actual sequence diagrams, but the basic idea here is this: you're using HPKP suicide to deny your end users access back to the web server after they've already cached some sort of document from that server, and in denying access back to the web server on future visits, it's the document that you cached that gets loaded, because every subsequent connection back to the web server has to go through that service worker first; it has been stored in the browser, so the request goes through the service worker, which can then handle the error in the event that the connection fails. In this case, the service worker is deliberately anticipating that the connection will fail, because the keys will mismatch, and therefore that is where you can embed the extra logic for how your application actually wants to run, such as loading resources from other subdomains that haven't been pinned, within the service worker itself. That's the key here, and that's essentially how this entire scheme works. It's enabled by the fact that you're rotating the keys on a very rapid cadence; that way you're deliberately locking people out as time goes on.
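A stripped-down sketch of that service worker logic (names and cache key are ours, not Cyph's): try the network first, and when the handshake fails against the now-mismatched pins, serve the pinned content instead.

```javascript
const PINNED_SHELL = '/index.html';  // assumed cache key for the pinned app

// Core decision logic, factored out so it can run outside a worker too:
// try the network; on failure (e.g. an HPKP pin mismatch killing the TLS
// handshake), fall back to the locally pinned copy.
async function handleFetch(request, cache, fetchFn) {
  try {
    return await fetchFn(request);
  } catch (err) {
    return cache.match(PINNED_SHELL);
  }
}

// In the real service worker this would be wired up roughly as:
// self.addEventListener('fetch', event =>
//   event.respondWith(
//     caches.open('pinned').then(c => handleFetch(event.request, c, fetch))));
```

The extra application logic the talk describes (loading resources from unpinned subdomains, etc.) would live in the catch branch.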
So we've actually got a really novel use case here, and Ryan's going to talk about it. >> Yeah, so in an earlier slide we had code signing crossed out as a possible application of this. I mean, why not? In theory you could use this local content pinning scheme to pin your code signing logic. [cough] Did it just skip the whole slide? >> Yeah, you just skipped the last slide. >> Ah >> Go for it. >> Alright, sorry. So in theory that should work, more or less giving you trust on first use. And why only "in theory," when it sounds like it should work? In fact, Cyph employs a mature, audited implementation of exactly this, which we call WebSign; however, it was considered novel enough that we were advised to apply for a patent on it, purely defensively, but no one else can do it now. You can come pretty close to the benefits, though, by following the scheme that Bryant is about to describe. >> Right, so you'll probably get about 85ish percent of the effectiveness of the code signing scheme Ryan just talked about if you combine HPKP with Subresource Integrity. The basic thinking here is this: in that service worker that you've pinned in the browser, every reference to every other resource on other domains is going to have your integrity checks through SRI, right?
Now, in order to make this work, since you're locking users away from accessing the content, what you need to do is have the max-age, the counter that says how long the header is valid for, dynamically count down to whenever you routinely deploy a new application into production. So let's say you deploy a new version into prod every Sunday at 4:20 pm. In that case, you have the max-age counting down to that date, and every time a person visits, the max-age is going to be different for each of them, but that doesn't make a difference: when their HPKP headers expire, they pull down any new content, any new hashes for offsite scripts, things of that sort. And because the other scripts are being checked against the hashes you've stored, it's not really the same as code signing, but it gets you pretty close. The reasoning here is that no attacker can replace the initial content, the initial service worker that you've pinned in the browser. They can't replace it, because when the browser tries to connect to the site again, the connection is going to bomb out. Of course, the only way this works is if you're rotating the keys frequently. What is the cadence on Let's Encrypt, like 20 times a week, so like once every eightish hours maybe?
>> Eight point four hours. >> Eight point four hours, something like that. So you're rekeying once every eight-point-four-ish hours, and that means if a user visits, and then visits again nine hours later, that connection bombs out; the content that was pinned, the service worker, can never be updated until those headers expire. Now, what are we looking at in terms of benefits? You're retaining control of front-end content between releases. That means that in the event your main web servers, your content servers, get compromised, be it the ones hosting the SRI-protected content or the ones serving that initial page with the service worker, it doesn't matter, because the people visiting your site will have already pinned the content locally. This also means you're mitigating the risk of somebody tampering with hashes as part of a much broader attack against your content, which might rely on resources that have also been compromised on domains you don't control. So you get some pretty decent security and performance gains, but there's a catch: HPKP suicide plus SRI is a bit of a design-time decision. This isn't, as far as we can tell, going to work with anything other than a single-page app; a single-page app, like I mentioned earlier, being the kind of app that loads everything up front and then dynamically loads everything through web service calls.
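The expiring max-age described above can be computed per request; here's a sketch assuming the talk's hypothetical weekly Sunday 16:20 deploy slot in UTC (function name and schedule are ours):

```javascript
// Compute an HPKP max-age (in seconds) that counts down to the next
// scheduled deploy, so every visitor's pins expire at the same moment
// regardless of when they last visited.
function maxAgeUntilNextDeploy(now) {
  const next = new Date(now);
  next.setUTCHours(16, 20, 0, 0);
  // Advance day by day until we land on a Sunday strictly in the future.
  while (next <= now || next.getUTCDay() !== 0) {
    next.setUTCDate(next.getUTCDate() + 1);
  }
  return Math.floor((next - now) / 1000);
}
```

The server would splice this value into the `max-age` directive of the pin header on every response.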
Here's the thing: you also need to include mitigations like halting the distribution of HPKP headers if your site's compromised. Why? Because if your site gets popped, you're now serving and pinning malicious content into your users' browsers, so you need to be careful that if you see evidence of tampering, your servers aren't going to serve back the HPKP headers that pin that content in your users' browsers. So, time permitting, we'll go check out a demo on a site, redskins dot io. We're both from DC; that's kind of the gag there. >> It's not completely random. >> Yeah, it's not completely random; there is some sort of an inside joke there. Now, we'll take this one a bit further. Let's say you've got a web content gateway, like Blue Coat. I love picking on Blue Coat, because I've only ever had to deal with them. Here's what they can do, any web content gateway that implements HPKP: because they're already intercepting connections, they've got SSL man-in-the-middle, TLS man-in-the-middle, they're already seeing all of your traffic, your banking transactions, your mail, whatever, right? As part of the design. They can actually lock users out of malicious sites, or flagged sites, or porn sites, or whatever, even when your users are not on the network. Let's say they're using a corporate laptop, and they try visiting, I don't know, what's a good one, xhamster dot com, sure.
And Blue Coat says: oh, wait a second, this is obviously a porn site, we're going to keep you from visiting it. Well, here's the thing: for that flagged domain, it sets the HPKP header pinning the Blue Coat cert, right? Now think about this for a second. It's the Blue Coat cert; it's not going to be available on the public Internet, but that cert was just pinned for that site. So now you take your corporate laptop off of the corporate network, and you still can't visit the site. If you're a technical user, you can of course blow those pins away, but I don't think your average accountant really knows how to do that. Now, optionally, if for whatever reason you can't afford multiple Blue Coat instances on your own network, you can rotate the keys weekly at the gateway as well, but I figure if you're considering using Blue Coat, you can afford the licensing fees. Hopefully, by disclosing this, we make it prior art, which means no one can actually patent it and make filthy money off of it. [Applause] >> So, wait a second: apparently Blue Coat is now an intermediate certificate authority, so we don't actually know how relevant this is to any part of our talk. It might not be, because the cert path length is actually zero, but you know, do with that knowledge what you will. We're kind of hoping somebody can go break ground and see what they can do. So, oh, this one's fun. OK, user tracking. We shouldn't really talk about this, but you know, since this is DEF CON, let's track users.
So here's 00:26:15.040,00:26:19.511 what we need. This is, by the way, a very buildery talk, but because it's kinda sorta tip-toeing on 00:26:19.511,00:26:26.051 the edges of ethics we'll call it breakery as well. So you need to pin a lot of sub-domains, 00:26:26.051,00:26:31.423 we're talking like 32 sub-domains; that number makes sense if you 00:26:31.423,00:26:36.428 think about it. You also need browsers that respect HPKP in incognito, and 00:26:40.098,00:26:45.037 finally you need the ability to do rapid key rotation, to rekey on a routine cadence, because you 00:26:45.037,00:26:50.142 need to be able to lock users out, right? So actually I've got to go back for a second. We 00:26:50.142,00:26:53.612 actually need to thank Let's Encrypt for this, and there might actually be 00:26:53.612,00:26:58.617 some Let's Encrypt guys in the room, so come on, clap. [Applause] >> Let's Encrypt, 00:27:02.221,00:27:07.226 just by automating everything, has allowed us to do a lot 00:27:09.862,00:27:16.168 of novel things. Uhm, they might have some amount of issue with some of the things we've been 00:27:16.168,00:27:22.174 able to achieve, ah, but at the same time we understand and love the vision of spreading TLS all 00:27:22.174,00:27:27.379 over the open web. So in this case Let's Encrypt really helped out, ah, just by virtue of the 00:27:27.379,00:27:30.983 fact that you can rekey on a very rapid basis, and because it's free, and because it's 00:27:30.983,00:27:36.388 automated and very very easy. So I'll let Ryan talk about some of the configuration stuff; 00:27:36.388,00:27:41.360 it's not really going to make all that much sense until you see Ryan's demo. >> Right, so 00:27:41.360,00:27:46.064 this explanation will be a little hand-wavy and that's by design, ah, it'll make a lot of 00:27:46.064,00:27:51.737 sense when you see the demo.
So on your server you've got an HPKP server with all of your 00:27:51.737,00:27:56.975 sub-domains, ah, star dot whatever domain, pointing at the same server. You've got a set method 00:27:56.975,00:28:02.948 that returns an HPKP header and a check method that does nothing, it's a no-op, and uhm 00:28:02.948,00:28:08.720 you're just routinely rotating your keys on that server. Then in your ah 00:28:08.720,00:28:13.458 JavaScript, to set a new ID you're just hitting those sub-domains in a random pattern, 00:28:13.458,00:28:17.663 hitting set on them. And to check the ID you're iterating through all of them and hitting 00:28:17.663,00:28:23.569 the check method on all of them. So in principle it's pretty similar to the HSTS super cookie 00:28:23.569,00:28:28.307 that I think Samy Kamkar ah came up with. >> Oh yeah >> OK >> I think so >> Hopefully, don't 00:28:28.307,00:28:33.312 quote me on that. So we've got a demo super cookie server set up at Cyph dot wang, and uhm we went 00:28:36.114,00:28:42.187 through a quick demo. Sorry, quick demo here in our JS console on Google, so, Google did not 00:28:42.187,00:28:47.893 implement this, we pasted our JavaScript into their console. So you just run the super cookie 00:28:47.893,00:28:52.898 um JS for it there, and ah, here you can see it generated a new ID, it's just a 00:28:56.702,00:29:01.640 random 32-bit integer, 4565566. And now we're trying again in an incognito window on a totally 00:29:06.678,00:29:11.683 different site. And in this case, uhm, you can see a bunch of TLS handshakes, ah, from those 00:29:15.654,00:29:20.659 HPKP suicides, and it reconstructed the exact same ID 00:29:22.794,00:29:29.134 there, 4565566. If you look at the sub-domains you can see it's just a bunch of numbers dot Cyph 00:29:29.134,00:29:35.207 dot wang.
We're using zero through 31 for Cyph dot wang, ah, just as like bits in a 32-bit 00:29:35.207,00:29:40.212 integer, so we're literally iterating through all of them in a for loop. >> So what you just 00:29:43.048,00:29:48.053 saw was a way to track whether or not a user has hit your site in incognito mode, in blatant 00:29:51.423,00:29:56.395 violation of the fact that incognito or private modes are supposed to give you certain 00:29:56.395,00:30:01.500 privacy guarantees. Ah, we've actually, I think, raised that up as a concern, but the 00:30:01.500,00:30:07.906 security benefits of both HSTS and HPKP have been deemed much 00:30:07.906,00:30:13.111 more significant than the potential privacy loss of doing so. Uhm, I don't actually think 00:30:13.111,00:30:19.117 either of us have a problem with HSTS in that capacity, but HPKP, ah, we personally think shouldn't 00:30:19.117,00:30:25.724 necessarily be respected in private modes or in incognito modes. And in fact 00:30:25.724,00:30:30.362 there was a time, I think, in Tor Browser, so this is our big PSA, right. Uhm, there was a time in 00:30:30.362,00:30:34.399 Tor Browser a few versions ago, and we did confirm it like two hours before the talk, that if 00:30:34.399,00:30:39.404 you haven't upgraded Tor Browser yet, HPKP headers can actually be set in incognito mode and 00:30:44.743,00:30:50.215 respected in between sessions. Which means theoretically, in older versions, you could be 00:30:50.215,00:30:56.121 tracked across sites, assuming sites set this up. We don't believe that's the case now, ah, 00:30:56.121,00:31:00.726 we double checked, and, so again we're not 100% 00:31:00.726,00:31:05.497 certain, but if it was an issue it's no longer an issue now. So if you are using Tor Browser please 00:31:05.497,00:31:10.502 upgrade to the latest version. So [Applause] >> Thank you.
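The set/check scheme from the demo boils down to treating 32 sub-domains as the bits of a 32-bit integer. Here's a minimal Python sketch of that logic; it simulates the browser's pin store with a set rather than making real HTTPS requests, and everything beyond the `N.cyph.wang` naming and the 0 through 31 bit layout shown in the demo is assumed:

```python
# Simulation of the HPKP supercookie: 32 subdomains act as the bits of a
# 32-bit ID. "Setting" a bit means visiting a subdomain whose server pins a
# soon-to-be-rotated key (an HPKP suicide); "checking" a bit means seeing
# whether the TLS handshake to that subdomain now fails.

SUBDOMAINS = [f"{i}.cyph.wang" for i in range(32)]  # 0.cyph.wang ... 31.cyph.wang

def set_id(user_id: int, pinned: set) -> None:
    """Hit the 'set' endpoint on each subdomain whose bit is 1."""
    for i, host in enumerate(SUBDOMAINS):
        if (user_id >> i) & 1:
            pinned.add(host)  # browser stores the pin; the server rekeys right after

def check_id(pinned: set) -> int:
    """Hit every subdomain's no-op 'check' endpoint and read the bits back."""
    user_id = 0
    for i, host in enumerate(SUBDOMAINS):
        handshake_fails = host in pinned  # pinned key is no longer on the server
        if handshake_fails:
            user_id |= 1 << i
    return user_id

browser_pin_store = set()          # survives incognito in affected browsers
set_id(4565566, browser_pin_store)
assert check_id(browser_pin_store) == 4565566
```

The same pin store reconstructing the ID is what the incognito-window demo showed: the pins, not a cookie, carry the 32 bits between sessions.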
Now there are some risks, let's say, if 00:31:16.575,00:31:20.645 you want to implement this. Ah, you have the risk of somebody else saying "well hey, this is 00:31:20.645,00:31:25.150 actually kind of unethical, so we're going to try and DoS your tracking domains as a public 00:31:25.150,00:31:29.321 service," right? We agree, this is actually pretty shady, like the actual implementation of the 00:31:29.321,00:31:34.526 super cookie is actually pretty shady. So if you really want to implement this, 00:31:34.526,00:31:38.530 ah, you can always white-list domains that you want to track, you know, for your own tracking 00:31:38.530,00:31:42.634 scripts. But if you're going to offer this as like a sold service, ah, you can always just, 00:31:42.634,00:31:45.203 you know, issue a nonce through a back channel to the app that's then 00:31:45.203,00:31:48.840 serving up the super cookie itself to your users. That nonce is then sent from your 00:31:48.840,00:31:55.781 actual client back into the tracking domain itself, and then your domain's going to say "hey, 00:31:55.781,00:31:59.551 wait a second, I do expect this nonce, let me go ahead and serve up the HPKP headers 00:31:59.551,00:32:04.489 themselves." So there you go, some quick mitigation for issues where some people might say "hey, 00:32:04.489,00:32:08.927 wait a second, you know, I can kind of stop this service from working right now." This 00:32:08.927,00:32:13.899 pattern is also similar to others that are actively discussed in the RFC. 00:32:13.899,00:32:18.503 The only catch here is that in the RFC a lot of the super cookie ideas rely on the report 00:32:18.503,00:32:24.075 URI construct, which again isn't supported in Firefox, so it's not as effective. Whereas this one, 00:32:24.075,00:32:30.148 yeah, it won't work if you use NoScript. Hopefully a lot of people do. Uhm, but it will work 00:32:30.148,00:32:33.952 against a lot of your average users who don't even know what NoScript is.
So we do have the 00:32:33.952,00:32:39.825 source for it, uhm, you can of course go and check it out up there, or you can grab it again 00:32:39.825,00:32:45.497 off of the aggregate link. Alright, this is the fun part, we've got like what, 12, 13 00:32:45.497,00:32:51.736 minutes left. So what if you wanted to be like a total jerk? That's like what, half the room? 00:32:51.736,00:32:57.275 One person clapped, one person, I heard it. Uhm, so we really shouldn't talk about this, but 00:32:57.275,00:33:02.414 who are we kidding, this is DefCon. So here's what you need to make what we're about to talk about 00:33:02.414,00:33:06.818 work. And we're not going to give you a novel way to break into a site, we're not giving an 00:33:06.818,00:33:11.823 exploit. There's again no exploit here, no unpublished full disclosure thing; the last talk 00:33:11.823,00:33:17.095 we had this whole lecture on responsible disclosure. Please follow it, please. But that 00:33:17.095,00:33:22.968 being said, this is a nice attack pattern, and that's kind of the fun we're about to have. So 00:33:22.968,00:33:28.006 here's what you need: you need a high traffic target, I can think of many media organizations that 00:33:28.006,00:33:33.311 might be covering the elections as a good example. Ah, you need a way to shell the box, and you 00:33:33.311,00:33:38.316 need a free CA. Right, so, and again, OK, we love Let's Encrypt, again I know that there are some 00:33:43.054,00:33:47.692 people in here, we love them, give them another round of applause please. 00:33:47.692,00:33:54.032 [Applause] >> Awesome. The absolute worst thing that you can do with this, as Ryan and I have 00:33:54.032,00:33:58.870 determined, is taking a site for ransom, and we'll explain why in 00:33:58.870,00:34:03.608 a bit. So we decided to call this pattern Ransom PKP, you know, in the culture of randomly 00:34:03.608,00:34:07.946 naming your attack patterns and vulnerabilities, right?
Ah, so what you gotta do is you have to 00:34:07.946,00:34:13.385 determine your target first, and you have to generate what we're calling a ransom key-pair. 00:34:13.385,00:34:17.289 So this is, you know, the key-pair that you're holding within your 00:34:17.289,00:34:24.162 control. You're not even giving away the public key, you're just maintaining this. Oh, people 00:34:24.162,00:34:29.768 still use pwn in hacker lingo today, right? >> I hope so >> OK, so you have pwned the target 00:34:29.768,00:34:34.806 web server, oh God I just said that out loud. Uh, once you've actually taken 00:34:34.806,00:34:40.345 control of the web server, on the server, using certbot or whatever you want to use, your 00:34:40.345,00:34:44.249 automated script, it doesn't really matter, ah, you generate what we're 00:34:44.249,00:34:50.455 calling a lock-out key-pair. This is a disposable key-pair, it's by design for it to be disposed. 00:34:50.455,00:34:54.793 Then send off the CSR, get your actual cert back, and you mount the cert. Then there's 00:34:54.793,00:35:00.532 something else and a question mark. Uh, are we allowed to use that graphic? I think we are. Ah, 00:35:00.532,00:35:05.537 and some profit, we hope. So what's in the box? So while owned users is less than n, in 00:35:12.143,00:35:17.315 other words, while you have yet to reach a certain number of users that you've predetermined 00:35:17.315,00:35:22.954 based on the size of the site that you're trying to hit, what you're going to try and do is, 00:35:22.954,00:35:26.324 you know what, I'm not all that good at explaining it, I'm just going to go ahead and let Ryan 00:35:26.324,00:35:31.830 do it. >> That's fine. Uhm, let's see. So ah, yeah, I mean, in the box, while owned users is 00:35:31.830,00:35:36.201 less than n, or just on some static interval, like the 8.4 hours we mentioned with 00:35:36.201,00:35:42.874 Let's Encrypt, ah. >> It's acting up. >> OK, there we go.
>> Your laptop has probably already 00:35:42.874,00:35:47.879 gotten owned. >> Oh. Uhm, please don't own my laptop. Uhm, so on that interval, or after each of 00:35:50.815,00:35:56.821 those n users, you just rekey: you delete the current lock-out key, 00:35:56.821,00:36:01.960 generate a new one, and throw it in the HPKP header, while the ransom public key hash is 00:36:01.960,00:36:08.099 still in there. Is it? >> Yeah, just go with that. >> Alright. Uhm, and then each time, like I 00:36:08.099,00:36:12.504 said, you blow out the old key pair and generate a new one. This 00:36:12.504,00:36:17.509 locks out n users, however many users hit that site during that interval, and uhm, that's 00:36:20.078,00:36:24.749 pretty much it. So the idea is that you would go beyond simple defacement of the website and 00:36:24.749,00:36:28.920 you would actually potentially monetize it. >> Did you want to re-key the site though? >> Ah, 00:36:28.920,00:36:32.223 no, I already set up the timer, so. >> Oh, so this might actually work, hold on, 00:36:32.223,00:36:37.028 let's actually see if we're going to be dinged by the demo gods here. >> 00:36:37.028,00:36:43.601 Alright, so anyone who went to ISIS dot io at the beginning of our talk, this is what you 00:36:43.601,00:36:48.873 probably saw, or should have seen, unless you were on Android as Bryant mentioned. And now that 00:36:48.873,00:36:54.813 a rekey has occurred, this is what you should see: here's a TLS key pinning error. >> So 00:36:54.813,00:36:58.116 let's actually call this out. We can't zoom into this one, but if you 00:36:58.116,00:37:02.420 actually look, the specific error should say "Pinned key not in cert chain," assuming that the 00:37:02.420,00:37:07.425 key has actually rotated on time. So, essentially, what we did here, let me clarify how this attack 00:37:09.561,00:37:14.566 really works.
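The header juggling Ryan just walked through, one static ransom pin plus a rotating lock-out pin, can be sketched in Python. The `pin-sha256` computation below is how HPKP defines a pin (base64 of the SHA-256 of the DER-encoded SubjectPublicKeyInfo), but the function names, the dummy key bytes, and the max-age value are illustrative assumptions:

```python
import base64
import hashlib

def spki_pin(spki_der: bytes) -> str:
    """HPKP pin value: base64(SHA-256(SubjectPublicKeyInfo in DER form))."""
    return base64.b64encode(hashlib.sha256(spki_der).digest()).decode()

def ransom_pkp_header(ransom_spki: bytes, lockout_spki: bytes,
                      max_age: int = 5184000) -> str:
    """Pin the attacker-held ransom key alongside the current disposable
    lock-out key; rotating lockout_spki strands every browser that cached
    the previous header, while the ransom pin stays valid for whoever
    eventually buys the ransom key-pair back."""
    return ('Public-Key-Pins: '
            f'pin-sha256="{spki_pin(ransom_spki)}"; '
            f'pin-sha256="{spki_pin(lockout_spki)}"; '
            f'max-age={max_age}')

# Let's Encrypt's twenty-certs-per-week limit bounds the rotation cadence:
hours_per_week = 7 * 24
print(hours_per_week / 20)  # 8.4 hours between rekeys at best
```

That 168 / 20 = 8.4 hours figure is where the "8.4 hours" interval mentioned earlier comes from: it's the fastest sustainable rekey cadence under the rate limit.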
You're holding the access of the users of the site hostage. That's the idea. You're 00:37:19.938,00:37:25.143 denying access to the many users of the site, and you're basically saying "look, we'll give you, 00:37:25.143,00:37:29.581 well, yeah, actually we will give you literally the ransom key-pair if 00:37:29.581,00:37:33.952 you, for instance, do whatever we want you to do." Ah, that's essentially the premise 00:37:33.952,00:37:36.955 here. Now, that's the worst possible thing that we've thought of that somebody 00:37:36.955,00:37:43.895 could do. Uhm, if you gain access to a box then you can do quite a lot of things, ah, but let's say 00:37:43.895,00:37:48.299 all you get access to is just the web server. Right, like, typically just site defacement 00:37:48.299,00:37:53.671 is probably on your road map, you're probably going to put, I don't know, ah, "owned by" some 00:37:53.671,00:38:00.478 hackery name. Ah, but why do that when you can also monetize what you just did, right? So that's 00:38:00.478,00:38:07.018 essentially what we're concerned about here: now you've got HPKP, and not just Let's Encrypt 00:38:07.018,00:38:12.557 but other CAs down the road, that could enable this attack pattern. So we have some 00:38:12.557,00:38:19.330 considerations to think about though, meaning why this isn't a high severity issue. So 00:38:19.330,00:38:23.968 here's the thing: the Let's Encrypt rate limit is twenty certs a week. We mentioned this earlier, 00:38:23.968,00:38:28.973 ah, it's kind of an artifact of how they've architected the service. Originally it was like a 00:38:31.409,00:38:35.346 five-rekey limit, but if you reconfigure the cert package that you send, like the 00:38:35.346,00:38:39.384 actual key to get the actual cert, you can get it to be twenty for every single given domain.
Ah, so given that you can't actually rotate the key like every minute, you're 00:38:45.023,00:38:51.696 still bound to some constraint. Ah, Chrome and Firefox also have HPKP lock-out mitigations, ah, 00:38:51.696,00:38:57.835 notably both parties have reduced, or are in the midst of reducing, the max age, as I mentioned earlier, 00:38:57.835,00:39:03.308 from one year to 60 days. Chrome originally reduced it because people were bricking themselves 00:39:03.308,00:39:08.246 left and right implementing HPKP, and were bricking access to the site for their users for like 00:39:08.246,00:39:12.150 a year at a time, and of course those users have no idea how to clear their key pins, so they 00:39:12.150,00:39:17.789 figure, you know, two months is probably a lot more palatable. And finally, you still need to 00:39:17.789,00:39:22.794 actually compromise the box. So ultimately the conclusion by the teams was pretty much like 00:39:25.296,00:39:29.000 this. Ah, Chromium, they've indicated they won't fix it. They'll still keep an eye out, 00:39:29.000,00:39:32.870 but they won't make any other programmatic changes, because they've already reduced the max 00:39:32.870,00:39:38.676 age to 60 days, which is fine. Firefox, they've gone ahead and reduced the max age, and I think 00:39:38.676,00:39:43.982 that's on its way to production right now. And Let's Encrypt has indicated that they won't fix 00:39:43.982,00:39:48.286 because they believe it's out of scope. Ah, we actually understand this reasoning. The idea 00:39:48.286,00:39:55.126 here being that spreading TLS is much more important than worrying about what could 00:39:55.126,00:40:02.000 potentially be an experimental risk, so we totally get it. Now that being said, that puts 00:40:02.000,00:40:08.506 the onus on all the rest of us to find ways to address this. Just as a reminder to myself, how 00:40:08.506,00:40:13.511 many people have heard of CAA? Right, that hasn't changed. OK, so what is it?
Ah, DNS Certification 00:40:16.814,00:40:23.521 Authority Authorization, ah, it's kind of a mouthful. It's basically a black-list 00:40:23.521,00:40:29.594 slash white-list, it sounds messed up. Ah, if you have in your DNS record, ah, the 00:40:29.594,00:40:35.199 CAA record, then every single CA that hits your domain, in order to, you know, be able to 00:40:35.199,00:40:39.537 confirm and make sure that it is authorized to give that domain a cert, will say "oh wait a second, 00:40:39.537,00:40:46.077 it has the CAA record," right, and it says "I will now by default assume I 00:40:46.077,00:40:51.983 cannot give you a certificate for this domain unless that CA has been white-listed within the 00:40:51.983,00:40:56.220 record." So in other words, you're basically saying, if you have this record for your domain, 00:40:56.220,00:41:02.193 you are permitting certain CAs to give you the certificate for 00:41:02.193,00:41:07.932 your site. So if you don't list Let's Encrypt, or you don't list other free CAs, then nobody can 00:41:07.932,00:41:12.937 use a free CA or some other unknown CA to issue the certificate for your domain. Now, 00:41:15.940,00:41:20.144 alternatively you could also use HPKP. Ah, we actually had someone who attended the BlackHat 00:41:20.144,00:41:24.315 version of this talk who said this isn't a complete mitigation, but it does buy you time in 00:41:24.315,00:41:31.155 monitoring. Ah, so if you monitor your headers for changes, the only way somebody can attack 00:41:31.155,00:41:36.427 you, if you're already using HPKP, is if they inject their own key into your headers and wait until 00:41:36.427,00:41:40.631 the max age has expired before then dropping your key and beginning the attack. So 00:41:40.631,00:41:45.636 lastly, you can also try just not to get popped, but that is like the hardest of them all. So now, 00:41:47.839,00:41:53.144 what if you are an end user?
Like an accountant that might be 00:41:53.144,00:41:58.249 a bystander that gets hit by this. Ah, you can always try visiting Chrome net 00:41:58.249,00:42:03.187 internals, ah, the HSTS page. Uhm, but alternatively, and I believe they fixed this, ah, 00:42:05.990,00:42:09.160 this used to be another vulnerability that we found, this was a quick 500 00:42:09.160,00:42:15.299 bucks: you can also clear, I believe, your cache, and that should also clear your HPKP 00:42:15.299,00:42:19.570 headers as well. Ah, originally you could clear any aspect of your browsing history, including 00:42:19.570,00:42:23.875 saved passwords, and because they misplaced a curly brace, that would also clear your HPKP 00:42:23.875,00:42:28.880 headers. So yeah, that's what we mean when we say a lot of these standards are implemented very 00:42:28.880,00:42:32.917 quickly and some of the testing isn't always complete. And in Firefox, if you go into 00:42:32.917,00:42:36.821 about:config, cert_pinning.enforcement_level, and you set that to zero, hit 00:42:36.821,00:42:42.193 the site, take the new header, and then re-enable, you should be fine. So, Ryan, I think 00:42:42.193,00:42:47.198 you've got the source, yeah. >> Yeah, and also, we just wanted to be clear, for any law 00:42:47.198,00:42:53.037 enforcement agencies in the room: we did not implement or open source actual ransomware. This is 00:42:53.037,00:42:59.977 just the basic PoC that implements the re-keying and the whole lock-out process. It's 00:42:59.977,00:43:04.916 like a DoS. >> Right [Applause] >> So you definitely need to put in a lot of extra effort to make 00:43:09.854,00:43:15.326 this work. All that's here is just the technical details for how to rekey on a rapid cadence. 00:43:15.326,00:43:20.531 Ah, we have a lot of hat tips. I'll just go ahead and read all the names out.
Uhm, crap, I should 00:43:20.531,00:43:25.403 probably have remembered how to pronounce some of these names. Ah, Geller Bedoya, Digicert, uhm, 00:43:25.403,00:43:30.408 twitter handle EL_D33, ah, Jonn Callahan, Jann Horn (and all of Cure53), ah, Samy Kamkar, Jim 00:43:32.810,00:43:38.216 Manico, Mike McBryde, Jim Renny and his superb legal skill, ah, Garrett Robinson, John 00:43:38.216,00:43:41.953 Wilander, and Doug Willson, as well as the Chrome, Firefox, and Let's Encrypt security teams, for 00:43:41.953,00:43:46.624 all their contributions to this talk. This talk was something like 5, 6 months in the making. 00:43:46.624,00:43:51.429 And for those of you who didn't take pictures, there you go, that's the slide, ah, take 00:43:51.429,00:43:56.834 pictures, ah, go and check out the demos. Ah, you can also check out a lot of the stuff; uhm, my 00:43:56.834,00:44:00.371 technical background here is that I advise Ryan and his company Cyph on implementing a 00:44:00.371,00:44:03.207 lot of the appsec stuff that they have there, so if you want to see some of the stuff in 00:44:03.207,00:44:07.178 action you can always just check out cyph dot com or cyph dot im and see some of the stuff in 00:44:07.178,00:44:12.350 production. So, uhm, we actually have, for those of you not fleeing the room, if anybody has 00:44:12.350,00:44:16.888 questions, seeing as we're the last talk of the day, I will gladly come and throw bags of 00:44:16.888,00:44:21.893 popcorn at your face. And questions can be asked here at the floor mic. [Applause] >> Did 00:44:28.933,00:44:33.938 we pass? >> I'm not kidding, there's really good popcorn. It's like homemade, not by me, but by 00:44:38.009,00:44:43.948 some other guy Microsoft purchased popcorn from. They just got way too much. Thank you, 00:44:43.948,00:44:45.950 Microsoft. [Inaudible muttering]