So welcome. Thank you for joining us this morning at super not late or early. Somebody followed the 3-2-1 rule to the letter last night, didn't they? Sorry. My bad. Hey, but at least I got three. So in case you don't know where you are, we're doing Vulns 101 in here. Hopefully everyone can learn a little bit of something. We wanted to ask a couple of questions before we jump in further. Number one: how many people here, is this your first DEF CON? Wow. All right. Nice. That's pretty impressive. So how many... What was the other question we wanted to ask? So how many of you here are here for this talk because you've just started out in vulnerability research and you want to learn how to improve your game? Okay. And how many of you are not familiar with research but you're curious about it, and that's why you came here? That's about even. Okay. Good. That doesn't help us any, but thanks. Yeah, what we realized a little bit earlier this week as we were practicing was that we were trying to target two audiences: newbies, and then people who were curious about vulnerability research but didn't know much about it. And so sometimes, for those who are curious about it, we might go into some terminology you might not be fully familiar with. We're just going to move on, and we'll be glad to follow up with you afterwards. Yeah. And neither of us hide on the internet. We're pretty approachable, so don't be afraid of us. So, introduction. I'm Josh; jduck is what I go by on the internet, and that's how you find me there. I've been doing VR for 20 years, and those 20 years actually include several years as a hobbyist. At one point I ran the iDefense Vulnerability Contributor Program, where we would actually buy vulnerabilities from researchers, get them fixed, coordinate, and all that stuff. So that's how I met Steve. And we did a fair amount of work together at that point.
And we met through the CVE program, which I was a co-founder of and led from 1999 until the end of last year, basically. There's a thing called responsible disclosure. I am a survivor of the responsible disclosure wars. I now call them coordinated disclosure. I coined the responsible disclosure term, for which I will be eternally sorry, but it served its purpose. I got into classifying vulnerabilities as well, which is where CWE comes from. And I was also a participant in the development of CVSS version 2. One quick question: how many people in here have a CVE to their name? All right. Let's fix that. That was like a dozen, I think. Yep. That's pretty good. So why are we doing this? We want to fix that, right? We want more people out there doing research into vulnerabilities of software and hardware, into all kinds of systems. Because as we've seen, there are lots of crazy things possible. It was very interesting to see the previous talk, the guy hacking the loyalty program. That's good stuff. It's fun. So what else? We have this little tiny picture on the slide here. It's like tiny. You guys see the slides better than we do, so we're flying a little bit blind. If anybody works at Google Slides, you might need to work on this thing. We can move on. All right. Yeah. So, just to get people involved: disclaimer up front, right? This is our opinion. We did a lot of stuff with vulns over the years: finding them, analyzing them, all that stuff. And we just formulated these opinions, so they may not be right for you. Take that with a grain of salt, and remember that you're your own person and you've got to find your own path. We'll just try to help you see some of the stuff. Lastly, there are no new exploits here. Who came here to see new exploits? Okay. Good. Thank you. Okay. So first of all, there's a question about what a vulnerability is in the first place. And we'll start with what a vulnerability is not.
One of the most commonly confused pairs of terms is exploit versus vulnerability. A lot of people think that exploit and vulnerability actually mean the same thing. However, they don't. An exploit is really a sequence of steps that's used to take advantage of a vulnerability. A vulnerability is a problem within the code itself that just kind of, more or less, sits there waiting to be exploited. These are almost circular-sounding terms, but we face a number of difficulties in actually defining them more carefully, and I think that's partially a reflection of the relative immaturity of the vulnerability research specialty. Yeah. I wish I could expand this picture for you. What does Taylor Swift say here? "To love is to be vulnerable, and to be loved is the greatest exploit." Right. That's a pretty good quote. So we go back and forth with this definition of what a vulnerability is, and there are so many different ways to define it. I think Steve did a great job already saying it. One of the biggest things really is that you have some kind of impact on a system. If you don't have some kind of impact, or you're not changing the way things are working, it's pretty much not a vulnerability. A buddy of mine, Greg McManus at iDefense, taught me, through heavy abuse, by asking me this question every time I told him I thought I found something: "Well, what do you have, and what do you get?" If what you are getting from this vulnerability, if you manage to exploit it, is not better in some way than what you started with, then you do not have a vulnerability. It's more like a bug, or something really annoying. And this is one of the common problems we run across with new researchers who try and report CVEs or something like that.
They find something that may be a bug, or may actually be a feature, but it's something that is legitimately already allowed, or, with the privileges you already have, you can already go through some other legitimate route to obtain those additional capabilities. So that's one particular point of confusion. Okay. Yep. So another important point to make about vulnerabilities, as Steve knows well from his classification work in CVE, is that there are many, many properties of vulnerabilities. Like I mentioned before, the impact is a really important one, but user interaction is an interesting one too. These properties are used by the defensive side to prioritize patching and strategies around defense. So those are just a couple of the important ones. Another interesting one these days, which is getting more and more interesting, is: how hard is this thing actually to exploit? And I think as things have improved over the years, it's gotten much harder to do that. Yeah, it's become a lot more difficult, and that's thanks to the defensive work and the buildup of many different kinds of protection mechanisms. And so there's almost a Heisenbergian approach to interpreting vulnerabilities these days, where something that was clearly exploitable perhaps 10 years ago may now wind up taking a whole lot of work. And that's one of the great things about the defensive side of understanding vulnerabilities: to really build in these systematic defenses. I have a whole rant on this we can do some other time, but not right now. We'll save that for later. All right. And finally, we'll get to what vulnerability research is. In this case, what we're saying is: the process of analyzing a product, protocol, algorithm, or specification to try and find, or better understand, one or more vulnerabilities.
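The exploit-versus-vulnerability distinction, and Greg's "what do you have, what do you get" test, can be sketched in code. This is a hypothetical example we've made up for illustration (the function and file names are not from the talk): a classic path-traversal flaw, where the vulnerability is the code that sits there, and the exploit is the input that takes advantage of it.

```python
import os

def read_note(base_dir, filename):
    # VULNERABILITY: the flaw lives in the code itself, just sitting
    # there -- user input is joined into a filesystem path with no
    # sanitization, so "filename" can escape base_dir.
    path = os.path.join(base_dir, filename)
    with open(path) as f:
        return f.read()

# EXPLOIT: the sequence of steps that takes advantage of the flaw --
# here, a single crafted filename using "../" to walk out of base_dir.
# "What do you have?" Read access to files under base_dir.
# "What do you get?" Read access to files OUTSIDE base_dir -- strictly
# more than you started with, so this passes the have/get test.
exploit_input = "../secret.txt"
```

If the traversal only reached files you could already read through some legitimate route, it would fail the have/get test: that would be a bug, or a feature, but not a vulnerability.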
So there are different kinds of approaches, different kinds of products or specifications and so on that you may decide to look at. It all more or less falls under the umbrella term of vulnerability research. However, the term itself is treated and interpreted a little bit differently in places. You might sometimes hear the term vulnerability discovery. That's used by people who want to distinguish, let's say, academic-strength research from going and doing bug hunting and so on. So some people use that term. That distinction does wind up being important sometimes, I think. But again, the terminology is still kind of emerging and we don't have a lot of agreement. Well, again, we're not doing exploits here. And I personally think exploits and exploit development and stuff like that fall into VR. As do I. Yeah. But it's not really our focus here, so let's keep going. It's really about solving puzzles where you don't even know what the puzzle is in the first place. You don't even know if you'll find a puzzle. Then maybe you find a puzzle which leads you to other puzzles, and so on. That's one of the big attractions of vulnerability research for me. All right. So why do it? In case you're very new and you're just curious, these are some reasons why you might want to get motivated to do the hard work that is vulnerability research. I can't read this stupid slide, man. Hooray, Google Slides. I'm serious, it's so tiny. The speaker notes are the whole screen and the slide is like a little tiny square. I hope there are Google people in the room so they'll fix this stuff. One of the big points to note here as you look at this nice little word cloud is... go ahead. That's okay, let's just go to the next slide. The main takeaway from that previous slide is that there are many different motivations that different researchers have, and your motivations may not be the same as others'.
And in addition, when you're dealing with vendors, vendors may have only experienced, or may only assume, certain kinds of motivations from you. And so that potentially causes certain difficulties when interacting with vendors. I personally like just about all these words on here. So there are a lot of different careers. Steve, you want to talk to that? Yeah. So there's a number of different careers, but it's not like there's a career shop that you can go to. This is still a new field. To my mind, it's a new field, a new way of thinking. We're kind of entering the second generation; we're sort of the first generation. And it's really good to see a full room here, actually, because we need a lot more people doing vulnerability research; we've only seen the tip of the iceberg. But what's called vulnerability research may vary, and there are a lot of different things that one can do. Yes, you can go and hunt bugs and hunt vulnerabilities. Other people may really like building exploits; it's a kind of fun thing, though it's not something we're going to focus on much here. And another thing you can do is vulnerability analysis. I do think it's important to have a framework for vulnerability research where you take in the data, build up the knowledge, and then produce a really good summary of what a potential vulnerability is, which could be used to help people coordinate and so on.
And another option is to really work on fixing them. You could be working at a software developer somewhere, see a vulnerability that's been reported, and figure out how to actually fix the code. And one note that just popped into my mind here is that there are a lot of vulnerability researchers who discover vulnerabilities but have no idea how to actually fix them. It's a completely different mindset. You need a solid development background to build a good fix, but that's not necessarily the kind of skill you need to do vulnerability research. Yeah, that's right. One thing I will add is that if you do start doing any of this stuff, or find a job doing this stuff, you basically have unlimited job security for the foreseeable future. So there are a lot of different employers that you could have. I mentioned you could work at a software vendor. You could work for a government organization or a CERT coordination center. You could work at a commercial enterprise, whether it develops security products or does consulting services. However, these days pretty much every business out there is more or less a software developer. Think about Target. Think about other kinds of brick-and-mortar companies. Those all develop software, either in-house to help them manage their operations, or externally with respect to websites and so on. And as you all have probably heard, there is a huge demand. So these are some of the options that you could look at, and you would most likely be welcomed with open arms by someone somewhere, because we need a lot more researchers. I'm a little partial to the bounty programs. They're really fun. And if you have a good employer, like I'm fortunate enough to have, then you can keep that bounty money and throw parties at B-Sides and stuff like that. Cheers. So yeah, let's do the next one, huh?
So these next couple of slides are really a little bit more of a disclaimer than normal, in terms of our opinion. But based on our personal experiences, and I've actually done some vulnerability research myself, a while ago, and on interacting with other researchers, there are a number of personality traits that generally seem to be useful for longer-term success within vulnerability research. So, for example: a willingness to work independently, a willingness to learn, thinking very critically. You always have to be more or less questioning your own assumptions. That's a good point. I don't even think that's in the slides, but that's a really good point. And really, it's primarily a solitary effort. You need to go deep. You need to be diligent. And you see some of the other traits there. But two of the biggest personality traits that we believe are important are patience and persistence. Patience is essential not only with yourself and with the process of discovering and investigating these vulnerabilities, but also when dealing with others, especially, say, vendors that might not necessarily be behaving exactly the way you would want when you're trying to communicate. So those were some of the should-have personality traits. These next ones are some that we think are nice to have; still a great formula for success here. To really be able to be focused, to seek to improve software, which is a common thing. The ability to collaborate and work with other people is something that we believe is important. There can be rock stars that don't work well with other people, but oftentimes, especially if you're just starting out, I think that's probably a career-limiting kind of attitude to take. We also have here the notion of having kind of an addictive personality. So, for example, I stayed at CVE for 16 years, through 70,000 vulnerabilities. Now, I didn't investigate and look at all of them, but you could say that might be kind of indicative of an addictive sort of personality. And Josh, how many days, weeks, or months have you spent on, say, a single bug? I don't know. The longest I've probably spent is maybe one year, but not all at the same time. So, you know, none of these personality traits that we're talking about is absolutely essential. Each of you will find your own path, but if you feel that you have some of these personality traits, then you might find vulnerability research enjoyable. So I think we're going to be totally screwed on this slide because it's so small. You can probably read it fine over there; we can't read it at all. We have a number of different skills listed here for long-term success, but I would say some of the biggest ones are about analysis, tools, and findings. Yep, we can skip this one, and the next one too. All right, the big one we wanted to mention here was communication. I think we made that pretty clear. All right, so here's another awesome wall-of-text slide that we put together, and we don't want to read it to you, but these are some of the key terms that matter in vulnerability research, and of course the slides will be available. If you're here, you've probably already heard us use some of these terms. But when it really comes to doing analysis and deeper research, some of this stuff, like root cause analysis and vulnerability
chains and classes, and especially proof-of-concept code, become more important. I think one of the key terms here, which we touch on a little bit later as well, is the notion of root cause analysis. This is where diligence and critical thinking come into play. You might discover something that's like a symptom of a problem, and it's really when you become tenacious and dig deeper and deeper into it, to find out what's really causing the problem in the first place, that you may find some significant success. All right. So in the industry, many of you, if you're interested in vulnerability research, probably already know about this thing we call the fire hose. That's basically just a steady stream of information about vulnerabilities coming from all angles. It includes some of my favorite stuff, like CTFs and wargames, where you can learn at your own pace, plus lots of aggregation and other places. If you want to learn more about vulns, look at these things for sure. There are a couple of items not on that list that I think came up during this week. One of them is the Pwnie Awards, because the Pwnie Awards often talk about individual bugs, and typically those individual bugs come with additional details. And then another area is bug bounties. There are bug bounty programs which can help you learn and interact with others. Actually, by a show of hands, how many people are in, or have participated in, bug bounty programs and gotten some kind of reward? It better not be too much more than how many people have CVEs. Okay. Moving on. I guess there is that rule about CVEs and websites, right? Wow. I'm going to bend over and stare really close at this little tiny slide. So, selecting your target. There are a lot of choices if you want to find bugs somewhere. This is kind of on the vulnerability discovery side. You can go deep or you can go broad.
And what we mean by that is: you can pick one particular type of vulnerability and go look at every piece of software you can find to see if it's vulnerable, or you can pick one particular piece of software and just drill down until you find something. There is a lot of software out there with more or less low-hanging fruit. And if you want to expand on that a little... I think the most important thing is that you have to be able to see what the bugs are. I don't know if it's on this slide or the next one. So another big point I wanted to make here is: if you do some vuln research and find nothing, it's actually quite useful for people to know that somebody looked, even if you found nothing. So that's one point. And again, low-hanging fruit: a lot of older code is buggy. Complex or overly complex stuff is very interesting to look at, although a lot of times you just get lost in it, just like the developer did. Large attack surfaces like web browsers are always fun to play with; you've got a lot of possibility for things to go wrong there. Software popularity matters. So if you're going to try to become super famous and you want to go find some vuln in something, it's probably better not to pick a random personal website project off of SourceForge or something like that. But on the other hand, if you want to find something in a super popular product like Microsoft Windows 10, it's probably not going to be anywhere near as easy, because that really popular software has already been pounded on and pounded on by many people, by elite researchers and so on. So the lower-hanging fruit, the kind of software that doesn't necessarily have any vulnerability history at all, or that no one has really looked at before, is often an area where you can find some success very quickly. Yeah. I don't know if it... yep.
So one thing I like to do sometimes when I get super stuck is to go and pick on somebody lame. I think this is kind of popular in the VR industry, where we just need that redemption and we feel good about ourselves again. But the problem with that is, you know... a good example is OpenOffice or something. It's pretty easy to fuzz, and it's full of bugs, and nobody really cares about them too much. So you can go find bugs there, but then you deal with the secondary problem of nobody caring too much. Brand new emerging technologies are always a great place to look. Many people in vuln research like to wait until a thing becomes very popular, and therefore, while things are emerging, nobody's really paying attention. I think we can say that about IPv6. There are maybe a handful of IPv6 researchers around, even though that's slowly becoming the norm. Let's see... mobile was definitely guilty of this, because as vendors tried to hurry up and get to market really fast, they didn't invest in security. And what we're hoping is that we don't repeat that mistake with IoT, but we'll see. One suggestion we do have, which would be very useful for the entire community and for contributing to the body of knowledge, is if you have access to software or products that are very difficult for the everyday person to get access to. Say, multimillion-dollar enterprise software, or expensive medical devices, or other kinds of physical devices. Those aren't things that just everybody can go and grab and look at. So not only might you have some good chances of success in finding vulnerabilities in those kinds of products, but not a lot of people have access. Who knows how to do that magnifying glass thing on OS X? Anyone? Nobody? You want coffee or something? All right. Well, I'll just stare at it really small again. So, you got that one.
So, something that I've seen a lot, and that Steve kind of coined a term for here: the pig pile effect. It's pretty interesting, and it's one way to select your target. You see people beating up on something, through advisories getting published, and you're like, well, hey, maybe there's more there. I should go take a look and maybe do some follow-on work. And I'd encourage the community to pile on with this one; it's good to have more people looking. So, tools and techniques. There are these two main ones that are really high level: design review and threat modeling. These are, I think, really important for anyone who's developing software to have as part of the cycle of figuring out how to stay secure, or how to basically stop having alarm bells ringing all the time. Dynamic versus static analysis: it's very important to differentiate depending on what kind of stuff you're going to do. On the malware side, static analysis is a lot more popular; with vulnerability research, dynamic analysis seems to be more popular. But I think the real power is when you have both together. One of my personal bug hunting processes is to start writing a fuzzer and then just let it run while I read the code. And as soon as I learn something more about the code that will help the fuzzer be better, I'll add it to the fuzzer, and I'll just keep doing that back and forth. So, code auditing and some of these other automated tools like static code analyzers: they're great, but a lot of times they have false positives or other issues. So it's just important to be aware of the tradeoffs of all the tools and techniques when you start getting into them. I really think that a tool in this industry is, to a large degree, the embodiment of a technique someone developed. Yep. And I agree with that.
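That read-the-code-while-the-fuzzer-runs loop can be sketched with a minimal mutation fuzzer. This is a toy illustration we've made up, not anything from the talk: the target parser and its planted bug are invented, and real fuzzers (AFL, libFuzzer, and friends) add coverage feedback and far smarter mutation strategies.

```python
import random

def mutate(seed):
    # Dumbest possible mutation: overwrite one random byte of the seed
    # with a random value. Insights gained from reading the code would
    # be folded in here (valid checksums, targeted fields, etc.).
    data = bytearray(seed)
    data[random.randrange(len(data))] = random.randrange(256)
    return bytes(data)

def toy_parser(data):
    # Stand-in target with a planted bug: it chokes on any byte with
    # the high bit set, standing in for a real parsing flaw.
    if any(b >= 0x80 for b in data):
        raise ValueError("malformed input")

def fuzz(target, seed, iterations=200):
    # Throw mutated inputs at the target and collect anything that
    # makes it blow up, along with the input that caused it.
    crashes = []
    for _ in range(iterations):
        case = mutate(seed)
        try:
            target(case)
        except Exception as exc:
            crashes.append((case, exc))
    return crashes
```

The back-and-forth the speaker describes is exactly the loop of improving mutate() every time reading the code teaches you something new about what the target expects.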
And while we have a number of tools and techniques listed here, that doesn't mean you have to know all of them, and be expert in all of them, in order to find any kind of success. This is part of your path. But we do recommend at least investigating and looking into each of these. Everyone kind of has their own favorite techniques that they like to use. This one's you, man. So, as the field becomes a little bit more mature, this topic gets a little more interesting. I'm going to start off with the one that you may be familiar with, and it ties in, obviously, with vulnerability management overall. There are a number of relevant standards that you should familiarize yourself with and utilize wherever you can. One of the main ones is the common identification scheme for vulnerabilities, CVE (Common Vulnerabilities and Exposures). And for those of you who've had certain questions about CVE, especially in the last year or so, with concerns about coverage and what MITRE is doing: while I did leave CVE last year, I'm still at MITRE, and we do have one of my colleagues here who is carrying the torch, as it were, and would love to talk to you, so I want him to stand up. That's Dan. Hey, Dan. Good. We need to talk, buddy. So he will be available, and he would love to talk to you. Not all of you at once, but, you know, a few at a time. Another effort is the Common Weakness Enumeration, CWE. When you have these different vulnerabilities in different products, well, it turns out that programmers make the same mistakes, and many different programmers make the same mistakes. And so CWE is effectively a classification scheme for how programmers wind up making these kinds of mistakes. It's useful in two different ways: one, as sort of a common identifier for characterizing what the mistake is that you found, but it also winds up being very useful as a dictionary, or as something to educate you.
So, for example, CWE covers about 800 different kinds of mistakes that programmers can make. And as much as you think you might know about everything, I guarantee you there are one or two things in there that might surprise you or that you might not have expected. And if you're even just starting out, you get good information from things such as OWASP, but the CWE entries for stuff like SQL injection and cross-site scripting are also pretty mature. The equivalent for attacks is called CAPEC. And then CVSS is a way of consistently applying a risk-related score to a particular vulnerability that you found. It may be your favorite vulnerability. You might be in love with it. You might have worked really hard. But you need the cold, objective, or reasonably objective, eye of CVSS or something like that so that you can communicate its importance effectively. Thank you. Hey. Sorry about that. I was messing with the slides. I was trying to zoom in on this little tiny thing, and it zoomed everything, so that doesn't work. So, disclosure models. Disclosure, disclosure, disclosure. We're not going to go into specific details, but there are a number of different models that you could consider and figure out what works for you. I think, and Josh agrees, that we both suggest using the coordinated disclosure model, which involves really working with the vendor in order to try and reach some result. So that's the first one. But there are other models as well, such as full disclosure: as soon as you find it, you put it out, independent of whether or not the vendor's been given a chance to patch. And then there's also non-disclosure. Some people may choose simply not to disclose their vulnerabilities, or to only provide them, or in some cases sell them, in limited markets. These are different things that you're going to need to consider as you move more into vulnerability research.
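As a concrete example of what CVSS gives you, here is a sketch of the CVSS v3.1 base-score calculation for the scope-unchanged case, using the metric weights from the specification. This is only a slice of the standard (scope-changed scoring, temporal, and environmental metrics are omitted), so treat it as an illustration of how the "reasonably objective" score comes together rather than a reference implementation.

```python
import math

# Metric weights from the CVSS v3.1 specification (scope unchanged).
WEIGHTS = {
    "AV": {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2},   # Attack Vector
    "AC": {"L": 0.77, "H": 0.44},                         # Attack Complexity
    "PR": {"N": 0.85, "L": 0.62, "H": 0.27},              # Privileges Required
    "UI": {"N": 0.85, "R": 0.62},                         # User Interaction
    "CIA": {"H": 0.56, "L": 0.22, "N": 0.0},              # C/I/A impact
}

def roundup(x):
    # CVSS "round up to one decimal place" rule.
    return math.ceil(x * 10) / 10

def base_score(av, ac, pr, ui, c, i, a):
    # Impact sub-score: how much C/I/A is lost.
    iss = 1 - (1 - WEIGHTS["CIA"][c]) * (1 - WEIGHTS["CIA"][i]) * (1 - WEIGHTS["CIA"][a])
    impact = 6.42 * iss
    # Exploitability sub-score: how easy the bug is to reach and trigger.
    exploitability = (8.22 * WEIGHTS["AV"][av] * WEIGHTS["AC"][ac]
                      * WEIGHTS["PR"][pr] * WEIGHTS["UI"][ui])
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))
```

For the classic network-reachable, low-complexity, no-privileges, no-interaction vector with high confidentiality, integrity, and availability impact (AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H), this yields the familiar 9.8.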
You may have any number of different approaches and beliefs about why it's important to do public disclosure. But I think the more we know, the better, for all of us collectively. And finally, there are a couple of different standards, or standards-like documents, that will give you some guidance with respect to coordinated disclosure or equivalent models, which you can follow and which you can point vendors to if they aren't used to this. There are a lot of different standards and a lot of different ways to handle it, but I think the most important one is the International Organization for Standardization document number 29147, ISO/IEC 29147, which Katie Moussouris and others worked on. It's an international standard directed towards vendors, which explains to them how to build up a process for responding to vulnerability reports and for interacting with researchers. So, as a survivor of the disclosure wars, I'm very, very happy to see standards like 29147 come out. And yes, it did take me six months before I could start rattling off that number. If you start to get deeper into building your vulnerability career, so to speak, then you may have different kinds of considerations for building your own disclosure policy. Based on your own experiences and your own beliefs, you want to start evolving certain kinds of considerations for what you're going to do in certain kinds of circumstances. What would you do if you try to contact a vendor and can't find any contact information? What happens if you're working through a process and then suddenly an event occurs? What happens if the vulnerability is released by somebody else as a zero day or something like that? There's a lot of debate about how long to actually give the vendor to fix the vulnerability and push out a patch. Some say 30 days, some say 60 days.
There's 90 days, or however long it takes. These are some of the questions that you need to ask yourself. Yeah. What? Let's... I forgot the point. Oh, I think I was just going to say that sometimes the disclosure process you choose will even vary based on individual vulnerabilities. Some people decide not to disclose things that are not super awesome. All right. So, we've got 10 minutes; we lost a little time to the technical difficulties. Yeah. So, let's... we can move on, I think. You want to skip that one? Yep. All right. So, let's talk a little bit about advisory structure and contents. I'm so not going to read these bullets to you, but structured content is very useful. Steve and I have probably read a lot of the same advisories, but collectively he's probably read thousands and thousands of them. I'd agree with that. And some of them were really horrible. There's this really offensive group, and when I say offensive, I mean when we read it, we get offended. I don't mean they use bad words or anything. Not to name names, but yes. I'm not naming any names. If anybody reads advisories, they'll figure it out pretty soon. So, these are just some fields and some guidance that we have for making advisories. And of course, there's some more here. One of the big ones is proof-of-concept code. I think it's a really important thing to prove your case. When you disclose a vulnerability to a vendor, a lot of times you get pushback, like there's not even a real issue here. And of course, on the Metasploit side of things, it is a little bit hard to argue with a shell, but you don't necessarily have to give somebody a shell as your proof of concept. It could be whatever you choose. It could be a sequence of steps that they can follow to verify it.
It could be proof-of-concept code at any level of maturity, even just a crashing input. But do remember: the more detailed the information you can learn, extract, and provide, the easier it's going to be for the vendor to act on it. One of the reasons we list these particular advisory fields is that a lot of researchers, especially beginners, don't necessarily know what information to provide. Or you submit a report to a bug bounty, and it comes back saying you haven't provided enough information or you're not communicating clearly. So those fields listed on the slides, which will be in an updated version of this deck, we encourage you to look at seriously and to capture all of that information.

There are also some pros and cons we came up with; this is basically just Steve and me ranting about all the stuff we didn't like in various advisories. We want people to do simple stuff. Plain text is easy, very portable, and a very low attack surface. As opposed to, say, PDF? Yeah, which is basically a web browser. And some people like to do videos, which we think is kind of cool, but a couple of suggestions: respect the viewer. Don't make your videos too long, but don't go too quickly either. There are a few other considerations up there as well. So be mindful even of the format in which your advisory goes out.

This one's for me. Okay, so what to expect from vendors. I already mentioned some of this. You can expect total cluelessness. You can expect, in some cases, people threatening you with their legal teams. I don't know why they do this; I think they're confused. But these are just a bunch of possibilities.
Most good companies these days, especially the bigger ones, are very open to working with you, and amazingly, sometimes even a new vendor that's never dealt with these problems comes to understand it quite easily and is very good to work with. But keep in mind that it's not one-size-fits-all; every disclosure winds up being its own unique snowflake. So you need to be patient, as mentioned before, and also flexible. Keep an open mind. On that first bullet: one time at iDefense, we were trying to report a vulnerability, and I tried emails, then phone calls, and finally I got a response from their security guy when I faxed him the advisory. What year was that? Probably 2007. They were like, "oh, this came out of the fax machine; let's call them."

Okay. So where do we disclose publicly? What we like to see is people disclosing in places that are archived forever, which basically means mailing lists. The other things on the list, like exploit databases and vulnerability databases, are great sites, and we hope they'll live forever, but in some cases in the past they have not. Ultimately, those sites generally pull from the more public, permanently archived sources anyway. So this is just our preference. If you want to put your stuff on your blog to gain readership, that's great, but maybe also throw a note up on one of these lists to drive traffic to your blog as well.

So, common mistakes to avoid. Number one: don't test other people's stuff unless they let you. I think there was one case with the Facebook bug bounty program where a guy basically owned the hell out of them and then tried to claim a bounty for it.
And I don't think that worked out well for him. On this slide and the next one there are a lot of common mistakes that many researchers make, including ourselves, but we aren't going to go into details because we're running low on time. If you want to hear some war stories, just hit us up afterwards. We have some real meat at the tail end of this presentation.

Okay. So this is one of the main ones here. This is our own model and our first crack at it; as far as we know, nobody else has really started on something like this. We're trying to outline the different stages of growth you might encounter in your career, or in your technical abilities, when you're doing vulnerability research. When you're just starting out, you're at more or less the newbie stage: you might know just one crude technique that you apply against easy software, and you make a lot of mistakes. Once you become more familiar with things, you may reach what we call the workhorse stage, where you know a number of basic vulnerability types, you can generally find multiple issues, and you start to get the hang of certain processes. Then, as you move toward subject-matter expert, you're watching for the newest and latest techniques other people develop, or extending techniques that have already been reported, and you're treated as reliable by the people around you, people like me and so on. At some point, you start to have a clear sense of what your own disclosure policy is.
And you can be relied on to find a lot of things and to write a really solid, high-quality report. Finally, there's the elite stage. Not everyone needs to reach it, and not everyone wants to, and that's perfectly cool, because there are way more vulnerabilities out there than there are researchers to handle them. But what I think of as an elite researcher is someone who discovers or invents new vulnerability classes, develops entirely new techniques, gives conference talks, and so on; someone who really pushes the industry forward. And that can take a number of years. You're not going to read a book or a couple of blog posts and be elite tomorrow, or even next year. It takes time, for sure.

I can't read this one again; it's got my name on it anyway, so you're free. I think we've only got a minute or so left. One thing about the notion of growth: there's a book by Malcolm Gladwell called Outliers, which basically says it takes around 10,000 hours of focused, effective practice to reach a level of expertise. You can do the math, and that number may be questionable, but it's something to keep in mind. There are a couple of different ways you can progress further if you want. I think we have about three slides left. Do you want to do this one, or do you want me to? It's got your name on it. I'll do it if you want. Getting tired? No. We're getting an X from the goons: "Get out of here, guys, you talk too much."

So we just wanted to leave you on this note about feelings and fails. We mentioned we're not perfect. There's this thing I call the human condition, which basically means you always make mistakes and have to deal with what your body tells you. And feelings are definitely a part of that.
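Going back to that 10,000-hour figure for a second, here's the math as a quick sketch. The weekly paces are our own illustrative assumptions, not anything Gladwell claims:

```python
# Back-of-the-envelope on the 10,000-hour figure. The weekly paces
# and the 50-week year are illustrative assumptions only.
def years_to_expertise(hours_per_week, total_hours=10_000, weeks_per_year=50):
    return total_hours / (hours_per_week * weeks_per_year)

for pace in (10, 20, 40):
    print(f"{pace:>2} h/week -> {years_to_expertise(pace):.0f} years")
# 10 h/week -> 20 years, 20 h/week -> 10 years, 40 h/week -> 5 years
```

In other words, even full-time focused practice is a multi-year road, which is why patience matters.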
So remember, feelings are part of that, and feelings are okay. There will be times when you're doing some deep research and you get very discouraged. You might want to find something easy to do for a while, come at the problem a different way, or maybe just go to the beach. Here's another one: you feel like you've really got to keep going, and you want to work really hard because you're hooked on something, but you've been at it for 17 hours. It might be a good time to sleep. So, yeah, feelings are okay, and failures are okay, too. Thank you. Thank you, everyone. We're available to talk to anybody afterwards down at Knox; we're going to go down the escalator somewhere there.