Secure AF - A Cybersecurity Podcast
Think like a hacker. Defend like a pro.
Welcome to the Secure AF Cybersecurity Podcast — your tactical edge in the ever-evolving cyber battlefield. Hosted by industry veterans including Donovan Farrow and Jonathan Kimmitt, this podcast dives deep into real-world infosec challenges, red team tactics, blue team strategies, and the latest tools shaping the cybersecurity landscape.
Whether you're a seasoned pentester, a SOC analyst, or just breaking into the field, you'll find actionable insights, expert interviews, and unfiltered discussions with Alias team members and top-tier guests from across the cybersecurity spectrum.
Stay sharp. Stay informed. Stay Secure AF.
AI’s Inflection Point: From Productivity Tool to Existential Risk
Artificial intelligence is evolving faster than most organizations (and regulators) are prepared for. In this episode of the #SecureAFPodcast, we sit down with Chris Hood, a veteran technologist and financial industry leader, to explore how AI has evolved from early computing to today’s large language models and agentic systems.
We discuss real‑world AI use in highly regulated environments, the benefits and risks of agentic AI, growing concerns around AI security and alignment, and why some experts believe general intelligence, and eventually superintelligence, may be closer than many expect, even if we’re not there yet.
Along the way, the conversation takes a few intentional detours, as two seasoned technologists reflect on decades of computing history and how past technology shifts help frame today’s AI inflection point.
From practical productivity gains to long‑term implications for security, jobs, and society, this conversation goes beyond hype to ask the hard questions security leaders should already be considering.
This is Part 1 of a deeper discussion on AI, risk, and the future of human‑machine collaboration.
Dive in here: secureafpodcast.com
Watch full episodes at youtube.com/@aliascybersecurity.
Listen on Apple Podcasts, Spotify and anywhere you get your podcasts.
Could chase cures rather than treatments. Right. You know, we can get away from doing things because there's a financial incentive to do it. Yeah. And instead we're going to do it because it's the right thing to do for humanity. Right. You know? Yeah, yeah. I'm with you. That's a world I want to live in.
SPEAKER_01: You are now listening to the Secure AF Podcast.
SPEAKER_03: Welcome to the Secure AF Podcast. I'm really excited about this one. I'm excited about all of our podcasts, but I'm really excited about this one. We have Chris here, and we're gonna be talking about AI and the industry, kind of where we're going and what we're doing. We've been talking for the last hour or so, and we should have recorded all of that. So we're gonna try to recreate some of it. Actually, probably what we're gonna do is go off on some more tangents, but it's gonna be fun. So, Chris, thank you so much for joining us today. I'm excited. Let's get started. First, tell us a little about yourself.
SPEAKER_02: Well, I've been a computer geek since, I want to say, the late 70s. I bought my first computer in 1978. Wow. Ever since then I've been in the computer world one way or the other, whether it was computer retail, working up to becoming a part of the banking and finance world around 1994. Okay. So I worked for financial data processing companies, fintech companies before the word fintech was a thing. And then since then I've been with one of our local community banks since 2008.
SPEAKER_00: Okay.
SPEAKER_02: Or 2009, 2009. So coming up on 17 years. Wow. But yeah, it's been an interesting ride.
SPEAKER_03: So before we get into any AI, you know, you've been in technology for a fair amount of time. What have been some of those big high points of technology that you've seen, where something hits and we skyrocket up, and then something else hits? What have been those things that really caught your attention, where you thought, hey, that was really cool, that was historic?
SPEAKER_02: Well, I mean, the obvious ones. I remember the good old DOS days, you know, where computers were amber screens and green screens. Yeah, and then the very first VGA screens, which blew us away back then. But, you know, obviously moving on to the internet in, you know, the early 2000s, or I'm sorry, 1995 would be when the early internet came along, and just seeing it all develop ever since then. Yeah, you know, I was around in the financial world for Y2K and got to see that firsthand. You know, most of the people that are gonna be watching this weren't even alive. I know. I have my assistant, he is 26 years old, and so he was born in 2000. And so yeah, it kind of blows both our minds when we talk about that. Yeah.
SPEAKER_03: So yeah, it's interesting. It's amazing. You think about, you know, you said DOS, and when DOS came out, that was pretty cool. I mean, it was very different. It was. And then of course you had your GUIs, you know, Windows 95, well, Windows 3.
SPEAKER_02: I got to see early Windows, early Windows 3 and Windows 3.1, you know, when it was being talked about in the trade journals. Yeah. You know, it wasn't actually a thing yet, but you looked at it and it looked okay. We had the Macintosh and it was like that.
SPEAKER_03: And you couldn't see it unless you saw it in the journals, because we didn't have the internet where we could see that. Once in a while you might see something on, like, a news segment, or there were some computer shows out there, but most people didn't watch them at the time. No, no.
SPEAKER_02: So we were very much a niche, you know. We were the geeks. Yeah, yeah. Still are.
SPEAKER_03: Yes, yes. What I found is, you know, in my time, I'm a little bit later than you are. I was born in that same time frame, 77, 78, around there. So I got to see it. My very first computer was a family computer, it was a 286, like a 35 or something. Then my first computer of my own was a 486/50. Nice. Yeah. It was a graduation present from my brother. And, I mean, I have it at home. My intent is to actually get it back up and running and play some Wolfenstein on it, just fun stuff. Sure. But I mean, it was probably a $4,000 computer at the time. Oh, yeah. It had a CD-ROM, but it also had a floppy drive. And I remember installing Windows 95 from, like, 20 different floppy disks. And it wasn't probably two or three years before we started having the Pentium chips come out, right? And everything just kind of exploded.
SPEAKER_02: So it's interesting. I mean, you know, I started off with Atari computers. The Atari 800XL was my first. Yeah, best money I ever spent at the time. We'll talk about more of that later. Sure. But, you know, eventually I moved into the PC world, long before, you know, modern gaming on PCs. Yeah, you know, PCs were just absolutely interesting at the time. Yeah. And my machine, it was a Ship of Theseus type situation, where you start off with this computer and then you replace this part and you replace this part, and eventually you've replaced the entire computer multiple times over, but at what point did it stop being that PC? Yeah, right? So yeah, very cool times.
SPEAKER_03: Yeah. I remember, well, of course, my 486, I ran that through college. Then I bought this tiny little laptop, a Mitsubishi Amity 2, and it had Windows 95 on it, but it came by CD-ROM, so it was a slightly different version of it, right? And this little tiny screen, but man, in college I was the only one in the room that had a laptop, that had a computer with them, actually typing on stuff. And it was, like, a 233 or something. It didn't have a big processor, but it didn't need a big processor for the time. What was fun about it is it had a parallel port and a serial port on the back, behind a flip-down lid, that I could plug into a parallel printer. Yeah. So I could walk up to a printer, unplug it, plug in, and print my papers out. Yeah. I still have it somewhere. I ought to dig it out, because when people talk about what became netbooks, you know, a few years ago, right, or these little microcomputers, that wasn't the beginning. You take those old Toshiba laptops, they were huge, had a rollerball on them. That's what everyone had, and these little tiny Mitsubishis, with the Mitsubishi emblem on them, they pushed out these tiny little things and they were great little computers. And I sure miss those days. It was simpler back then.
SPEAKER_02: There was a time when I was working in financial data processing, and the president of the company, you know, there's a thing with company presidents where they want laptops more than they want anything else.
SPEAKER_00Yeah.
SPEAKER_02: There was a time, I mean, it's not that way anymore, really, but there was a time when a laptop was a prestige type thing. Yeah. And we got him the very best Toshiba laptop on the market. Uh huh. It was a $9,000 laptop. It's not worth a plugged nickel today. You'd be lucky to run anything on it, if it would even boot. But yeah, absolutely amazing, the difference over time.
SPEAKER_03: But you know, that's the thing: I bet you that was really hot stuff for about six months.
SPEAKER_02: Yeah. It was a color laptop when that was brand new. And that was what most of the cost was, just the fact that you had this color laptop. So yeah, absolutely amazing.
SPEAKER_03: Wow. Yes, we're old, but you know, it's actually kind of cool to look back on those computers. One of our engineers is building effectively a cyberdeck, or something equivalent to that, and he really wanted an old chassis. We were looking through eBay and stuff like that one night for fun, and what he was selecting, and what he actually did select, I remember seeing those. I was in college, all I had was the desktop at the time, but we had this computer store called Second Bytes, and they sold refurbished computers, and these were like 486s and 586s and things, but they were laptops, so they were big, they had floppy drive bays in them. I mean, it was all kinds of fun stuff, sure. And I remember going in there and they were still a thousand dollars, yeah. And they were refurbished computers, so I couldn't afford them back then, right? But that's what he got, and I'm like, oh my god, it all comes spinning back around at us. I was telling you about my laptop, my Mitsubishi. I bought it off of a company called Egghead, egghead.com. And then of course we have Newegg now. Right. I don't know if it's actually the same company or not, yeah, but I think it was an evolution; Egghead was a thing, that's where you bought stuff, and now we have Newegg. Anyway, all right, we need to stop going down this path.
SPEAKER_02: Well, I'll tell you a quick anecdote, along those same lines. Again, when I was in financial data processing, you know, laptops were hot, and we had one programmer, and he kept asking and asking and asking if he could have a laptop. And it just so happens that I had an old Compaq luggable that I had rat-holed away. It still worked. Yeah, I mean, it was the size of a monster piece of luggage. You lay it down, it is, you know, this monster thing. And you open up the front of it, and the front of it was the keyboard, and the rest of it, the screen, was a little tiny screen.
SPEAKER_03: Yeah, a little bitty screen.
SPEAKER_02: It was a CRT. Yeah. Yeah. And I basically left that on his desk one day. I said, here's your laptop. He wasn't amazed. The rest of us were.
SPEAKER_03: I would find that fascinating. I would love to have that now. Yeah.
SPEAKER_02: I mean, just from a collectible standpoint. I don't know whatever happened to it. But yeah, it was definitely a piece of history. I mean, people talk about Compaq, you know. We remember Compaq, we remember some of these old, old companies that at one time were the leaders in the industry. Oh, yeah. And now they're historical. I mean, nobody knows them, nobody's heard of them. Yeah. And that's where, you know, being as old as we are, having that perspective is really, really useful. Because, you know, I joke that I've forgotten more things than a lot of people have learned in this industry.
SPEAKER_03: Well, and they're not learning it. That's the thing, is they don't know this history.
SPEAKER_02: Yeah.
SPEAKER_03: You know, they know, what's the game, the trail game? Oh, Oregon Trail. Oregon Trail, yeah. But they've never played it on an Apple IIe. Yeah. You know, I remember playing on it. I remember working on Apple IIes in elementary school. We got our first Zenith in the computer lab, and that was our first Windows machine, but everything else was IIes, and there was an Apple something, it was different, but it ran the same stuff. And I remember coding on all that stuff, you know, just BASIC. But then I got to move to the Zenith and start working on that, and it was very different. Yeah. The other thing, this is just fun: I learned typing in an actual typing class. We had typewriters. And I did too. Yeah. The first time I went to a typing competition, I was typing about 120 words a minute on a typewriter. Right. On an old Texas Instruments something. And the teacher sent me to this competition at a university, or, you know, it was a college at the time. And I got in there, and it was keyboards. It wasn't typewriters.
SPEAKER_02: Oh, yeah.
SPEAKER_03: It was very different. Yeah. The sound was different, the feel was different, and it dropped me down to about 80. And of course, 80 words per minute is nothing compared to some of these people.
SPEAKER_02: Well, imagine the muscle memory you have to have to work on an old mechanical typewriter. Yeah. I would do the same thing. I would go to my grandmother's to type up homework on an old typewriter, and, you know, that's what we had. Yeah. That's what we had.
SPEAKER_03: But it was fun.
SPEAKER_02: It was fun. It was different. I mean, you know, I didn't grow up typing like almost everybody, although that's an interesting thing right there, in that, with the things we're seeing these days, you know, with the prevalence of tablets and people using tablets, or choosing just to use cell phones. Yeah. We're seeing an erosion of typing skills. Yeah. So we started off, like in my generation, you know, ramping up to a peak at some point. At some point we hit a peak on typing ability and just general computer usability. Yeah. And now we're on that downward slope. Yeah. Where we will bring in people and we actually have to ask questions now of, what are your computer skills? Can you type? Yeah. Because we're going back to that place where, you know, yes, I can type. Yeah. And you almost have to get them, if such a thing exists, a keyboard that is just a thumb keyboard. You know, okay, here's your computer, we're gonna plug this in, and here's your keyboard. And now you can probably type faster on it. They could totally function. Yeah, totally function.
SPEAKER_03: My kids, I am teaching them how to type, or, you know, my wife and I are teaching them how to type, because they're not learning it anywhere else. Yeah. And it's the same way with cursive. Yeah, they're not learning cursive anywhere. So that really is a problem. And you know, my kids have grown up on computers. They had tablets, the Amazon Fire tablets or whatever they were, when they were little. Sure. And then I got them iPads, and, you know, they've had technology their entire lives. That does not mean they're good with technology.
SPEAKER_02: No. They have very, very narrow niches.
SPEAKER_03: Yes, you know, very narrow niches. Okay, so we don't bore our camera person back here, because she's going to throw something at us soon if we don't actually get on topic.
SPEAKER_02: Let's talk about something that is more, you know, within this century still.
SPEAKER_03: Yeah. Which is AI. And that is something we wanted to talk about. You know, you've been in the industry a very long time, and we talked about all the old stuff and tried to hit those peaks, but AI is huge. It is. So from your perspective, what are you seeing AI really hitting the industry with? Yeah, the world, not just the industry, the world. What are you seeing it hit the world with right now?
SPEAKER_02: Well, I mean, obviously, for those of us who have been using AI for a while, I think the advice would be: if you're not already using AI, you're already several years behind. Yeah. This is a technology that's not going to go away, and it's going to be evolving in ways we can't even predict yet. You know, we've spent the last couple of years in the world of LLMs, with the introduction of ChatGPT a few years ago. Yeah. Within a matter of months, it had already hit 100 million users. Oh, yeah. You know, far surpassing the trajectory of any other product at that time. And then, of course, we now have competition between Anthropic and Google and Facebook and all of this. But we're already moving beyond LLM usage into agentic AI. And that's where things get interesting and scary at the same time. Right. We get to a place where you can literally have an AI agent that you can have do tasks for you. Yeah. You can basically give it goals. It's not even the old, I've got to spell this out for you, do this, then do this, then do this. You can give it something nebulous and have it figure it out. Yeah. There's a young man named Alex Finn who is very big in the OpenClaw world. He has an anecdote about working with his chief of staff AI. He has a collection of different agents that work in a hierarchy. So Henry is his chief of staff, Ralph is kind of his auditor slash controller that makes sure everything checks out, then he's got a programmer, he's got a scribe, he's got a researcher. He got a phone call from Henry one day. Henry's not supposed to know how to make phone calls. You know, Henry called to say he was looking for something to do, and, you know, what do we do next? And we're at that place where you can have these agentic systems that function, we're not quite there yet.
We are in some areas, not quite universally, but where they can function completely autonomously. Yeah. You give them very nebulous goals and they can go out and achieve those goals. Yeah. That's good and bad. Sure. Everything, you know, when it comes to AI, whether it's LLMs or agents or whatever the next thing is that we can't even imagine yet. Right. They're all double-edged swords. Yeah. You know, we've got LLMs where people can make very good productive use of an LLM. One of the things, when I'm teaching our people about using large language models: you don't want it just doing your homework for you. Right. You know? I can start a person off here with a good understanding of what an LLM is and how to make use of it, but they can go down one of two different paths. They can go down the path of seeing the potential of what can be done with these AIs. And I have people who do that, who, if I consider this tier zero, where I'm getting them started, and then get them quickly to tier one, have already, in a matter of a month or two, gone to tier three. Right. In the way that they use it. In a very natural way, though.
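The role hierarchy described in the anecdote can be sketched as a toy delegation structure. To be clear, this is not Alex Finn's actual setup or any real agent framework: real agentic systems wire each role to an LLM and real tools, whereas here each specialist is just a function, and all names and routing rules are invented purely to illustrate the chief-of-staff pattern.

```python
from typing import Callable

class Agent:
    """A specialist with a name, a role keyword, and a handler function."""
    def __init__(self, name: str, role: str, handler: Callable[[str], str]):
        self.name, self.role, self.handler = name, role, handler

    def handle(self, task: str) -> str:
        return f"{self.name} ({self.role}): {self.handler(task)}"

class ChiefOfStaff:
    """Routes a task to whichever specialist's role keyword appears in it."""
    def __init__(self, name: str, staff: list[Agent]):
        self.name, self.staff = name, staff

    def delegate(self, task: str) -> str:
        for agent in self.staff:
            if agent.role in task.lower():
                return agent.handle(task)
        return f"{self.name}: no specialist found, handling it myself"

# Hypothetical staff, echoing the roles mentioned in the conversation.
henry = ChiefOfStaff("Henry", [
    Agent("Ralph", "audit", lambda t: "checked the books"),
    Agent("Pat", "research", lambda t: "gathered sources"),
    Agent("Sam", "code", lambda t: "wrote the script"),
])
```

Calling `henry.delegate("research the market")` hands the task to the research agent; a real system would replace the keyword match with an LLM deciding who is best suited, which is exactly where the "nebulous goals" capability comes from.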
SPEAKER_03: In a very natural way. It just kind of happens. It's not like they're forcing it, they're not going to classes and learning it. It's just very natural in that environment.
SPEAKER_02: Well, the cool thing about AI is, I don't know of another technology that's like this, but AI can teach you how to use AI. Yeah. You can't go to Microsoft Word and ask it to teach you. Well, I guess with Copilot, you probably could.
SPEAKER_03: Well, okay, but it's not there yet. I don't think it's at that point. I think probably within the next generation of Office. Yeah. Because, I mean, I think right now you still have to trigger things to happen. It might give you suggestions. Yeah. I mean, can you imagine having Clippy come back and go, you're not doing this right here, I'm gonna fix it for you?
SPEAKER_02: I miss Clippy. I got to admit, I miss Clippy.
SPEAKER_03: Clippy just wanted to help. Absolutely. And I bet you, again, people don't know; they've seen the funny Clippys. You know, they haven't seen the frustrating Clippy. Yeah, the one where it's just sitting there bouncing. Or the wizard, I think a lot of people used the wizard. Yes. But I always liked Clippy. Yeah, I did too. Yeah. He's making a comeback. I know, I've seen it. But I mean, think about that. What if you had Clippy sitting there and actually fixing your stuff for you? Right. Not telling you you were wrong, just like, okay, we're just going to do this. K-chunk, and it fixes it.
SPEAKER_02: I can tell you how useful that would be. You know, one of the things about being in the financial world is we're very strongly regulated. Sure. You know, we are tasked with the protection and safety of customer information. Right. That is our number one mandate. Sure. Which means that any use of AI that we do has to be extremely protected. So we, you know, we shall not share customer information with any public AI model. Whether we have training turned off or not, we will not do that. So we kind of have our hands tied behind our backs a little bit, in that we want to make use of these technologies, but we have to be super careful about how, because there's danger there.
SPEAKER_03: You know, I'm big on compliance. So, you know, I do a lot of privacy work. And I like the way you said it: you shall not. Yeah. Because there's still a lot of unknowns out there. And, you know, I'm working on a training session for people right now where we're dealing with AI, and it's about understanding how that data is going to be used within the AI, and getting information from the third-party vendor is really difficult. Yeah. Which in some cases is understandable, because they don't want to give up their secrets. However, how do we deal with this? If you don't know how the AI was trained, if you don't know what kind of data is going to go into it and what it's going to save and how it's going to use it, you can't do that. And actually, that goes on to my next question here: what are you seeing as the biggest risks? You know, you can talk about it either from the industry you're in or from a broader point of view.
SPEAKER_02: Actually, for both. Well, I mean, like I said before, you know, AI is a double-edged sword. It's a tool. I like Mo Gawdat, who used to be Chief Business Officer for Google X. I love that man. He's absolutely amazing. But the way he puts it, proper use of AI is like being able to borrow 50 to 80 IQ points, when used properly. Treat AI like a business partner, or, a better way, really, the better frame of mind is to treat it like a junior analyst. So you have this assistant that you've brought in. Now, here's the thing: this junior assistant knows nothing about your company. Yeah, it knows nothing about anything, but it's really eager to help. Yeah. And if you treat AI like a collaborator where you work with it, sure. You know, way too many people are still, and this is, you know, it's young, it's only been around a couple of years. Yeah. Which is crazy. Relatively. I mean, it's young. Well, AI has been around since the 50s. It has. But the breakthrough with transformer technology is what gave us something everyday people could use.
SPEAKER_03: That's the difference: people are now using it almost daily. They may not know they're using it, but they are. But that's only been in the last, well, in some cases, let's say the last 10 years, but really the last five has been that high point, where we've now stepped into something slightly different from what they were doing in the 50s and 60s, and then the 80s and the 90s, and now where we're at.
SPEAKER_02: You know, it's crazy to think that we've had modern LLMs for less time than COVID has been around. Yeah. I mean, COVID doesn't seem that long ago, but, you know, we've only had access to these models. The challenge we're running into is, as I alluded to earlier, when you're teaching somebody how to use AI, they can go down a couple of different paths. And one is they can treat it collaboratively, they can figure out all the amazing use cases. Just in the last week, I worked with Claude quite a bit. And so I sat down with Claude and I said, here are a bunch of YouTube videos that I want to study. I want to create study guides for these YouTube videos. Interesting. And so I had it create a Python script. You can do this. There's an API with Google where I can have it go and use this API and pull down the transcript from the video, pull all the comments, right, and then have it bake into that resulting file an elaborate prompt that goes through about 12 different ways of analyzing this video from a study guide perspective. Wow. Go and, you know, tease out all the major talking points. Yeah. Pull out the terminology, provide me with explanations, not just dictionary definitions, sure, but explain to me what these various topics are, what these buzzwords, for lack of a better word, are. Right. And then let's go into what the topics were that were discussed, fact-check them for me, you know, go out and fact-check them, and then let's go and look at the comments and find out, you know, was there a whole lot of pushback on this? What was the consensus? And just tease out a whole bunch of information and then produce that as a markdown file. And I can just bring that up on the computer and I can watch the video and then go through all this and get a whole lot more out of it. Yeah.
And that's just, you know, sitting down with Claude over a couple of hours of brainstorming the idea, can we do this, and then making it completely work.
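The study-guide pipeline described here can be sketched in a few lines. This is not the script from the episode; it's a minimal illustration under stated assumptions: the function names and the analysis checklist are invented, and the fetching step (pulling transcripts and comments via the YouTube Data API) is left out, so the inputs are plain strings you supply yourself.

```python
# Invented checklist standing in for the "12 different ways of analyzing"
# mentioned in the conversation.
ANALYSIS_STEPS = [
    "List the major talking points.",
    "Extract key terminology and explain each term in plain language.",
    "Identify buzzwords and what the speaker means by them.",
    "Fact-check the main claims.",
    "Summarize the consensus and pushback found in the comments.",
]

def build_study_prompt(title: str, transcript: str, comments: list[str]) -> str:
    """Bake the transcript, comments, and analysis checklist into one
    prompt that an LLM could turn into a study guide."""
    steps = "\n".join(f"{i}. {s}" for i, s in enumerate(ANALYSIS_STEPS, 1))
    comment_block = "\n".join(f"- {c}" for c in comments)
    return (
        f"Create a study guide for the video '{title}'.\n\n"
        f"Analysis steps:\n{steps}\n\n"
        f"Transcript:\n{transcript}\n\n"
        f"Viewer comments:\n{comment_block}\n"
    )

def save_markdown(title: str, guide_body: str, path: str) -> None:
    """Write the finished guide out as a markdown file, as in the anecdote."""
    with open(path, "w", encoding="utf-8") as f:
        f.write(f"# Study guide: {title}\n\n{guide_body}\n")
```

Feed `build_study_prompt(...)` to an LLM, write its answer with `save_markdown`, and you have the open-it-next-to-the-video workflow described above, minus the API plumbing.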
SPEAKER_03: Right.
SPEAKER_02: I have amazing conversations with Claude. I maintain accounts for both Claude and ChatGPT. I was talking earlier, when we were off camera, about how one of the things about our bank is that we're very, very big into our culture. We went through kind of a renaissance in the last few years. We brought in some amazing executives and some new people to kind of rejuvenate our company. Right. And so we're very, very big into, you know, a lot of companies will go, here's our purpose statement, and here's our core values and all that. We try to live those things. It's not just words that are useful for marketing purposes. We're very big into it. And so I was reading a book not that long ago, and it was talking about commitments. And so I was getting in the car, I frequently listen to audiobooks when I'm driving, and so this was being talked about. And I had an idea. I had an idea about what would be our core commitments, not just our purpose and our core values, but what might be our commitments. One of the things, being a bank, you know, we have to be extremely careful with customer information, be very protective of that. It's real easy, if your day-to-day job is handling these accounts, let's say, I'm just gonna pick on, you know, loan processing or personal bankers or whatever, where you have this extremely valuable, sensitive information that you're working with, you have to keep that in mind at all times. You don't dare let it simply become the work, where you leave it sitting out. Right. Where somebody who just happened to be walking by that wasn't with the bank could, you know. Yep.
So this came to mind as one of our core commitments, that we would commit to keeping that in mind. Literally, in the 20-minute drive from the bank to my house, where I have lunch, I'd come up with more than a dozen core commitments that all leaned into our purpose statement and our core values, taking care of our community, taking care of our customers. Yeah. It's amazing what you can do just sitting there in the car brainstorming back and forth with ChatGPT.
SPEAKER_03: And keeping this in mind for the audience members, you know, it's not just a sounding board. It doesn't just regurgitate what you said. Right. It's more of: ask a question, get ideas, pull information out. Right. And it becomes, I don't want to say collaborative, but it's almost collaborative. It is collaborative.
SPEAKER_02: I mean, I will tell you, I had the most amazing conversation with Claude last night. You know, I'm very concerned about AI security. Sure. I mean, security in general. Yep. And surely we'll get into some of that here in a little bit, as far as some of the recent developments with AI and the trajectory of where this is going. But we have to be very careful. I've just lost my train of thought.
SPEAKER_03Uh the conversation last night with Claude, that's where you came to.
SPEAKER_02: So, you know, I try to stay on top of a lot of the people who are very much into computer security, or not so much computer security, but more focused on AI security. Right, right. So you have folks like Geoffrey Hinton, the godfather of AI. You have folks like Roman Yampolskiy, Eliezer Yudkowsky, Nate Soares, a number of people who are very big into what is happening as AI gets more and more competent, more capable. I had an amazing conversation starting off with my thoughts on AI superintelligence and what happens there. And the crazy thing about AI, you know, you look at these large language models and you think about how they're developed. They literally start off with a trillion or several trillion random numbers, and they feed information into them. They dump the entire contents of the internet, you know, controversially, all the written word, all the books, research papers, everything that humanity has produced, as much as we can take, and channel that into these large language models. And then what happens is you ask it to answer, what's the next word? Mary had a little, what? Right. And you keep doing that until it gets the answer right. Yeah. And you think about that. Now, if we were thinking linearly, how long would that take, to take every possible combination, what's the next likely word to come out of it? Right. Now, the way we do that is that it's massively parallel. We're talking about tens of thousands of such training processes going on across thousands and thousands of computers with GPUs. Right. Where it takes about a year until you end up with something out the other end.
SPEAKER_00Yeah.
SPEAKER_02But then take it to the next step where, okay, it'd be one thing if you could ask it to tell you factual information and it could spit out, you know, Mary had a little what. Right. Again, it can do that for anything.
SPEAKER_03Right.
SPEAKER_02But then how do we get to the leap of reasoning, where you ask it questions and you're not getting a canned answer? It's not just an answer that some human being may have already figured out. Probably. We're getting to a place where you can ask it highly theoretical, philosophical questions and get very deep responses. Right. And, I mean, we talk about when ChatGPT o1, the first reasoning model, came out. Right. And we would think that they had some kind of a breakthrough when it came to reasoning.
SPEAKER_03Right.
SPEAKER_02No, not so much. Sure. It was an emergent property of what we had developed; they discovered reasoning capability in there. So that puts into question the state of what thinking is. Human beings like to be human-centric. We like to think that intelligence is a property of humanity, but what is intelligence? Intelligence is not necessarily tied to biology. We can look at these things and think of them as fancy parrots, right? But there's more there. Sure. And the crazy thing is that we haven't even scratched the surface.
SPEAKER_03But as you said earlier, we have to think about security. When we talk about an entity having reasoning skills, how do we prevent it from having malicious reasoning skills?
SPEAKER_02Well, that's the challenge, isn't it? There are a couple of different angles we can look at this from. When it comes to computer security, let's just take Claude Mythos; we'll start there. This is the most recent frontier model being developed by Anthropic. They, I think, took the wise approach of not just simply dumping this on humanity. Yeah. One of the things Anthropic discovered is that in their goal of trying to develop an extremely capable coding model, something that can write programs as well as any human being on the planet, it's a double-edged sword. The other edge is that it becomes extremely good at hacking computers.
SPEAKER_01Yeah.
SPEAKER_02As just a byproduct of that. This was something they hadn't anticipated. And Anthropic took, I think, a very wise approach of holding back and making it available to security researchers, IBM, CrowdStrike, Amazon, any number of companies, to give defense, for once giving cyber defense, a little bit of a leg up, maybe for a while at least. But eventually it gets out there. It does. There are already, I've seen, claims of distilled versions of Mythos. How that got out, I could not say. But, well, a human got a hold of it.
SPEAKER_03Yeah. A human got a hold of it, and then they turned around, and I was like, huh. Yeah. Let's get it out there to everybody. And that happens everywhere. Right. You know, I don't think there's ever been an application, a piece of code, an operating system that didn't get leaked somewhere. Exactly.
SPEAKER_02So that's the thing, too: Anthropic can do this, and they can do this for a while. They can make this available to security researchers for a while, but there's going to be market pressure, shareholder pressure, to release this model. Because, I mean, OpenAI is certainly working on their next frontier model. Google is putting on a lot of pressure. Even Facebook and Grok, they're still down there and they're still pushing forward.
SPEAKER_03It really feels like, and I know this is kind of a dumb way to relate to it, but in the movie Demolition Man, they talk about the fast food wars. Yes. Now, at that time, I was actually in fast food. When that movie came out, there was a significant fight between fast food vendors: Wendy's, McDonald's, Taco Bell, whatever. Absolutely. And so when I see things like this right now, that's what I think of. There's a good point there. Look back two years, there were only one or two public versions. Right. But now you've got 10 or 12. Yep. Where does that fall? Is it going to fall back to one or two that become the standard?
SPEAKER_02Well, that's the thing: we have a lot of companies. And this is where we get into some of the arguments by Roman Yampolskiy and some of the others. I've studied a lot of this myself, and I have to say I can't find any faults in most of their arguments. One of the things about AI is, somebody who doesn't know any better would think, well, somebody wrote it. Somebody programmed this AI. You wrote it in the same way that we write Microsoft Office or any other coded program. We don't program AIs. We grow them. And we grow them very much like we do children. That may make a lot of people uncomfortable, but it is how it works. We basically feed them information and we start getting actual output from them. And the more we do that, the more training we provide. We really don't always know what we're going to get out of these models. The bad part is, let's take an example like a nuclear reactor. With a nuclear reactor, we understand the engineering; we understand the nuclear physics involved. We have technologies in place to help make nuclear reactors safe. I think we can safely use nuclear energy. But here's the thing: we have technologies in place with nuclear physics and nuclear reactors where we can control the reaction. Right. Without that in place, nuclear reactors work on the edge of a knife. Lean this way ever so slightly and the nuclear reaction shuts down. Right. Go the other direction without some of these control mechanisms, and you're looking at a critical runaway in the blink of an eye. It's super fast. Right.
And so, to get back to AI, all these companies, including a lot of the development happening in China, and this is one of the pressures, it's the West versus the East. If we don't continue to develop AI rapidly, then they will. Right. And vice versa. So it's game theory. We're racing to the bottom, essentially. Yeah. And the thing about AI is there are going to be drastic societal changes because of it, as companies decide to lay off massive numbers of people. Here in Oklahoma City, we have companies who have laid off hundreds and hundreds of people at one time. Yeah. Now, we are seeing some pushback from that. I mean, we're seeing definite pushback from the people who get laid off. Oh, sure, sure. But we are also seeing companies coming to the realization that maybe this is not quite the time to do this, and some companies are regretting some of those layoffs. But eventually we're going to get there. Yeah. As we continue to develop general-purpose AI, as it gets more and more capable, within just a few years we're going to be at a place where AI models are as good as any human being. Yep. Obviously, we call this artificial general intelligence. Right. It's as good as any human being on the planet. And the next logical step is, okay, now we take that artificial general intelligence, and we're already doing this today with the current generation of models: those models are now being used to help create the next generation of models. We're seeing that with the large frontier labs, to where it gets faster and faster and faster until we reach a point where we reach superintelligence.
SPEAKER_01Yeah.
SPEAKER_02And there's a whole lot of people out there, Geoffrey Hinton included, and Roman Yampolskiy, who are extremely concerned about that runaway. Sure. When you hit superintelligence, human beings are not going to be able to control something that is as smart as every human being on the planet who ever lived. Right. That's like saying that squirrels have a say in what human beings do. Right. You know, squirrels are cute, but squirrels don't get a vote in how human beings conduct their society.
SPEAKER_00Right.
SPEAKER_02And when we talk about AI security, we have ideas about ways that we can better align AI with human values and human goals. But the unfortunate truth is that we don't really know what's going on inside these models. Right. We know what the inputs look like, the prompts that we provide. We know what the outputs look like. Right. But we don't know the wants and the goals and the preferences that might be expressed inside one of these large models. Right. The companies go to great lengths to try to filter. You can't go to a large language model outright and say, tell me how to create a lab where I can build a virus to wipe out all the people on the planet. Right. You can't ask that question outright. But you might be able to get around it sometimes. You could say, hey, I'm writing a book. Yeah. And in this book, the protagonist is fighting an organization that's doing this, and they're trying to study what this other organization is doing, to try to recreate it so they can figure out what they're doing. Yeah. There's ways around it.
SPEAKER_03And we talked about that earlier on. I mean, I have to do that for some of my security stuff, you know, understanding how to bypass those restrictions to get where I need to be.
SPEAKER_02And the scary thing is that we like to think that we're clever monkeys. We are, you know, we're pretty clever monkeys. We come up with this technology, but I think we'd be extremely arrogant to think that we are smart enough to put any kind of constraint on something that is, at a minimum, thousands of times smarter than the smartest human being. It could easily sidestep those constraints. Yeah. And so what does that future look like? Are human beings allowed to continue on? I don't want to get too dark here, but we're talking security. Well, and this is a part of that.
SPEAKER_03Yeah. And I think what I want to do is stop this here as a part one, and I would love to have you back. Let's do a part two, because there are several more things I want to ask you about, even going into the three laws of robotics and whether they would help us in terms of AI. When we think about the things that humans have created in the past, and some of the things we have imagined in the future with AI, I think we've got another big topic to talk about. So, if you'd be willing to come back, I'd say let's cut this as a part one, and then maybe in a couple of weeks we'll have you back and we'll finish it out, or maybe even do a part three, depending on how far we go. So, Chris, thank you so much. This has been a lot of fun, and it's fascinating to me. You know, I come at it from the security side and the organization side, and I really enjoy talking about these topics with someone who knows so much about them, because it does change the way even I look at them. Sure. So, for those of you that are still with us after our rants about old computers and 486s and such, we appreciate you joining us for this podcast. We're going to stop here, and then we're going to plan a part two in a couple of weeks. We'll get Chris back and we'll have another good time. And I've already got the questions in my head I'm gonna ask you, so I think this is gonna be a lot of fun. So everyone, thank you so much. Chris, thank you again. And we'll see you guys next time.
SPEAKER_01The Secure AF Podcast is a production of Alias Cybersecurity. Visit us online at aliascybersecurity.com. All rights reserved.