Threat Intelligence: Recorded Future • Christopher Ahlberg

Hunting Shadows: Cybersecurity and the Collapse of Trust

In this episode of unNatural Selection, we dive into the high-stakes domain of cyber security with Christopher Ahlberg, CEO of Recorded Future and a pioneer in threat intelligence. From state-sponsored attacks to disinformation campaigns, Ahlberg unpacks the rapidly evolving landscape of digital threats and the technologies being deployed to counter them. We explore the escalating AI arms race, the shifting dynamics of global power, and how the erosion of trust—between people, systems, and institutions—is reshaping our digital and societal foundations. This conversation is a sobering, insightful look at the hidden battles shaping the future of security, privacy, and democracy.

  • (Auto-generated by Spotify. Errors may exist.)

    Dr. Christopher Ahlberg is the co-founder and CEO of Recorded Future, the world's largest threat intelligence company, acquired by Mastercard in 2024.

    Since co-founding the company in 2009, he has played a key role in shaping its growth as a global leader in intelligence-driven security.

    1:29

    Previously, he co-founded Spotfire, later acquired by TIBCO.

    He also serves as chairman of Hult International Business School.

    Christopher, I'm so happy to have you here, and thank you for being on unNatural Selection.

    1:39

    Speaker 1

    Thank you for having me.

    It's great to see you, Nick.

    1:42

    Speaker 2

    I always start with the same level-setting question, for the sake of the listeners, so they understand what field we're actually talking about from your perspective.

    1:49

    What is threat intelligence?

    So, in the simplest way possible, what human or societal need do you address and what is your role in solving it?

    1:55

    Speaker 1

    All right, think about it this way.

    So over the last 25 years, I'll use that time frame, the world has slowly migrated onto the Internet.

    Everything that we do in everyday life is happening on the Internet.

    2:12

    And the Internet has slowly become a reflection of the world.

    That's step one. Step two: over the next 25 years, and uncomfortably it's already starting to happen, the world is going to become a reflection of the Internet, whether it's relations between people, business relationships, or geopolitical relationships.

    2:37

    And all the transactions that feed all of this will emerge first on the Internet and then migrate into the physical world.

    And in that, we end up with a world where cybersecurity is not really just about bits and bytes; it's really about where threats are going to converge, whether they're Internet-based threats, cyber threats if you want, or physical threats.

    3:01

    We'll probably come back and talk geopolitics here.

    Unfortunately, we live in a world where there's a set of wars unfolding as we speak.

    And then there are the disinformation threats, misinformation threats, whatever word you prefer, and the convergence of all of that can, and to some extent already is, causing significant societal impact.

    3:27

    And we believe that intelligence that can actually extract the signal, if you want, from this somewhat chaotic world can be imperative in keeping the West safe, keeping companies safe, keeping organizations safe.

    3:47

    And there's a lot of opportunity in all of that.

    And I think that's why we're meeting here today to talk.

    3:53

    Speaker 2

    For the sake of those unfamiliar, what exactly is threat intelligence and why does it matter today more than ever?

    You mentioned the geopolitical situation.

    Obviously with AI coming on board, it's becoming harder and harder to tell truth from fiction.

    4:10

    And so, with all of that, how do you define threat intelligence?

    4:16

    Speaker 1

    So think about it this way.

    So as I mentioned, the world and the Internet are sort of, I don't know, converging, becoming one.

    We used to say that the Internet was a digital twin of the world.

    4:33

    Unfortunately, I think it's now one and the same.

    4:37

    Speaker 2

    It's very Matrix-like.

    4:38

    Speaker 1

    Yeah, I know.

    And there's a lot of good from that, a lot of cool things we've been able to do, but there are some scary things too.

    So then you think about what that actually means.

    There's a lot of goodness, but it also means that bad guys take advantage of this.

    4:54

    Very early on, as things got wired up, spy agencies of various sorts, both among our adversaries and ourselves, for sure figured out that you could use electronic means to steal information.

    5:10

    What started with listening in on phone calls and telegraphs has obviously become a pretty epic opportunity to steal information from the Internet.

    The other side of the coin, which is actually the much bigger threat, is stealing money of various sorts.

    5:28

    And that evolved from outright stealing of credit cards and hard-cash-type stuff into actual ransomware.

    Because as Internet-based infrastructure became critical, the bad guys figured out: hey, I'll just lock this stuff up, steal the information off it, and threaten to release it.

    5:48

    And suddenly I have good grounds for ransoming somebody.

    So again, what I'm saying here is that there is an enormous opportunity to cause harm, whether it's to steal secrets or steal money.

    Now, what intelligence tries to be helpful with here can be as simple as saying: I work in industry X, and I'm a company of a particular size in a particular geography.

    6:14

    These things are happening to my neighbors.

    And on the Internet, it's not necessarily about who is physically my neighbor; my neighbor can actually be in New Zealand.

    But if something is happening to my neighbors, I probably need to improve my security around something: fix my firewall, improve my endpoints, what have you. That's at the tactical level, and it can go even more tactical.

    6:38

    These are the IP addresses, the domains, the technical data that I see being used to attack companies like mine.

    Maybe I want to block that stuff.
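    The tactical blocking he describes can be sketched in a few lines: match observed network events against a feed of known-bad indicators. This is a minimal illustration, not Recorded Future's product or API; the feed format, field names, and indicator values are all invented (the IPs come from documentation ranges).

```python
# Minimal sketch of tactical indicator blocking: match observed network
# events against a feed of known-bad IPs and domains. The feed format and
# field names here are illustrative, not any vendor's actual schema.

BAD_IPS = {"203.0.113.7", "198.51.100.23"}                 # documentation-range IPs
BAD_DOMAINS = {"login-examp1e.com", "secure-payrol.net"}   # hypothetical phishing domains

def should_block(event: dict) -> bool:
    """Return True if the event touches a known-bad indicator."""
    if event.get("dst_ip") in BAD_IPS:
        return True
    domain = event.get("domain", "").lower().rstrip(".")
    # Block the domain itself and any subdomain of it.
    return any(domain == d or domain.endswith("." + d) for d in BAD_DOMAINS)

events = [
    {"dst_ip": "203.0.113.7", "domain": "cdn.example.org"},
    {"dst_ip": "192.0.2.1", "domain": "mail.login-examp1e.com"},
    {"dst_ip": "192.0.2.1", "domain": "example.org"},
]
blocked = [e for e in events if should_block(e)]
print(len(blocked))  # first two events match indicators
```

    Real deployments push indicator feeds like this into firewalls or DNS resolvers automatically; the matching logic is the same idea at much larger scale.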

    And then there's the more strategic level where, you know, maybe we'll talk more about this.

    6:54

    There's obviously been an enormous amount of fighting between Israel and Iran.

    We could probably talk for hours about that.

    And there are cyber implications of this.

    So right now, pretty much every company of any sort around the world is asking: what are the cyber implications of this conflict between Israel and Iran? People are asking those more strategic questions: what are the second-order implications of such a cyber conflict?

    7:24

    So the intelligence here can range from the tactical, again, what IP addresses and domains should I block, all the way up to pretty strategic questions, and there's a lot of work to be done to be helpful in that.

    7:37

    How Recorded Future gathers and processes data

    In order to build up this intelligence, you're obviously mining through vast amounts of data.

    Where does Recorded Future gather and process the data from, and what role does AI or machine learning play in transforming it into actionable intelligence?

    7:53

    Speaker 1

    So first of all, yeah, you need to get all the data.

    We started off funded by In-Q-Tel, at the time the CIA's, now probably the broader intelligence community's, venture capital arm, and then Google, who invested in the company to build out the core engine around processing what was written on the Internet.

    8:19

    Then, I like to say, every year we've drilled our way further and further into the Internet to pick up all kinds of stuff: good signals, bad signals, what have you.

    And so we'll collect everything from machine-written data to human-written data, imagery, text, what have you, across a whole set of modalities.

    8:39

    And then, to your point, we use a lot of techniques to extract signal out of this.

    And this can be as simple as a piece of text: Xi Jinping is traveling to Moscow tomorrow. Or a hacker asking: does anybody have access to any computing infrastructure at a particular university?

    8:58

    I have an exploit I want to try on their network, something that might be said on a forum, all the way to very geeky machine-level data.

    And we take this data, process it, and turn it into our intelligence graph; think of it as the counterpart to the Google knowledge graph.

    9:18

    And it's the largest intelligence graph in the world, connecting everything from the bad guys to the methods and tools they use, all the way over to the good guys.

    And then it puts all of this in the context of geography and geopolitics and those sorts of things.
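    The idea of an intelligence graph, typed edges linking actors, tools, and targets that can be queried in either direction, can be sketched like this. Everything here is a toy: the entity names, relation types, and query function are invented for illustration and bear no relation to Recorded Future's actual data model.

```python
# Toy sketch of an "intelligence graph": typed edges linking threat actors,
# tooling, and targets, indexed for queries in both directions.
# All entity and relation names are invented.
from collections import defaultdict

edges = [
    ("ActorA", "uses", "MalwareX"),
    ("MalwareX", "targets", "UniversityNetworks"),
    ("ActorA", "operates_from", "CountryY"),
    ("ActorB", "uses", "MalwareX"),
]

out_edges = defaultdict(list)
in_edges = defaultdict(list)
for src, rel, dst in edges:
    out_edges[src].append((rel, dst))
    in_edges[dst].append((rel, src))

def who_uses(tool: str) -> list[str]:
    """All actors with a 'uses' edge into the given tool."""
    return sorted(src for rel, src in in_edges[tool] if rel == "uses")

print(who_uses("MalwareX"))  # ['ActorA', 'ActorB']
```

    The value of the graph shape is exactly this kind of pivot: start from one observed artifact (a piece of malware, a domain) and walk outward to every actor, victim, and geography it connects to.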

    9:34

    And then we've built a series of intelligence products on top of that.

    AI is incredibly important in this.

    When we got started, it was natural language processing.

    Then there were a lot of machine learning techniques to look for, basically, call it trend analysis, anomaly detection, that sort of stuff.

    9:56

    And now, you know, in intelligence we have for decades talked about what's called the intelligence cycle.

    So think about that as: you're planning.

    What sort of information do I need to go collect?

    Then, tying back to your original question: what sort of information am I going to process, and how am I going to process it as it comes through my door?

    10:16

    It can be lots of different kinds of data; you can imagine we get a few things through our door.

    Then, once you process it, you want to start extracting signal and so on.

    Now, the beauty over the last couple of years, and this is probably where the word AI kicks in:

    10:33

    The AI cannot just be used to process; it can damn well be used to produce intelligence.

    So if I go into Recorded Future and say: get me all the data on, I don't know, IRGC locations in Tehran related to, you know, the bombings by Israel, and get it only from Farsi news sources, or only from Telegram accounts in Farsi in Tehran.

    11:03

    And now, boom, the AI will produce human-like, not human-written but human-like, reports.

    And when this popped out two years ago, we used to do that in short form, mixed up with imagery and everything, and people were amazed.

    11:19

    Now we produce whole full-length things on a daily basis, an hourly basis, that just blow people away.

    Is this written by a machine?

    It actually changes things.

    So that whole intelligence cycle then goes from this collecting and processing into the writing and dissemination part of the Intel cycle.

    11:39

    It really is disrupted by AI here.

    And we like to joke around here that Sam Altman did everything he did just for us.

    Pretty remarkable.

    Maybe that's something?
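    The intelligence cycle he walks through, plan, collect, process, analyze, disseminate, can be sketched as a simple pipeline of stages. The stage contents below are invented placeholders; only the shape of the cycle comes from the conversation.

```python
# Sketch of the intelligence cycle as a pipeline:
# plan -> collect -> process -> analyze -> disseminate.
# Each stage is a plain function; the data and keyword logic are invented.

def plan() -> dict:
    # Decide what to collect and how to filter it.
    return {"keywords": ["exploit", "ransomware"], "sources": ["forums"]}

def collect(requirements: dict) -> list[str]:
    # Stand-in for pulling raw text from the planned sources.
    return [
        "seller offering ransomware builder on forum",
        "conference schedule posted",
        "new exploit for campus VPN discussed",
    ]

def process(raw: list[str], requirements: dict) -> list[str]:
    # Keep only items matching the planned keywords.
    kws = requirements["keywords"]
    return [item for item in raw if any(k in item for k in kws)]

def analyze(items: list[str]) -> str:
    return f"{len(items)} relevant items this cycle"

def disseminate(report: str) -> str:
    return f"REPORT: {report}"

reqs = plan()
report = disseminate(analyze(process(collect(reqs), reqs)))
print(report)  # REPORT: 2 relevant items this cycle
```

    His point is that generative AI now disrupts the tail end of this pipeline: the analyze and disseminate stages, historically the human-written parts, can increasingly be machine-produced.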

    11:51

    Speaker 2

    Yeah, it really is remarkable. And when it comes to AI, a lot of people talk about putting guardrails around it, making sure that, you know, we protect humanity from it and that it can and can't do certain things.

    And from my perspective, the genie's out of the bottle.

    12:07

    It's more of an arms race at this point.

    Because if you put guardrails around what we can do in the US, our adversaries aren't going to follow those guidelines, right?

    So I really feel like it's kind of a race to see who has the most powerful AI and the biggest protection against some of the threats that come along with it.

    12:25

    And, you know, to your point earlier about the Internet emulating life and now life emulating the Internet, it's kind of scary to think about all of this misinformation that's out there.

    I deal with it in genomics all the time.

    But we see it in geopolitics, we see it in local politics, we see it on the Internet.

    12:46

    And so something like this is as critical as ever, and it will only become more critical for us to be able to build trust in the things that we see, read, and hear.

    Given that it is an arms race, you're using very powerful technologies, but so are adversaries using AI to generate stuff.

    13:08

    Building Trust in AI

    And we just talked about how you collect the data.

    How do you build trust in what you're building at Recorded Future?

    To know that the data you're processing and making decisions on is actually real, versus manipulated or fabricated by bad actors using AI?

    13:24

    Speaker 1

    So you just asked me seven questions, just to be clear here.

    So I'll try to sort of unpack that.

    You have to keep me honest on whether I responded to any of those.

    13:35

    Trust in data

    Focus on the one about trust in data.

    13:38

    Speaker 1

    Trust in data.

    I'd love to talk about your other point, what we are doing versus what the adversary is doing.

    And then the genie out of the bottle.

    I also think that's very important. But trust in data for sure, because to be honest, that's true regardless of AI or not.

    So first of all, you're a scientist, and the beauty of science is that you measure something, you know, with, I don't know, a yardstick of some sort, or a scale, or whatever.

    14:08

    Now, the good news is the scale may not be perfect, and in fact no scale is perfect.

    But we can even measure the error on the scale.

    We can know when we buy a scale how good it is or not, the amount of, I don't know what the right word is...

    14:27

    Speaker 2

    Precision.

    14:28

    Speaker 1

    Precision, exactly.

    So that's number one.

    Number two, and this is important: in science, it's very unlikely that somebody's going to fiddle with your measurements.

    If nothing else, we've learned in science over thousands of years, maybe hundreds, to run controlled experiments where we measure multiple times.

    14:49

    We do all kinds of different things: we run one set of things with one scale, then we run another scale, to figure things out. But when you deal with, call it, intelligence, it's not just true in cyber stuff.

    This could be, you can imagine, Iran: there's a lot of talk right now about what happened at the Fordow nuclear site.

    15:16

    Did they get away with the nuclear materials, blah, blah, blah.

    You can imagine the amount of measurement-type stuff involved in this.

    So now suddenly it's not just about the measurement; it's also about how much the adversary actually fiddled with my measurement.

    15:32

    And there are very few fields in science, in the world, where somebody will actually fiddle with the measurements.

    I like to say, and it was actually not my comment, it was a great guy, Dan Carr, I guess, who originally made this comment, that this was cyber only.

    15:49

    I made the point that you also see it in financial markets, where people do spoof trades and things like this: high-frequency traders will issue fake trades to drive a price up or down before they come in and do their real trades.

    16:05

    So those are probably two of the places in the world where people will deliberately mess with you.

    So now, that's reality. That's just reality.

    And, you know, if you have a satellite in the sky to monitor something, you can damn well know:

    16:25

    Your adversary will figure out that you have a satellite in the sky.

    They'll figure out whether it's a stationary satellite or one that shows up every 12 hours, given the Earth's rotation, whatever.

    If so, they're going to put the masking nets out before it comes over. And all of this sort of stuff is what we're dealing with.

    16:43

    So now, from a sourcing point of view, here is where it gets tricky.

    16:47

    Sources

    Everybody wants to have the simple answer.

    There are some good sources and there are bad sources.

    You know, if you're here in Cambridge, in the People's Republic of Cambridge, it's very simple:

    New York Times is a great source.

    Fox News is a bad source.

    You know, if you're in, I don't know, Kentucky, Fox News is a great source and the New York Times is a bad source, sort of thing.

    17:09

    And both comments are equally fucked up and dumb. They're just plain dumb in Cambridge, and the guys in Kentucky are dumb, when they're this simplistic about things.

    So this is not how the world works.

    And it doesn't help if you start unpacking it.

    17:25

    We thought, oh, it's great: let's figure out which journalists are really good, and on what subject matters, and develop this massive ontology of all of that.

    And I don't think that's it. You actually have to unpack it down to this particular journalist, or this particular source, this particular whatever, maybe not even human, that we actually collect data from, this particular process, on this particular subject matter.

    17:49

    So frankly, you just need to have the data, and there's no simple answer of good and bad.

    That's in the eye of the beholder.

    So our client who might be in the US government is going to have a different view of what's good and bad than the Singaporean government, or than a company in Saudi Arabia.

    18:05

    So what we figured out at Recorded Future is that we collect, quote unquote, all the data, we organize it, and we make it highly queryable, so that you can set it up for what matters to you.

    So from your perspective, whether it's at the strategic level that I talked about or the tactical level, you can create the vernacular that allows you to understand what's good for you.
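    The idea of unpacking reliability down to a particular source on a particular subject, rather than a blanket good/bad label, can be sketched as a lookup keyed on (source, topic) pairs. All outlet names and scores below are invented for illustration; nothing here reflects any real outlet's reliability.

```python
# Toy sketch of per-(source, topic) reliability, instead of a single
# "good source / bad source" label. All names and scores are invented.

reliability = {
    ("OutletA", "cyber"): 0.9,
    ("OutletA", "politics"): 0.4,
    ("OutletB", "politics"): 0.8,
}

def score(source: str, topic: str) -> float:
    """Reliability of this source on this topic; unknown pairs get a neutral 0.5."""
    return reliability.get((source, topic), 0.5)

def rank_sources(topic: str, sources: list[str]) -> list[str]:
    # The same outlet can rank first on one topic and last on another.
    return sorted(sources, key=lambda s: score(s, topic), reverse=True)

print(rank_sources("cyber", ["OutletA", "OutletB"]))     # ['OutletA', 'OutletB']
print(rank_sources("politics", ["OutletA", "OutletB"]))  # ['OutletB', 'OutletA']
```

    Keying on the pair rather than the source alone is the whole point: it lets the same outlet rank differently depending on what you're asking about, which is what "creating your own vernacular" over the data amounts to.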

    18:30

    And I think that's been a very important insight here: avoiding the simplicity of this New York Times versus Fox thing. And it drives me bananas.

    You know, because we live in a society where this is so dumb.

    18:46

    Given that I'm sitting in Somerville, I really want to send a signal to the people around here with that. And the other side is equally dumb, by the way.

    So we have that going on.

    19:02

    So: creating this opportunity to create a vernacular given what's important to you.

    And that's where, you know, could we imagine an AI that actually sorts out more of this?

    So here, after the bombings over the weekend, one of the most powerful things to do is to go into Recorded Future AI and say: give me the Iranian perspective of what happened over the weekend.

    19:28

    Recorded Future AI in Mozambique

    You know, give me the official Iranian news, Fars News, Press TV, whatever, versus what Iranian Farsi-language social media say.

    Tell me where they're diverging.

    19:45

    Now I'm asking real questions that actually allow me to get to real insight, rather than this stupid question of good versus bad sourcing.

    And I think what somebody who's a real intelligence professional would say is that this is how it has always worked: you need to understand which sources are good for which subject matters.

    20:05

    And at some level, the Internet hasn't really changed that.

    It's just the power: you can actually sit at your desk in Somerville and say, I want to get the perspective on what's going on in Mozambique right now, and get me the difference between what government sources versus

    20:21

    local guys are saying. That's pretty powerful.

    20:24

    Speaker 2

    Yeah, no, it's incredibly powerful.

    Are you allowed to share any real-world scenarios where Recorded Future intercepted or mitigated some kind of attack?

    20:35

    Speaker 1

    So first of all, nobody tells us what to do; we don't take direction from anybody, at some level.

    It's important to make the point that we're not obliged to any government entity.

    We're not obliged to anybody.

    20:51

    Now, we have a lot of customers that we have license agreements with.

    Obviously, we're not going to spill their secrets.

    That said, there's been a whole bunch of things.

    Look at what we wrote last week here.

    If you go look at the New York Times, you'll see the research that was published with Recorded Future there. Julian Roberts, I think his name is, wrote a great article about the work we've done on how China, the Chinese PLA, the People's Liberation Army, is using AI.

    21:23

    You know, check that out.

    And uncomfortably for them, I think they hated that work, showing how they're getting ready to use AI from the Chinese side to do these sorts of things.

    So in terms of stopping actual attacks, there's been just a plethora of things over the years, where sometimes, again at the very tactical level, you see an adversary spinning up, quote unquote, attack infrastructure. Sounds fancy, but you know: spoofed domains, phishing domains, all kinds of different stuff that somebody picks up and that a defender can then go take down.

    21:59

    So it can go from the strategic, what are our long-term adversaries, the Chinese PLA and the CCP, up to, versus, you know, what is somebody tactically doing to hack Nick tomorrow, type thing.

    22:15

    And we do that every day.

    Literally every day, that's happening.

    22:20

    Speaker 2

    So you mentioned, you know, obviously you have all these customers.

    22:22

    The Mastercard acquisition

    So number one, I guess with MasterCard buying you, you're still allowed to interact with different industries, different customers?

    You're not wholly focused on MasterCard activity?

    22:32

    Speaker 1

    This is a great question.

    So yes, you know, they paid a decent amount of money: $2.65 billion.

    Thank you very much.

    Because we've gotten that question from many, as you can imagine.

    And I like to say that we're good, but we're not $2.65 billion good.

    22:49

    If they wanted to use us just for defending themselves, I think they could have done a nice little license agreement that would have been slightly cheaper than $2.65 billion.

    Now, it's still a good question, so I'm not making fun of it. The second element would have been: would they tell us to go focus only on financial services and retail, because those are their two big verticals?

    23:13

    Not even that.

    They want us to go as broad as we can and keep building a business here.

    They're super interested in the government relationships.

    They want MasterCard to be a key player on the government side writ large.

    You could say, and I'll say this in a good way, that what MasterCard trades in, what makes MasterCard work, is trust.

    23:36

    It really is trust.

    And the same thing for the other credit card companies.

    If you don't trust it, throw it away.

    Trust is incredibly important.

    And that's why cyber hygiene, trust that my transaction is going to be safe, is incredibly important.

    23:52

    So hence, the government relationships and being part of building trust are very important to them.

    And then frankly, whereas you might have thought historically that what I'm talking about is just financial services and retail, you know, like, OK, if you're a salt-mining company, maybe you're not trading salt on the Internet, pretty much everything has this element to it these days.

    24:17

    So no, they want us to be as ambitious as possible.

    And they already had some assets in cyber, and we're building as clever integrations between those as we ever can and taking it from there.

    24:29

    The future of cyber security

    It sounds like you focus much more on the enterprise or on governments, right?

    And, you know, the average person today is bombarded by the digital threats that we talked about a little while ago.

    There are scams; I get constant texts from people asking about the house I'm trying to sell.

    24:46

    It's just not ending.

    Sometimes it actually takes a bit of analysis to figure out if something is real or not, even when you're fairly astute about this stuff. The average person probably gets hit, and a lot of these people go after, you know, the elderly, who have a tougher time figuring this out.

    25:06

    Now, I think industries do protect you at some level, like the MasterCards and the credit cards; they have theft protection. So there are agencies out there doing something, but it's not all of it.

    And I think consumers still face increasing threats.

    25:22

    I'm getting more and more of this stuff.

    Where does this end?

    I mean, how does this develop over time?

    Especially, like you said before, which is terrifying, the idea that reality is going to morph more into what the Internet is, and they're going to become almost one and the same.

    25:39

    Speaker 1

    It's gonna be a great world.

    25:40

    Speaker 2

    Yeah, I mean, I can be in Boston today, and tomorrow I could be in Bali.

    But the question, though, is that the cyber world, the virtual world, seems to be becoming more dangerous and more ambiguous, tougher to tell what's real and what's not and what you can trust.

    25:55

    Trust and the Internet

    I think that's at the core.

    You mentioned trust before.

    25:58

    Speaker 1

    Trust is so important.

    No, it's all about trust.

    26:00

    Speaker 2

    I mean, if you think about COVID, it eroded a bit of the trust that we had in society, because now you can't trust your neighbor.

    You know, you're going to get me sick.

    And so we're dealing constantly with this attack on trust and misinformation, and yet...

    26:14

    Speaker 1

    And there are people who have basically figured out how to... If you think about it, what you mentioned with fraud is so interesting, because at some level there's a technical aspect of this, where companies are trying to make the Internet safer in itself.

    26:31

    And there's great work being done.

    And you know, people like to be harsh on the big tech platforms, Microsoft, Facebook, Apple, all these guys, but they do tremendous work on making the core infrastructure better.

    Now, as they grow and get more surface area, even if they do all that work, just because of the sheer size of it, more holes keep popping into it.

    26:57

    And then, worse: as more and more people become part of the Internet, unfortunately, the weak links many times are humans. Not always; there are plenty of weak links in the hardware and software as well.

    27:12

    But humans entering into this, you mentioned the elderly, all kinds of groups with lesser abilities, although frankly, lots of times it's highly capable people who push the wrong button too.

    27:28

    So we shouldn't be too cocky, and knock on wood, we can all push the wrong button, not just grandma.

    And I'm not saying that jokingly.

    I really mean it.

    But I think that trust element there is something that the bad guys are taking advantage of.

    27:44

    So, you talked about when you see these text messages.

    27:48

    The problem with text messages

    It's super interesting.

    I've been asking our guys to pull on that thread, and we're doing a lot of work on that right now.

    You know, there are a lot of people being recruited out of India to go to what were portrayed as startups, as companies in Cambodia, in Laos, in Myanmar.

    28:10

    And they come there to do, you know, software development work and so on.

    And once they show up, they hand over their passports for various reasons, and they end up being part of these call centers.

    They're fake call centers; they're fraud centers. And whether they're working phones or working texts, they're basically running these massively parallel scams across a whole set of people.

    28:36

    And people refer to this scam as pig butchering.

    The first time you hear that term, you're like, whoa, what the hell does that mean?

    Pig butchering?

    But you get these messages, you know, I get them too.

    It's like: how are you doing?

    28:51

    Yeah, why would you care about how I'm doing, sort of thing.

    But the thing is that these guys run this massively in parallel. And you've probably also tried to have some fun with this: I'm doing great, how are you?

    You know, you try to take it down some illicit route or whatever, just for the hell of it.

    29:10

    But in reality, what these guys are doing is investing a month in building trust, building fake trust, with you, and then eventually taking it to a place where they can use it to have you buy Bitcoin, buy gift cards, buy whatever, and then transfer that over to them.

    29:28

    And run that way, these scams can end up being very profitable.

    And you can see now that they're starting to introduce AI on the other end of this.

    It's not a denial of service attack.

    It's a massive, I don't know what to call it, an AI-enabled parallel fraud attack on society.

    29:49

    And it's going to be super important to try to go after this.

    And it's difficult to do. When you go after cyber threats, typically there is very distinct command and control infrastructure that is carefully hidden, but it can be sought out.

    30:07

    And this is one of the things we're really good at here at Recorded Future.

    In this world, the infrastructure is very subtle and human oriented, and it's much harder to pin down.

    There's not an IP address you can just kill, whether you take it out yourself or get somebody to do a takedown on it.

    30:26

    They just jump to the next computer.

    It's just not the point.

    So it's a very hard thing to deal with.

    And you're right to bring it up, because it's very difficult and very important.

    30:37

    Speaker 2

    Well, I mean, you also hear about these deepfakes that you can now do with AI, where you can impersonate somebody, a loved one, based on just an image of your child on Facebook.

    And now you have a recording of your child asking for money so that they can be let go by kidnappers, or something like that.

    30:54

    So.

    30:55

    Speaker 1

    Totally.

    And it's part of the same thing. You could see that pig butchering scheme I talked about using the same sort of deepfakes.

    But you can do these in any number of ways, any number of ways.

    And as you know, you're more than technically savvy.

    31:10

    You're very savvy.

    So you could imagine yourself sitting down and building AI agents for this.

    You're in Claude Code or whatever they call it, and you say, construct a system that unleashes 10,000 agents where each one of them does the following, and this is the feedback loop.

    31:28

    I want you to run controlled experiments: try it on 10 people, on 100 people, on 1,000 people.

    There's going to be a lot of responsibility on the AI companies to make sure that the AI infrastructure we're using here isn't enabling the bad guys.

    31:49

    Earlier in the year we launched a version of what we call our malware intelligence, where we keep tabs on the malware of the world.

    We have the largest collection of that in the world.

    And part of that is we're now very carefully monitoring the use of Anthropic or OpenAI APIs in malware, to see: are they taking advantage of that?

    32:15

    Are they embedding AI, small engines, in the malware?

    Are they being used?

    There's any number of ways you can imagine. Intelligence ends up being very key here, because otherwise we could see AI being used in ways that are just very uncomfortable.
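    As a rough illustration of the kind of check described here, a defender could string-scan malware samples for hardcoded LLM API hosts. This is a toy sketch under my own assumptions (the host list and the `flag_llm_use` helper are illustrative), not Recorded Future's actual malware-intelligence pipeline.

```python
# Toy sketch: flag malware samples that embed known LLM API endpoints.
# The host list and scoring are illustrative assumptions.
import re

# Known LLM provider API hosts (illustrative subset)
LLM_API_PATTERNS = [
    rb"api\.openai\.com",
    rb"api\.anthropic\.com",
    rb"generativelanguage\.googleapis\.com",
]

def flag_llm_use(sample_bytes: bytes) -> list[str]:
    """Return the LLM API hosts found embedded in a binary sample."""
    hits = []
    for pattern in LLM_API_PATTERNS:
        if re.search(pattern, sample_bytes):
            # Un-escape the regex (r"\." back to ".") for reporting
            hits.append(pattern.decode().replace("\\.", "."))
    return hits

# Example: a fake "sample" containing a hardcoded endpoint
sample = b"\x00\x01POST https://api.openai.com/v1/chat/completions\x00"
print(flag_llm_use(sample))  # ['api.openai.com']
```

    In practice a real pipeline would go far beyond string matching (packed binaries, encrypted configs, API keys exfiltrated at runtime), but hardcoded endpoints are a cheap first signal.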

    32:35

    Speaker 2

    Yeah, I mean, it's terrifying, terrifying to think.

    32:37

    Trust in AI

    And I'm almost hesitant to ask you this question, because I'm terrified of what you might say.

    But society is built on trust, right?

    I mean, really everything, from financial systems to money.

    We trust that a dollar has a certain value today and will have a certain value tomorrow, and I can use it to purchase whatever it is that I want.

    32:57

    Borders are built on trust, and they're really built on social constructs: you and I decide that this is the boundary of my property and this is where yours starts, and it'll be the same way tomorrow.

    So much about how we operate as a society, and as individuals within those societies and nations, is built on trust.

    33:16

    And it feels like everything is attacking trust right now.

    33:20

    Speaker 1

    Yeah, no, you're right.

    And that's why we call it the threat convergence.

    Again, we like to talk about this: what's been going on in the Middle East between cyber and war and disinformation. It's not always all three components at the same time.

    33:35

    But you know, put on your malicious mind and think about how you could really put the tool sets out there to work against a country.

    Russia, when they invade a country, has historically always done DDoS attacks, denial of service attacks, on the banks to take down the bank websites the night before the invasion, a stupidly predictable modus operandi.

    34:02

    Why not do the same sort of attack, but on trust in the banks? Create runs on the banks before I go invade somebody. And there are many ways the trust erosion can happen: it can be slow, it can be fast, any number of ways.

    34:24

    The black swan scenarios here are uncomfortable, for sure.

    34:29

    Speaker 2

    Well, yeah, from that perspective, on those black swan scenarios, I'm sure there are all kinds of think tanks and pundits and experts and researchers in this field.

    34:36

    How will this play out in the next 5-10 years?

    What exactly are they saying? How are they talking about this attack on trust in society?

    Obviously organizations and people like you are working to fight it, but there seems to be an endless supply of people trying to erode that trust.

    How does this play out over the next 5 or 10 years from your perspective?

    34:53

    Speaker 1

    You know, look, I've obviously sat here and talked, and it feels like I'm being a fearmonger at some level.

    I'm sure it sounds like I am. But at the same time, look, humanity has gotten through a lot.

    We took ourselves through the bubonic plague and it worked out. We started the First World War, we started the Second World War, we blew up atomic bombs.

    35:17

    We did all kinds of crazy stuff.

    And actually we came out on the good side of it.

    So we may say it's terrible that the Internet has done some bad things. At the same time, this globalization wave, as much as people don't love globalization, saw a vast number of people in Asia come out of poverty, and instead of being hungry every day, you actually have a job.

    35:42

    And so lots of goodness has come out of that.

    And even if it's getting people out of hunger in Asia, that might actually have elevated economies in the US as well.

    So look, I'm actually a huge believer in all of this, and in technology playing a role. Intelligence is one very important part of this.

    36:04

    In many ways, I do think the good news here is that it's more about humans.

    We need to make sure that we teach our kids what it means to be skeptical.

    I must say, I'm pretty thrilled seeing how my kids question things.

    36:22

    It feels like they sometimes question things better than some adults, you know. I had a discussion with my daughter the other day, and she was showing me this great chart.

    I think it was from the Pew Research Center, a box chart of trusted sources versus not.

    36:41

    I think they trusted the New York Times a little bit more than Fox, but both of them were in the quadrant.

    And I'm like, so why do you think they put this one versus the other?

    And again, the Cambridge guy versus whatever other guy is going to get all upset about that.

    36:57

    Some of those answers are probably too simple, but I'm actually pretty optimistic about how our kids are going to come out and be smarter than the guy in our generation sitting there doomscrolling.

    37:12

    So there is an element of that.

    I think we need to spend more time teaching this in school.

    And it's not about teaching what source is good or bad.

    It's actually teaching how to question things.

    How do you understand things?

    How do you take it apart?

    How do you use the tools to your advantage here?

    37:28

    It's the basics: teach people Voltaire rather than to click on a certain button. Teach the basics of philosophy or what have you.

    So no, I guess I'm pretty optimistic.

    37:45

    This idea that, at a governmental level, we should filter out what's disinformation and what's not, that takes you to a dark place every time you try it.

    38:04

    You know, whether you're communist or fascist, it doesn't really matter. When you start filtering at the society level and saying my side of the equation is better than the other, whether you're Karl Marx or Stalin or Lenin, versus Hitler and Mussolini or whatever, it goes dark very, very fast.

    38:23

    So no, let's teach ourselves to be smart and questioning.

    And I am pretty optimistic about where that's going, actually.

    38:31

    Speaker 2

    You know, it's interesting, because it's a double-edged sword. People are using AI for misinformation and everything you said.

    I completely agree.

    You know, with my kids, we make sure that they try to be as analytical and introspective and curious as possible.

    38:48

    But AI at the same time is making it so that, you know, people are less curious, right?

    Because I don't need to go and do the research.

    I don't need to read Voltaire.

    I can go to ChatGPT and say, Hey, tell me the things I need to know about Voltaire to write this paper.

    And actually, in fact, why don't you just write the paper for me?

    39:02

    Speaker 1

    Sad, isn't it?

    Yeah.

    But I think this is where teachers need to come along, and instead of assigning a paper about Voltaire, encourage people to use AI.

    39:18

    If you want to use AI, please do so.

    But tell me how the use of AI impacted your paper writing.

    I don't know, just dive deeper, dive deeper.

    We have to kind of go back now to what you opened up with earlier.

    The genie is out of the bottle here.

    39:35

    And I think, you know, to go geopolitical on you again: I was at a dinner recently in Boston that Axios had arranged, with some great thinkers on AI, and they went down this path of, look, we need guardrails around this and this and that.

    39:52

    And you could just see it get to a point where it's like, let's give up on AI, give it all to China, adopt DeepSeek and it's going to be great. They're going to be celebrating Chinese holidays before the day is over.

    And I was like, look, these were some smart people going down this path, people teaching kids in university.

    40:12

    And I'm like, you guys are delusional.

    You know, this is where we are now.

    There are three or four companies in the US, OpenAI and Anthropic and Google and a few others, who are very good at this, and it's only they who can pull it off. The US government cannot pull this off.

    40:30

    You know, this is important: the US government has always been at the forefront of pretty much all tech. They can't pull that off on their own now. And then there's a bunch of Chinese companies. There are no Russian companies who can do this.

    Maybe, maybe Mistral in France. There are no British companies, no German companies, certainly no Swedish companies, no Israeli companies, no Indian companies.

    40:50

    So it's potentially a winner-takes-all between two here.

    It doesn't necessarily go that way, but there's a nonzero, what do you call it, a nonzero probability that there is a winner-takes-all.

    41:05

    And we need to make sure that winner is with us. And when it's with us, how do we then teach ourselves to be the best at this?

    Because it matters whether we have our own AI, written by Google or OpenAI or what have you, hopefully a few of them competing, versus having DeepSeek doing this.

    41:26

    And you know, when you ask, tell me about Xi Jinping, and it says, how about if we instead talk about science?

    Like, no, I want to ask questions about Xi Jinping.

    That future is dark.

    It's dark, dark, dark.

    When the AI has to adapt to Chinese socialist values, that's not a good thing.

    41:45

    Not a good thing.

    41:46

    Speaker 2

    When I think about people that are super idealistic, trying to put the genie back in the bottle or saying we should put up guardrails, I just think, I don't know what world you're living in, but that's completely unrealistic.

    And not only unrealistic, it's dangerous, right?

    If enough people believe what you're saying and start making policy around that,

    42:04

    It's ultra dangerous for the United States and countries that think the way that we do.

    42:08

    Speaker 1

    Totally.

    I think it's like trying to regulate the airline industry.

    Ella nineteen O 7 when the Wright brothers came along or nineteen O 8 was it, you know, imagine we regulated airlines, air airplanes.

    Ella 1910 Instead, they have to have, you know, like paper wings.

    42:25

    They have to have, you know, like the, you know, you, you just it's, it's just not.

    It's it's to your point, it's not just stupid, it's also dumb and dangerous.

    It's it's, yeah.

    42:36

    Speaker 2

    So, Christopher, this has been tremendous and I can talk about this for hours, but you've been so generous with time.

    And I just want to ask you one more question, a singular question; if I had only one question to ask, Christopher, it would be this.

    42:48

    You have a rare perspective watching global digital threats unfold in real time while also building systems that counter them.

    So with that vantage point, what's one truth about how modern threats work, or how we respond to them, that most people don't see but probably should?

    43:03

    Speaker 1

    It's a difficult question to unpack, because there's not a single simple answer to that.

    But the key thing is that modern threats are complex; an uncommon word, but an important one, is multifactorial.

    You can't really do first-principles thinking and just reduce them to one component.

    43:22

    Pretty much always, even if we talk about AI, at least at this point, there are humans behind the threats.

    There might be a criminal behind the threat, the boy in his mom's basement in Yekaterinburg sort of thing.

    There could be a whole intelligence agency behind it.

    43:39

    You might work at a place, whether it's a government agency or a company, and there are 500 people in a PLA department office outside Shanghai whose only mission in life is to get into your system.

    43:54

    And they're willing to wait five years to make it happen.

    So there are humans behind the threats, and they're willing to use very complex approaches.

    They might be using a combination of computer approaches and physical approaches.

    Over this three-to-five-year time frame, they're willing to get insiders into your company, recruiting people. It's what we've seen with the North Koreans: getting IT workers into all kinds of different places, both to make money on the IT worker and to have people embedded in these companies with system access.

    44:27

    So the threats are complex.

    They're moving across all these dimensions we've talked about. We talked about the convergence between cyber and kinetic, physical is probably the right word, war-type things, and misinformation, disinformation, information operations, what have you.

    44:45

    So it's complex.

    And so navigating that means you have to be very thoughtful and think about this.

    Whether you're defending a big company or a government organization with lots of resources, or a very small outfit, a little nonprofit with a high profile, or even thinking about yourself, we have to be very thoughtful in this.

    45:11

    So it's complex.

    I think that's all I can say.

    It's complex.

    45:16

    Speaker 2

    And it sounds overwhelming too, in fact.

    But Christopher, it's been a privilege.

    You're a longtime friend.

    I'm so excited to have caught up with you, and I'm looking forward to seeing you again soon.

    Thank you so much for being on unNatural Selection, and congratulations on all the work that you've done so far.

    45:32

    Speaker 1

    Thank you so much, sir.

    Great fun.

    Thank you.

Nic Encina

Global Leader in Precision Health & Digital Innovation • Founder of World-Renown Newborn Sequencing Consortium • Harvard School of Public Health Chief Science & Technology Officer • Pioneer in Digital Health Startups & Fortune 500 Innovation Labs

https://www.linkedin.com/in/encina