Neurotech: Institute for Futures Studies • Anders Sandberg

Intelligence Eats Reality: One Neuron at a Time

In this episode of unNatural Selection, we’re joined by Dr. Anders Sandberg — a researcher at the Institute for Futures Studies and one of the world’s most provocative thinkers at the intersection of science, philosophy, and the far future. His work spans neurotechnology, human enhancement, AI, ethics, forecasting, and even the search for extraterrestrial life.

We explore how biology and technology are no longer separate spheres, but merging forces reshaping the trajectory of our species. As innovation begins to supplant natural selection, reality itself is becoming more fluid — and the boundaries between human and machine, mind and system, begin to blur.

We also delve into Dr. Sandberg’s imaginative approach to science communication, from dissolving the Fermi Paradox to reimagining planetary engineering in “Blueberry Earth,” and how these playful yet rigorous ideas help provoke public dialogue on our long-term future.

This conversation reflects the core premise of unNatural Selection: that innovation is becoming the dominant evolutionary force — not survival of the fittest, but survival of the most adaptable, the most imaginative. This is Evolution by Design.

  • (Auto-generated by Spotify. Errors may exist.)

    Dr. Anders Sandberg is a researcher at the Mimir Center for Long Term Futures Research at the Institute for Futures Studies in Stockholm. His research at the Institute centers on the management of low-probability, high-impact risks, societal and ethical issues surrounding human enhancement, estimating the capabilities of future technologies, uncertainty, and very long-range futures. Topics of particular interest include global catastrophic risk, existential risk, cognitive enhancement, methods of forecasting, neuroethics, transhumanism, and future-oriented public policy. He was a senior research fellow at the Future of Humanity Institute at the University of Oxford from 2006 to 2024. He is a research associate of the Oxford Uehiro Centre for Practical Ethics and the Center for the Study of Bioethics in Belgrade. He is on the boards of the nonprofits ALLFED and the AI Objectives Institute, serves on the advisory boards of a number of organizations, and often debates science and ethics in international media. Anders has a background in computer science, neuroscience, and medical engineering. He obtained his PhD in computational neuroscience from Stockholm University, Sweden, for work on network modeling of human memory.

    Host: Anders, it's a pleasure to have you here on unNatural Selection.

    Anders Sandberg: Thank you for having me.

    Host: You know, it's funny because I always prepare some questions beforehand as I do the research. And in your case, you've written on so many diverse topics, from brain implants to forecasting to existential risk, and I want to cover it all. But just to level set for the audience and give them a sense of what motivates you, what need or impact drives your work?

    Anders Sandberg: I think the simple thing is I want to make the future better. The most important thing about the future is that it's the future. It's when we're going to live. It's when our grandchildren are going to live. It's when everything that matters to us is going to unfold. And the really scary thing is that the future is being shaped by our actions today. What we're doing right now, whether it's good or bad, is making the future better or worse. And I think that we should be trying to be wise about how we shape the future, not just accidentally run into it. And my research is essentially about asking, "How do we get better at getting a good future?"

    Host: That's incredible. And I think that what you speak of is perhaps the fundamental thing that we all share in common. It's a fundamental desire to have some control over our future, and an optimism about what's coming next. But it's also rooted in a deep understanding of probability. When you speak about low-probability, high-impact risks, you're not talking about science fiction. You're talking about real-world events. Can you give the audience a sense of what the existential risks are that you study and that you are concerned about today?

    Anders Sandberg: Sure. The existential risks are defined as risks that would wipe out the long-term potential of humanity. It could be by wiping out humanity entirely.

    05:01

    Anders Sandberg: It could be by permanently preventing us from reaching a good future. We are pretty unlikely to be wiped out entirely by normal natural phenomena. Asteroids are out there, but impacts are very rare. Supervolcanoes are dangerous, but they probably won't wipe out everybody. The more dangerous things are engineered pandemics and unaligned artificial intelligence. And in both cases, it's human activity that is creating the risk. That's really important, because it means we can also, through our activity, choose to manage the risk. We get to decide whether we're going to allow these risks to grow too large or whether we're going to manage them. My main worry is unaligned artificial intelligence because that looks like a permanent risk. If we invent super smart AIs that don't care about us, they might lock us into a future where we don't have control, or they might just replace us. It could be that the whole planet gets covered in solar panels and AI hardware, which is efficient, but not what we humans want. So that's one really scary one. Engineered pandemics are scary because the technology is relatively easy. A few smart people in a lab can engineer viruses that are much worse than natural viruses, and that's an issue because it's so scalable and so easy to do badly. And then there are things that are not so much about extinction, but about lock-in. For example, if we end up in a global totalitarian surveillance state, it might be very stable, and it might be very hard to get out of it. That also means the long-term potential for humanity to develop would be capped, and that's also a big loss. That's why we call them existential risks: they destroy the rest of the life of the species, the rest of the life of the planet, which could be millions or billions of years. So a risk that eliminates even 99% of humanity is horrible, but it's not existential. We could recover from it. But if we permanently prevent a flourishing future, that's what makes it existential.

    Host: When you talk about the engineering of viruses, that's not to say that the intent is nefarious. It could just be an accident.

    Anders Sandberg: Yes, intent is hard to judge. Often, people have good intentions, but they make mistakes. And with engineered viruses, in particular, we are moving into a realm where the pathogens are not something that evolution has honed.

    10:01

    Anders Sandberg: They could be much more effective. They could be much more contagious. They could be much more deadly. And we also have so many labs around the world doing this kind of research. It only takes one accident in one lab to affect the entire world. And that's a problem we need to think about: how we do lab safety, and especially how we set biosecurity and biosafety standards around the world.

    Host: And you've done quite a bit of work on what is, in some ways, a solution for the biosecurity of the future: cognitive enhancement. Tell us more about cognitive enhancement.

    Anders Sandberg: Cognitive enhancement is essentially improving how we think, feel, and make decisions. This could be done pharmacologically, with smart drugs or nootropics. It could be done with devices like brain stimulation or brain implants. It could also be done by non-technological means like getting better sleep, or better education, or better training. And for me, cognitive enhancement is one of the ways we can actually improve the chances of a good future, because the problems we're facing are hard. Climate change is hard. Artificial intelligence is hard. Governing a world with 8 billion people is hard. And if we could be just a little bit smarter, we could solve those problems better. So, my interest in cognitive enhancement is as a way of boosting our ability to cope with the complex challenges of the future. The ethical issues are mostly about fairness. If some people get to be super smart and others don't, that might lead to an even more unequal society. We need to think about how we can make these technologies available in a fair and equitable way.

    Host: And in that sense, are there parallels between the introduction of cognitive enhancement and the Industrial Revolution, where you had a divergence between those who were able to capitalize on the new technologies and those who were not?

    Anders Sandberg: Yes, that's a very good parallel. And I think we see this kind of dynamic whenever a powerful new technology is introduced. Initially, it's expensive, and only a few people can get access to it. This can amplify existing inequalities. If only the rich can afford to make their kids significantly smarter, then the gap in power and opportunity between groups could widen over time. The Industrial Revolution did, in the long run, improve life for almost everybody, but there was a period of intense dislocation and inequality.

    15:00

    Anders Sandberg: And we might see a similar kind of dislocation with cognitive enhancement, and other technologies like AI, where the benefits go initially to a small group of people. This is one reason why I think it's important to think about the policy around it. Should we be subsidizing certain kinds of enhancement? Should we be regulating them for safety? But also, how do we promote good uses? If we see that a certain kind of training or certain kind of drug makes people more cooperative or better at critical thinking, that might be something we want to spread widely. There's a lot of things to think about when it comes to the societal and ethical implications.

    Host: When you speak about brain implants, are you talking about something like a BCI, a brain-computer interface, or is this something more like an actual enhancement of the brain's internal structure?

    Anders Sandberg: Well, it can be both. A BCI, a brain-computer interface, is a technology that allows you to interact directly with a machine, like controlling a computer cursor with your thoughts, or having a memory chip that stores extra memories. That's a form of enhancement that's about adding a function. The other side is a more fundamental biological or chemical enhancement. This could be gene editing to improve brain function, or drugs that enhance plasticity and learning. In both cases, the goal is to make us more capable. And I think that both are likely to happen and both are very controversial. BCIs are already being developed for medical reasons, like helping people with paralysis. But if we can use them to give super-senses, or super-memory, or super-computational ability, then that's going to lead to a big discussion about who gets to have these powers.

    Host: It seems like the common thread is the idea of human agency and the ability to control our destiny. You speak about this being an "encouraging hubris," that we have to believe that we can be in charge of our lives.

    Anders Sandberg: Yes, I think that's a very good way of putting it. It's easy to be fatalistic and say, "The future is just going to happen to us." But if we look at human history, we've constantly been overcoming challenges and trying to shape our environment. The problems we face today, like climate change, are entirely self-inflicted, which means we can also choose to solve them. It's a kind of high-stakes game. We're playing for the entire future of humanity. So, we need to be very smart and very careful, but we also need to believe that we can do it. We need that "encouraging hubris" to try to get better at making decisions, and at managing these low-probability, high-impact risks.

    20:01

    Anders Sandberg: One of the big parts of that is just recognizing that we can do something. When we deal with existential risks, it can feel overwhelming. It can feel like, "Oh, a super volcano could wipe us out. What can I do?" But if we focus on the risks we have control over, like engineered pandemics or unaligned AI, then we can take action. We can fund research into biosecurity. We can advocate for smarter AI regulation. We can influence the trajectory of the future. And that sense of agency is, I think, very important.

    Host: In your research, you speak about the concept of "management of low-probability, high-impact risks." Can you give the audience an example of a risk that, while low-probability, you think needs immediate attention and management?

    Anders Sandberg: Well, unaligned artificial intelligence is the one that worries me the most, and I think it requires immediate attention. It's moving quickly. People are investing a huge amount of resources into building more and more capable AI systems, but they're not putting nearly as much into safety and alignment. If we create a superintelligence that optimizes the world for a goal that we don't understand or didn't intend, we might not be able to stop it. It's not about the AI becoming evil; it's about the AI being indifferent to human values. Think about ants on a path. The construction workers building the road don't hate the ants, but they pave over them. Similarly, an unaligned AI might accidentally pave over humanity in its quest to achieve its goal. We need to figure out how to program in human values—or at least, the value of keeping humans safe—into these powerful systems before it's too late.

    Host: That's a terrifying analogy. So, in effect, you're saying that the alignment problem is one of the most pressing engineering and ethical problems of our time.

    Anders Sandberg: Absolutely. The alignment problem is essentially asking, "How do we make sure that the smart systems we build want what we want?" It's a deeply difficult problem. It's not just about writing a few lines of code. It's about translating the complexity of human values—things like compassion, justice, freedom, happiness—into a mathematically precise goal function for a superintelligence. Nobody has figured out how to do this yet. And as the systems become more capable, the challenge becomes more urgent. We don't have infinite time.

    25:00

    Anders Sandberg: The faster we develop powerful AI, the less time we have to solve the alignment problem. It's a race between capability and wisdom.

    Host: You've also spent a considerable amount of time thinking about space settlement, which is, in some ways, the ultimate long-term future project. How does that fit into the management of existential risk?

    Anders Sandberg: Space settlement, having human settlements off-Earth, is a form of insurance policy against existential risk. If there's a global catastrophe on Earth—whether it's an asteroid, a nuclear war, or a super-pandemic—having a self-sufficient colony on Mars or the Moon means humanity can survive and recover. It's a way of hedging our bets. It's not a solution to the risks, but it ensures that the "long-term potential of humanity" is not completely wiped out by a single event. It also gives us a vast new arena for development and a new kind of future to explore, which I think is inherently valuable.

    Host: So, it's a way of increasing the total "surface area" of human civilization.

    Anders Sandberg: Exactly. It's about putting our eggs in more than one basket. We've spent our entire history with all our eggs in one basket, the Earth. And it's a wonderful basket, but it's fragile. Having a second, independent, self-sustaining settlement would massively reduce the total existential risk to humanity.

    Host: Switching gears, your work also delves into neuroethics. When we talk about brain implants and cognitive enhancement, what are the primary neuroethical concerns that you believe we should be focused on today?

    Anders Sandberg: The main concerns revolve around identity, autonomy, and justice. When you start messing with the brain, you're messing with the very core of what it means to be a person. If a brain implant alters your personality or your memories, who is the "you" after the change? That's the identity question. The autonomy question is: Are you freely choosing this, or are you being subtly coerced, perhaps by societal pressure to "keep up" with enhanced people? And the justice question, as we discussed, is about access and fairness. If these enhancements are only available to the wealthy, it could create a biological aristocracy. We also need to think about security and privacy. A brain implant is a window into your mind, and it's a window that can be hacked. So, data security and mental privacy become paramount.

    30:04

    Anders Sandberg: It's a new frontier of ethical issues, and we need to have these discussions before the technology is fully deployed. We need to set the standards now.

    Host: And you've done a considerable amount of work on forecasting methods. When you look at the track record of past predictions for the future, why do you think we're so poor at it?

    Anders Sandberg: We're bad at predicting the future for a few main reasons. First, we tend to extrapolate, assuming the future will be a continuation of the past, which often misses big, discontinuous changes—the black swans. Second, we have cognitive biases. We tend to focus on what's visible now, or what we want to happen, rather than a more objective assessment. Third, and most importantly, the future is fundamentally about decisions and innovation. If I predict that we're going to have a massive climate disaster in 50 years, that prediction might inspire action today that changes the outcome. So, the act of forecasting can actually change the future itself, making the prediction wrong. Forecasting is not about predicting a fixed future; it's about exploring possible futures and understanding the levers we can pull today to get a better outcome.

    Host: So, it's more of a navigation tool than a map.

    Anders Sandberg: That's a perfect analogy. It's a navigational tool to help us steer. It's not a precise, pre-drawn map of what will be. If a ship's captain predicts they're going to hit an iceberg, they don't say, "Well, I guess we're going to hit the iceberg." They change course. We need to be like that captain.

    Host: And in your work, you also delve into the very long-range future—millions, even billions of years from now. What are the key concepts that you explore in that domain?

    Anders Sandberg: The very long-range future is mostly about the constraints of physics and astronomy. We're thinking about the fate of the universe. What's the maximum amount of life that could ever exist? How can we maximize the value of that future? It sounds abstract, but it helps us put our present-day problems into perspective. For example, if we wipe ourselves out tomorrow, we're not just losing the next 100 years; we're losing potentially trillions of years of valuable future. That scale gives us a moral imperative to survive and to manage our risks. The concepts involve things like the potential for colonization of the galaxy, the limitations of energy and computation in a far future, and the idea of "astronomical waste"—the resources we are failing to use to promote life and consciousness throughout the cosmos.

    35:00

    Anders Sandberg: It's a way of saying, "Let's zoom out and ask: what's the grandest possible project humanity could ever embark on?"

    Host: That's a spectacular vision. You speak about the need to be wise about how we shape the future, not accidentally run into it. And when you look at current public policy, especially in the US and Europe, do you see policymakers being wise, or do you see them accidentally running into the future?

    Anders Sandberg: That's a tough question. I see pockets of wisdom. I see people trying to be very thoughtful, especially around AI safety and certain aspects of climate policy. But in general, policy tends to be very short-term oriented, driven by election cycles and immediate crises. The long-term is often neglected. We tend to spend a lot of time fire-fighting, and not enough time planning for the things that are still far away but could have massive impacts. For example, the biosecurity issue: It's a complex, international problem that requires global coordination and long-term investment, and it's very hard to get policymakers to focus on that when they have to deal with the next fiscal budget or the next election. So, I would say it's a mix. We're not accidentally running into it completely, but we're certainly not steering as effectively as we could be. We need institutions that are dedicated to thinking about the really long term, beyond the next few years.

    Host: You've been a vocal proponent of what's called transhumanism. For the audience, can you briefly explain what transhumanism is?

    Anders Sandberg: Transhumanism is essentially the view that we can and should use technology to overcome our biological limitations. It's the belief that we can and should become "post-human," or at least significantly enhanced. It’s a very simple idea: we look at all the problems of humanity—disease, aging, limited intelligence, suffering—and say, "Why don't we try to fix that?" It's a technological project, a philosophical stance, and a cultural movement all rolled into one. It embraces enhancement, life extension, and the idea that our current human form is not the final stage of evolution.

    Host: Is there a specific limitation that you believe, in the near term, we can most readily overcome with technology?

    Anders Sandberg: I think the most readily overcome limitation is cognitive bias. We're terrible at thinking rationally and seeing the full scope of a problem, especially when it involves uncertainty and the long term.

    40:01

    Anders Sandberg: We have amazing formal tools like probability theory, decision theory, and forecasting methods, but we're bad at using them consistently. We let our emotions and biases get in the way. So, I think the most powerful near-term enhancement is not a neurochip, but better epistemology—better ways of thinking and knowing. This could involve using decision-support software, training in critical thinking, or even certain psychological interventions to reduce cognitive biases. If we could just make our collective thinking 10% more rational and less biased, I think the effect on managing risks like climate change and AI would be profound.

    Host: That's a fascinating answer. I would have thought aging, but I can see how, if we can't solve the problem, extending our lives would just extend the problem.

    Anders Sandberg: Exactly. Extending our lives is a fantastic goal, but if we don't also get smarter about managing the future, we're just going to have old, biased, and shortsighted people making the same mistakes for a longer period of time. So, wisdom and intelligence need to be the priority. Aging is a very close second, though. It causes immense suffering, and I believe it is a curable disease.

    Host: In your work, you delve into the idea of managing long-term futures, and you speak about the concept of uncertainty. How do you model and manage uncertainty when you're looking millions of years into the future?

    Anders Sandberg: Uncertainty is everything in the long term. The classic way to deal with it is to not predict a single outcome, but to paint a spectrum of possibilities—what we call "scenarios." We try to identify the major variables and how they could branch. For example, will we settle space? Yes/No. Will we develop super-AI? Yes/No. This gives you a set of four major futures, and then you can analyze what strategies work well across all those scenarios. This is called robust decision-making. Instead of trying to find the best path in a predictable world, you try to find a path that is good enough in a highly uncertain world. Another tool is to identify "pivotal points" or "branch points"—moments in time where our action has a disproportionately large effect on the future. AI development is one of those pivotal points right now. Focusing our limited resources on those points is a key part of managing uncertainty.
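
    To make the scenario-and-strategy idea concrete, here is a minimal sketch in Python of the kind of maximin ("good enough in every scenario") analysis Sandberg describes; the strategy names and payoff numbers are invented for illustration and are not taken from the conversation.

```python
# Minimal sketch of scenario-based robust decision-making.
# The scenarios come from two yes/no variables, as in Sandberg's example;
# the strategies and payoff numbers below are illustrative assumptions.

from itertools import product

variables = {"space_settlement": [True, False], "super_ai": [True, False]}

# Enumerate the four major futures.
scenarios = [dict(zip(variables, values)) for values in product(*variables.values())]

# Hypothetical payoff of each strategy in a given scenario (higher is better).
def score(strategy, scenario):
    payoffs = {
        "invest_in_ai_safety":   8 if scenario["super_ai"] else 5,
        "ignore_long_term_risk": 1 if scenario["super_ai"] else 6,
        "diversify_off_earth":   7 if scenario["space_settlement"] else 4,
    }
    return payoffs[strategy]

strategies = ["invest_in_ai_safety", "ignore_long_term_risk", "diversify_off_earth"]

# Robust (maximin) choice: pick the strategy whose worst-case scenario is least bad,
# rather than the one that is optimal in a single predicted future.
robust = max(strategies, key=lambda s: min(score(s, sc) for sc in scenarios))
print(robust)  # -> "invest_in_ai_safety" with these illustrative numbers
```

    The maximin rule is what makes the choice "robust": a strategy is judged by its worst enumerated future, not by the single future we happen to expect.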

    Host: Pivotal points—that's a really interesting concept. Can you give us another example of a pivotal point that you see on the horizon?

    Anders Sandberg: Another pivotal point is the governance of technology, particularly in a world that is becoming more polarized and multipolar.

    45:00

    Anders Sandberg: If we get into a situation where different major powers are locked in a technological arms race—say, an AI arms race—the pressure to cut corners on safety and alignment becomes immense. The development of norms, treaties, and institutions for global technological governance, or the lack thereof, is a massive pivotal point. If we can establish some shared rules of the road for dangerous technologies, we improve the future greatly. If we fail, we could be looking at a very unstable and dangerous century.

    Host: Given all the risks and challenges, from unaligned AI to engineered pandemics, what is the source of your optimism? What makes you look at this and say, "We can do this?"

    Anders Sandberg: My optimism comes from two main things. First, the incredible power of human problem-solving. We have solved so many problems in the past that seemed absolutely insurmountable at the time, from smallpox to widespread famine in many parts of the world. We have this amazing capacity for innovation and learning. Second, the sheer amount of future that is at stake. When you realize that we are talking about potentially trillions of years of life and value, even a small chance of success is worth fighting for with every ounce of our being. The potential gain is so enormous that it makes the effort worthwhile. It's not a naive optimism, but a willful, strategic optimism rooted in a belief in human agency and our track record of overcoming challenges.

    Host: That's a powerful statement. When you speak about the trajectory of the future and how our actions today are shaping it, how do you manage the trade-off between near-term needs and long-term survival? For example, a developing nation may need to use cheaper, polluting energy now to lift its population out of poverty, but that clearly accelerates climate risk for everyone's long term.

    Anders Sandberg: That's the classic trade-off, and it is a moral and economic challenge. The key is to find solutions that are win-win for both the near term and the long term. For the energy example, the solution is to make clean energy so cheap and accessible that it is the most economically viable option for every nation, right now. It means massive investment in R&D to drive down the cost of solar, batteries, and other clean technologies, making the polluting choice obsolete. The same principle applies to other areas. If we create safer AI systems, that will make AI more trustworthy and useful in the near term, unlocking huge economic benefits, while simultaneously protecting the long term. It's about clever innovation that collapses the trade-off, turning a zero-sum game into a positive-sum game.

    50:00

    Anders Sandberg: It's a huge task, but I think that is the only robust way forward. It's not about telling people to sacrifice; it's about giving them better tools.

    Host: And you also do a considerable amount of work on neuroethics and the idea of personal identity. Do you think that technology, such as brain implants or extreme cognitive enhancement, fundamentally changes what it means to be human?

    Anders Sandberg: It certainly can change what it means to be human, and that's the whole point of transhumanism—to transcend our current limitations. But I don't think it fundamentally changes what it means to be a person. We've been changing ourselves with technology for millennia, from clothing to written language to glasses. Technology is part of the human condition. A brain implant, for instance, might give you new abilities, but the core of your identity—your values, your memories, your relationships—those are what make you you. The question is, how do we integrate these powerful new technologies without losing the core values that we cherish? I think we will evolve into something quite different, but the process will be gradual, and the new "human" will still be a continuation of the old, just with a vastly expanded toolkit. The challenge is managing the speed of change, so we don't create a massive societal and personal shock.

    Host: What are the ethical standards or governance mechanisms that you believe need to be put in place now to manage this exponential change?

    Anders Sandberg: We need a few core things. First, proactive, interdisciplinary assessment of new technologies. We shouldn't wait for a crisis; we need to bring engineers, ethicists, policymakers, and the public together before the technology is deployed to understand the risks and benefits. Second, flexible governance mechanisms. Because the technology changes so fast, we can't rely on slow, rigid laws. We need adaptive regulation, like "soft law" guidelines and international norms that can be updated quickly. Third, a global commitment to safety and alignment research, especially for AI. It needs to be mandatory, well-funded, and a core part of the development process. And finally, a focus on equitable access. If these powerful technologies are developed, we must have policy mechanisms in place to prevent them from becoming tools that just widen the gap between the rich and the poor.

    55:00

    Anders Sandberg: It's not just about regulating against the bad; it's about steering towards the good.

    Host: That's a great point. You speak about the governance of technology. What role do you see international organizations, like the UN, playing in the development and deployment of these new technologies?

    Anders Sandberg: International organizations have a vital, but difficult, role. Technologies like AI and engineered pandemics don't respect borders, so the solutions must be global. The UN or similar bodies are essential for convening power—bringing all the major players, especially nation-states, to the table to agree on common norms and standards. They can facilitate information sharing about best practices and emerging risks. They can also provide a framework for verification and monitoring, for instance, of advanced biological labs or AI development efforts, to ensure compliance with safety norms. The difficulty is that they are often slow and lack enforcement power against powerful nation-states or private companies. So, while they are necessary, they are not sufficient. We also need multi-stakeholder initiatives involving industry, academia, and civil society to move quickly and set voluntary standards that can later inform global policy.

    Host: It seems like you're speaking about an evolution of governance itself, where it moves beyond nation-states and traditional mechanisms to something more agile.

    Anders Sandberg: Absolutely. The rate of technological change is outstripping the rate of institutional change. Our governance mechanisms are still largely optimized for the 20th century, but the problems are 21st-century, exponential problems. So, we need to be innovating in governance itself. This is sometimes called metagovernance or agile governance. We need to invent new institutions and new ways of cooperating that can respond to fast-changing technologies without stifling innovation. It's a grand challenge for political science and public policy.

    Host: In your vast research, what is one surprising fact or concept that you've discovered that you think the public should be aware of today?

    Anders Sandberg: One surprising fact is about humility in prediction. When you look at the track record of experts making long-range predictions, they are often no better than chance, unless they are willing to update their beliefs constantly and seriously consider views that contradict their own. The people who are best at forecasting are often those who are intellectually humble and use structured thinking tools. The public tends to value confident, charismatic prophets, but the data shows that confidence is often inversely correlated with accuracy. So, the surprising concept is that we should put less faith in confident gurus and more faith in humble, structured thinking.

    1:00:00

    Anders Sandberg: That applies to everything from stock markets to climate change.
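
    As a small, hedged illustration of how forecasting track records of the kind Sandberg alludes to are commonly scored, the Python sketch below computes Brier scores (the mean squared difference between a stated probability and the binary outcome; lower is better). The forecast numbers are made up for the example, but they show how an overconfident forecaster can score worse than a humbler, better-calibrated one.

```python
# Brier-score comparison of two hypothetical forecasters (numbers are invented).
# Brier score = mean of (forecast_probability - outcome)^2; lower is better.

def brier(forecasts, outcomes):
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Outcomes of five binary events (1 = it happened, 0 = it did not).
outcomes = [1, 0, 1, 0, 0]

# A confident forecaster who always says 95% or 5%, and is badly wrong once.
confident = [0.95, 0.95, 0.95, 0.05, 0.05]

# A humbler forecaster who hedges around 70% / 30%.
humble = [0.7, 0.3, 0.7, 0.3, 0.3]

print(brier(confident, outcomes))  # ~0.18: the one big miss is heavily penalized
print(brier(humble, outcomes))     # 0.09: better despite less dramatic confidence
```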

    Host: That is a fantastic point. The wisdom of humility. You also speak about the importance of being able to fix problems as they arise. You use the analogy of inventing the seatbelt for a car. Can you explain that in the context of emerging technologies?

    Anders Sandberg: Yes, the seatbelt analogy is simple but profound. When the car was invented, it created a massive new problem: traffic fatalities. A purely fatalistic approach would be to say, "Well, speed kills, so we must ban cars." A purely hubristic approach would be to say, "We will invent a car that never crashes." But the smart, engineering-minded approach was to accept the car's existence and try to mitigate its most immediate harm. So, we invented the seatbelt, the airbag, and better roads. These were small, targeted, and highly effective interventions. For AI, the analogy is: we can't stop AI development, but we can invest in "seatbelts" like better alignment algorithms, interpretability tools, and fail-safes. For engineered pandemics, the seatbelt is better biosecurity standards and rapid vaccine platforms. It's about fixing the immediate, obvious failure modes—the "low-hanging fruit" of safety—while we work on the deeper, long-term problems. You don't have to solve all the problems of the car (like traffic jams or pollution) to at least prevent people from flying through the windshield.

    Host: That's a great example of a robust decision strategy. It focuses on the near-term fixability to ensure long-term survivability.

    Anders Sandberg: Exactly. It's a kind of damage control. We're going to make mistakes. We're human. The key is to design systems that are resilient to our mistakes.

    Host: So, you think that a bit of humility about our ability to predict the future, combined with the encouraging hubris to actually try to take charge of our lives and our future, even though we often fail at it, gets us better results than leaving everything to chance or fatalistically saying, "Oh, what happens, happens." Because that usually means somebody else decides what happens for you, and that's usually not good for you.

    Anders Sandberg: It may seem like a random walk, but at the same time there are evolutionary pressures driving us one way or another, and in some cases those pressures also stimulate thought and innovation to overcome particular challenges, whether it's climate change, viruses, or any number of other things.

    1:06:30

    Host: But with all of that, Anders, this has been a privilege and an incredibly stimulating, fascinating conversation. The work that you're doing is tremendous. I don't see how you have the time to cover so many different fields. So with that, Anders, thank you very much. Thank you for your work, and thank you for spending time with us today here on unNatural Selection.

    Anders Sandberg: Thank you.

Nic Encina

Global Leader in Precision Health & Digital Innovation • Founder of World-Renowned Newborn Sequencing Consortium • Harvard School of Public Health Chief Science & Technology Officer • Pioneer in Digital Health Startups & Fortune 500 Innovation Labs

https://www.linkedin.com/in/encina