Episode #184 - Is Artificial Intelligence really an existential threat?





This episode challenges the popular idea that technology is a neutral tool, asking instead whether each technological advance carries its own moral trajectory based on the power it grants and the systems it shapes. Building on last episode's critique of ChatGPT's limitations, the discussion explores how narrow AI, like large language models, might evolve into general intelligence, not by mimicking humans but by developing new, non-human forms of intelligence. Referencing thinkers like John Searle, Daniel Dennett, Sam Harris, and Stuart Russell, the episode lays out the major concerns surrounding AGI: the alignment problem, instrumental convergence, and the limits of containment. Through vivid analogies, such as an AI viewing humans the way humans view bees or bugs, the episode paints a chilling but plausible picture of an intelligence that could surpass human understanding without ever meaning us harm. It concludes by reframing the AGI debate as part of a larger question: how should we relate to technology in an era when innovation vastly outpaces regulation? Whether or not AGI is ever realized, the conversation calls for a new level of responsibility, reflection, and global coordination around the release of high-stakes technologies.

Further Reading:

  • The Alignment Problem: Machine Learning and Human Values by Brian Christian (2020)

  • Human Compatible: Artificial Intelligence and the Problem of Control by Stuart Russell (2019)

  • Life 3.0: Being Human in the Age of Artificial Intelligence by Max Tegmark (2017)

See the full transcript of this episode here.


Thank you to everyone who makes this podcast possible.

I could never do this without your support! :)

Previous: Episode #185 - Should we prepare for an AI revolution?

Next: Episode #183 - Is ChatGPT really intelligent?