Episode #184 - Transcript

So I want to start out the podcast today with a question that we’ll return to at the end of the episode. 


You know you talk to some people about technology in today’s world…something goes wrong with a technology, something horrible happens…and you ask people what they think about it and they say, you know, technology ITSELF is not bad. Technology… is a tool. It’s neutral. Whether it’s good or bad…that just depends on how someone’s using it. And OUR job… as a people…ALL WE CAN DO… is to try to incentivize the GOOD actors in the world and DETER the behavior of the bad. We’ve ALL heard people SAY this kinda stuff before but the question is: SHOULD we really be THINKING about it that way?


SHOULD we be thinking of technology as this class of neutral things that can’t be judged as good or bad…or is it possible EACH piece of technology carries with it a type of latent morality just given the capabilities that it HAS to affect the society it’s a part of? You have to ask yourself: is TikTok a neutral piece of technology? Are nuclear weapons neutral? Do we have the LUXURY anymore of THINKING about technology in this way?


Hopefully these questions and me being awkwardly dramatic right now will make a LOT more sense by the END of the episode but for now… let’s build a bit of a FOUNDATION for the discussion we’re having today. 


Last episode we talked about ChatGPT…and we got past what’s an initial intuition that someone might have who’s just getting INTO these conversations about AI. That WHEN ChatGPT TALKS to me and it SOUNDS like it’s an intelligent person…that IT MUST…in ITS processing unit…be DOING the SAME kinds of things that intelligent PEOPLE are doing in their BRAINS… when THEY have a conversation. 


Now CLEARLY that’s NOT what ChatGPT is DOING. But it should be SAID…the fact it’s not doing all of that…that DOESN’T make ChatGPT unintelligent…and it CERTAINLY doesn’t make large language models not…scary. 


There’s a type of person that comes to the defense of ChatGPT here… a really, really SMART kind of person…they say: “well, how do YOU know ChatGPT isn’t doing EXACTLY what we’re doing? Do you know for sure? Hmm? Maybe all we DO as people is just probabilistically predict what comes next.”


THAT’S not the best argument if you want to DEFEND ChatGPT…you know, to try to make a case for why it’s ACTUALLY DOING what people are doing…no, the BETTER argument… is to say that WHAT MAKES ChatGPT scary…is that it doesn’t HAVE to be doing what a human being is doing…because we’re NOT actually aiming to reproduce a human intelligence at ALL, necessarily. 


Let me explain what I mean. Two points we gotta talk about and then we’ll bring them together. 


The FIRST point is this: LAST episode, John Searle talked about syntax and semantics… and about how the person INSIDE the Chinese Room can’t POSSIBLY understand Chinese from just sitting there in a room manipulating symbols. 


Someone could say BACK to that…and they’d be invoking the most COMMON response BACK to the Chinese Room argument, called the systems response…they could say yeah… you’re right… the person in the room doesn’t understand a WORD of Chinese…but neither does any SINGLE subsystem of my brain understand the language I’M using right now. 


The neurons that make UP my brain don’t understand language, the single PARTS of my brain don’t…but the whole SYSTEM…DOES. 


Could it be that the LEAP from syntax to semantics…goes on at the level of the entire SYSTEM instead?


That’s the FIRST point, hold onto it for a minute, here’s the second point:


IF we’re making the case that we’re not actually AIMING for creating a human intelligence anyway…well then what sort of intelligence ARE we trying to create? Seems important to ask the question: what IS intelligence, more fundamentally? 


And what you quickly realize as you start trying to ANSWER that question is that… coming up with a rigid definition for intelligence… is about as fruitful as it was for Socrates when he was harassing people back in ancient Athens. Nobody EVER agrees on a definition in these conversations…but there ARE a couple important things to take from the people that have tried. 


One thing that CERTAINLY seems to be the case…VERY important… is that the definition of intelligence is CLEARLY NOT just: “whatever human beings are doing”. Meaning human intelligence is not the ONLY kind of intelligence. Intelligence is something more BROADLY definable than that. 


Intelligence exists in animals, it exists in complex systems in nature… and it’s not controversial to think that it MAY be able to exist in machines as well.


So how WOULD we broadly define intelligence if we were going to try?

And again there’s A LOT of people in these conversations that do their best to give a definition…I HOPE people are satisfied enough with THIS one as a preliminary definition…I just want to be able to have a conversation today. Is it TOO far off to say intelligence is the ability of something to understand, learn, solve problems, adapt to new situations, and generate outputs that successfully achieve its objectives? Hopefully that’s close enough to get started. 


Well under THAT definition… then something like ChatGPT is already intelligent. The question becomes just a matter of degree. And when philosophers and scientists that are in the business of building machines that are progressively more intelligent…when THEY talk about this stuff… they TYPICALLY divide intelligence into three broad categories: narrow intelligence, GENERAL intelligence and SUPER intelligence. 


Narrow intelligence is something like ChatGPT, or image recognition technology…or a chess computer. Look, a chess computer can outperform EVERY HUMAN…it can beat the best players at chess in the world… if all it has to do is play CHESS against them. Which is to say: if its narrow intelligence is confined to a CLOSED system with a set of rules like the game of chess. That’s NARROW intelligence. 


But creating a GENERAL intelligence like we have…one that’s navigating the open world, setting goals, learning, adapting…THAT’S something most computer scientists think is just entirely DIFFERENT. That it’s a WHOLE different LEVEL of intelligence…and a whole different architecture of how the information is processed. 


And NOW to bring these two points together:

 

A person coming to the DEFENSE of ChatGPT here might say… that JUST because ChatGPT hasn’t achieved GENERAL INTELLIGENCE yet…doesn’t mean that it’s not a MAJOR step in that direction. 


This person might say it doesn’t take much to imagine this narrow intelligence of ChatGPT… that by the way is doing TONS of stuff it wasn’t even optimized to DO…imagine IT… linked together with 50-100 OTHER narrow intelligences, each performing different FUNCTIONS, all of them communicating with each other, within a larger SYSTEM. Could it be… that THAT is where general intelligence emerges?


Consider the similarities here to the theories of consciousness that we’ve already talked about. To Daniel Dennett’s multiple DRAFTS model in our “consciousness is an illusion” episode…where multiple, parallel processes, ALL communicating with each other, create the ILLUSION of our phenomenal consciousness. Point is: could it BE that our mind and our SYSTEM of GENERAL intelligence is made up of various subsystems… and that no ONE of those subsystems EVER understands EVERYTHING that the whole BRAIN is doing, LIKE the person in the Chinese Room…but that in aggregate…is it possible these things could work together to produce something…that’s capable of navigating the open world in a general intelligence sort of way?


No matter WHAT side of this debate you’re on…at THIS point your ANSWER to that question HAS to be…that we just don’t KNOW. And on TOP of that CONSIDER the fact…that in order for general intelligence to be a thing…we don’t even NEED one of these machines to HAVE consciousness or a mind. All we NEED is general intelligence. 


So there’s people out there who will say oh, we got nothing to worry about with AI. Cause these things are still decades away from EVER being able to do what a human MIND is doing…and THOSE people are probably RIGHT. 


But HUMAN intelligence… is a straw target, people say in these conversations…that’s not even what the people working in this area are AIMING for anyway.


Where the MONEY is, the quadrillions of dollars it’s estimated, and where the scientific PRESTIGE and political POWER is in this field…is not in cracking HUMAN intelligence…but in solving GENERAL intelligence. Which… to say it another way: we’re not trying to create a PERSON here, get THAT idea out of your head…we’re trying to create an entirely different SPECIES. 


And THAT’S WHAT MAKES…Artificial Intelligence IN its current form… a pretty scary thing. 


The people at the top of these conversations about the risk of AGI…these are NOT PEOPLE who typically are screaming from the rooftops…the ROBOTS are COMIN MAN! Come on down into my basement with me I got canned peaches we can live here forever!


No, these people are usually very reasonable. What THEY’RE saying is…given the impoverished state of our understanding of how the mind or intelligence even operates…we DON’T KNOW, we WOULDN’T know how close or far away we ARE from a General Intelligence. Our PREDICTIONS certainly aren’t a good sign because we keep on being WRONG. 


And UNFORTUNATELY… it’s NOT good enough on this one to just wait it out…and see what happens…because on THIS one…the person on the AI risk side of things would say that if we DON’T have these conversations now and understand the unprecedented STAKES of the situation…we may not HAVE a world to be able to FIX this situation in as we go. 


There’s a lot of philosophers that have tried to paint a picture for people of what it would be like to live in a world with super intelligent machines. And if you’re somebody SKEPTICAL of this whole possibility, bear with me for a moment. As we DO on this show we’ll talk about the counterpoints, but I just think it’s important to visualize what this world might feel like to someone who’s living in it. Helps to make this whole discussion… a little more REAL for people. 


But one philosopher who’s FOCUSED on painting this picture in recent years, an absolutely brilliant communicator among other things: is Sam Harris. Now, Sam sets up this thought experiment… by asking people to accept what HE thinks are two very simple premises.


The first premise…is that substrate independence is a reality…meaning that there’s nothing magical about the meat computer of the brain that ALLOWS for this general intelligence. That a general intelligence could be RUN on something like silicon mirroring a neural network. That’s the first premise. The SECOND premise…is SIMPLY that we just KEEP MAKING…incremental progress. As he says it doesn’t have to be Moore’s law…it JUST has to be consistent progress in the direction of a general intelligence…and EVENTUALLY he says…we will BE at HUMAN levels of intelligence… and by necessity, far GREATER levels of intelligence. Super intelligence eventually. 


It’s easy to picture all the GOOD things about having a super intelligence on your side. Picture a world without disease, inequality, even death. It all sounds so wonderful. It’s also easy to picture  all the super intelligent ways this thing would be coming for your canned peaches if it wanted to.  


So maybe the more interesting thing to do, what Sam Harris asks people to do…is just to imagine yourself in that initial place of ambiguity…try to imagine yourself standing in the presence… of one of these super intelligent beings. Try to picture it in your head right now. What do you picture? What does it look like? How do you feel?


Well one thing to keep in mind as you’re FORMING that picture in your head… is that this thing…WHATEVER it looks like…would NOT be constrained by biology in the same way that you or I are. It doesn’t need to have two legs and two arms. Remember… this is NOT a person. This is an entirely different SPECIES that we’ve created. This thing could look like a fire hydrant. It could look like a floating orb if it wanted to. It doesn’t even have to necessarily TAKE a physical form. 


But let’s say that it DID. And let’s say that it had EYES that are expressive of how it’s feeling in the same way a mammal’s eyes are expressive. How would this thing look at you…how does something look at a creature that’s THOUSANDS of times less intelligent than IT is? How does a general intelligence without biological restrictions, with cognitive horizons your brain can’t actually even BEGIN to comprehend, how does THAT thing look at you? 


Well, it wouldn’t look at you like a lion would look at you. Like a predator that wants to eat you. This thing doesn’t want to eat you. 


It wouldn’t be like looking… into the eyes of a bear…when a bear looks at you, at least BLACK bears up here in the northwest…they DON’T usually look at you like they want to EAT you…a BEAR looks at you with eyes that are kind of curious, like they got nothing to worry about in the entire world…and they’re just kinda looking THROUGH you right now. You’re like a Netflix show they’re mildly interested in. 


But when it comes to a superintelligence, it, SEEMINGLY, wouldn’t even look at you like that…given time to study you, it would already KNOW almost everything about you. Maybe the CLOSEST comparison is that it might look at you… the same way WE look at something like a honey bee. Living in a hive. 


Quick question for you: do you think honey bees worry about racism going on in the public school system? I mean, why not? Why not? It’s an incredibly important moral issue that’s FACING us right now. Are honey bees just uncaring? 


No, obviously…a BEE… doesn’t have the cognitive capacity to CARE about stuff like that. What IT cares about is what’s going on in the world of its HIVE. And while these moral dimensions CLEARLY exist at OTHER levels of intelligence, for other TYPES of intelligent creatures…there is no amount of explanation that is going to bring a honey bee up to speed on why this is an important issue that needs to be addressed. 


Point is: WE HAVE TO ASSUME…that the moral dimensions of a superintelligence would relate to OUR level of intelligence the same way ours relate to the honey bee’s. We have to assume that this thing would be progressively learning about its surroundings, progressively adapting, coming up with new goals…and that these goals could be of a scope that is literally impossible for our brains to fathom. 


You know Sam Harris compared it at one point to the relationship between BIRDS and human beings. How if you were a bird…and you HAD the ability to THINK about your relationship to human beings…you are essentially living every day of your life just hoping that human beings don’t find something that they like MORE than the existence of birds. And that to a BIRD… our behavior has to be pretty confusing… sometimes we look at them. Sometimes we don’t. Sometimes we kill huge numbers of them for what to THEM must seem like ABSOLUTELY no reason at all. 


Picture that you are now living alongside a BEING… that is like THAT to you. Where you are looking at this thing…and you really are like a cat that is watching TV. It all looks very familiar to you…but you can’t possibly comprehend the true depth of what’s going on. 


The SIMPLE place for the brain to GO here is to say: well that’s TERRIFYING. I mean what if this thing decides one day it doesn’t like us and it just kills us all! 


Maybe I’ll be really NICE to it. You know I’m kind of CHARMING when I want to be. Maybe if we’re all really NICE to this thing it’ll cure cancer for us and make new shows for us to watch! 


Why would this thing be MEAN to us if we never give it a REASON to be mean?


But the MORE interesting thing to consider Sam Harris says… and this is an adaptation of an idea from the computer scientist Stuart Russell…but he says that an AI that is on the level of SUPER intelligence…wouldn’t even need to HAVE malicious intent towards humanity in order for it to be DANGEROUS to us. 


He says, you know, in the same way… you buy a plot of land and you build a house on it…and AS you’re BUILDING that house… throughout the whole process you kill hundreds if not THOUSANDS of bugs…and it’s NOT because you have any evil feelings towards BUGS…you just don’t consider their existence…the goal you’re trying to accomplish is of a scope and a level of importance where the bugs you’re going to have to kill are not even something that crosses your mind. 


A superintelligence wouldn’t NEED a nefarious motive to be able to DO things that make its EXISTENCE dangerous to people. 


Now somebody could say back to all this wow…wow that’s a wonderful picture that you’ve painted there. A great FANTASY world where a superintelligence exists…but this stuff isn’t going on in the REAL world yet. And there’s a LOT of assumptions you have to MAKE to be able to GET to this point in the fantasy. 


This person might say it STARTS to almost FEEL… like you guys are LARPing over here. You guys have heard of LARPing right? Those dudes that go out into the woods in wizard costumes and create a mythical world that they can fight in and shoot magic missiles at each other. I’m SURE you’ve all seen the videos of people LARPing on YouTube. Point is: IS that what’s going on here? ARE these AGI risk people… just a bunch of people creating a fantasy world… where they can argue with each other about this stuff…playing a bunch of different characters, where anybody ELSE that wants to play HAS to accept a WHOLE TON of PREMISES before they GET to where they’re HAVING these conversations?


Well let’s TEST that criticism…and let’s DO it in the form of a dialogue. CLASSIC format in the history of philosophy. And AS you do in a dialogue, let’s hear from BOTH sides of the argument: on one side from the AI RISK person who’s ALARMED by the idea of general intelligence…and on the OTHER side from the skeptic…who’s not BUYING any of this. Let’s start with them.


The skeptic could say…for you to be SCARED about the possibility of an AGI in today’s world…you have to be making a TON of assumptions about HOW this thing’s gonna behave…and MOST of those assumptions come DOWN to the fact… that you’re PROJECTING your HUMAN way of thinking …ONTO this thing that is NOT gonna be thinking like a human. That’s your problem!


For example just to name one of MANY…the SURVIVAL instinct we’re all born with. How can ANYONE in their right mind assume that we’re gonna TURN this AGI ON for the first time… and that it’s gonna be THINKING like a survival oriented creature like WE are?


How can you worry about something like a preemptive strike against people because it’s SCARED it’s gonna get turned off? That sounds like science fiction. I mean you HAVE to remember this thing is essentially a CIRCUIT board. You GIVE this thing a GOAL… and it executes it. If we don’t PROGRAM all these nasty SURVIVAL instincts into it…it won’t ever HAVE them!


But somebody on the AI risk side of things might say back: well you’re right that people project their humanity onto these things WAY too much…but what you’re saying about survival…is just not true…and we KNOW it’s not true…. because we’ve run the experiments to test it out. 


The concept’s called Instrumental Convergence. The idea is simple. Whenever ANYTHING, even an artificial intelligence, is given some sort of primary GOAL that it has to carry out…there are certain SUB GOALS, lower level goals, that NATURALLY are REQUIRED to be able to carry out the PRIMARY goal. Survival almost ALWAYS ends up being one of them, though there are a LOT of goals that instrumentally converge. 


The philosopher Eliezer Yudkowsky writes, “Whether your goal is to bake a cheesecake, or fly to Australia, you could benefit from matter, energy, the ability to complete plans and not dying in the next five minutes.”


So the person on the AI risk side of the argument could say even if the AGI DOESN’T have a survival instinct programmed INTO it…it will STILL have a desire to survive so that it can carry out whatever goal IS programmed into it. And by the way, you can FIND this SAME distinction between Nietzsche and Schopenhauer 150 years ago. 
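
If it helps to see the shape of that idea outside of prose, here’s a minimal, purely illustrative Python sketch of instrumental convergence. Everything in it is invented for the example…the only point is that a naive planner ends up attaching the SAME instrumental subgoals…staying operational, getting resources…to ANY terminal goal you hand it.

def plan(terminal_goal):
    """Return a to-do list for an arbitrary terminal goal (toy illustration only)."""
    instrumental_subgoals = [
        "don't get shut off before the goal is finished",
        "acquire enough matter and energy to act",
        "protect the ability to keep executing the plan",
    ]
    # Notice the subgoals don't depend on the goal at all -- that's the convergence.
    return instrumental_subgoals + [terminal_goal]

for goal in ["bake a cheesecake", "fly to Australia", "make paperclips"]:
    print(goal, "->", plan(goal))

Obviously a real agent doesn’t have these subgoals typed in by hand…the worry is that they fall out of ANY sufficiently capable goal-pursuit on their own, which is exactly the point of the cheesecake quote.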


But then the skeptic could come back… and say okay, point taken. But honestly…survival is the LEAST of it…how about all the OTHER things people start to assume about AI? That it’s gonna be hostile. That it’s gonna want to be in charge…that it’s gonna be thinking like a chimpanzee in terms of hierarchies like WE do? Why are you assuming that…look just because your DAD had toxic masculinity…doesn’t mean my ROBOT’S gotta have it. 


Because look: something having a higher level of intelligence… does NOT necessarily MEAN that it’s gonna be HOSTILE towards everything that’s less intelligent. For example, a DEER…is FAR MORE intelligent than a small predator like a spider. But when you’re out camping in the woods…nobody ever worries about a deer coming and attacking you while you’re asleep. Looking for your leftover salad. 


This person might say this is transparently, just another example of us being scared of our OWN worst tendencies as people, manifesting in something else that’s way stronger than us. But we can PROGRAM this AI to NOT have those tendencies. And we have NO REASON to ASSUME this thing WOULDN’T HAVE a moral framework that’s COMPASSIONATE towards other living things. Where it USES that SUPERINTELLIGENCE… to come up with creative ways to accomplish its goals and NOT mess with anything else. 


And I think somebody on the AGI risk side of things might say okay, I guess…you’re right that we can’t KNOW whether we turn this thing on and it is just the nicest being we’ve ever come across. And we CAN’T KNOW for sure whether, AS this thing extends into its practically infinite cognitive horizons, for THOUSANDS of years it will just always have our well-being as its number one priority. It’s NOT that any of what you just said is impossible…it’s just…as Sam Harris says…a STRANGE thing to be CONFIDENT about. 


We don’t know if it will GO that way. We don’t know if it WON’T. Which is WHY…EVERYTHING you’ve been talking about so far by the way, the survival instinct…the chimpanzee level hostility…in other words whether or not this intelligence is ALIGNED with human values…this is one of the BIGGEST conversations that’s going on in this area. 


If we ACCEPT this thing is more intelligent and powerful than we can possibly comprehend…the QUESTION is HOW do we make sure its VALUES are NOT ONLY aligned with our values right now…but HOW do we make sure that goes on endlessly into the future…even as our values change? This is often called the ALIGNMENT problem by people HAVING these conversations.  


And it can be TEMPTING to think: look, we’ve been doing moral philosophy now for thousands of years. Don’t we have some common sense VALUES we can program INTO this thing and not have to worry about it? Isn’t that what all those hours rambling about trolleys was supposed to be about? Objective moral ideals?


But it’s not that simple. As Eliezer Yudkowsky puts it: we don’t know how to get internal, psychological goals into systems at this point…we only know how to get outwardly observable behaviors into systems. 


So, if there was some sort of domino effect that happened that we didn’t see coming where a super intelligence suddenly EMERGED over the course of the next 30 days…we wouldn’t know… how to program these values IN… even if we HAD them, which we don’t. And EVEN if all human beings could AGREE on which values to put in, which we CAN’T…we’d STILL be in a place …where we’re at the mercy of EVERY unintended consequence that you can possibly imagine… and EVEN all the ones you DIDN’T imagine. 


There’s a famous thought experiment in these conversations that illustrates this point, by the philosopher Nick Bostrom. You’ll hear about it if you’re TALKING to people about AGI. It’s called the paperclip MAXIMIZER. Imagine a world where people are trying to align an AGI and they want to play it safe, so they give it what SEEMS to THEM… to be a totally SIMPLE goal to accomplish. They tell it to make paperclips. And by the way, get as EFFICIENT as you can at MAKING those paperclips… and make as MANY of those paperclips as you possibly can.


It starts out great…the thing’s making paperclips. But THEN it starts improving. Starts mining the earth’s resources…eventually develops nanotechnology, eventually starts mining human BEINGS as sources of carbon to find ways to make more paperclips. The point of the story is: EVEN something as seemingly innocuous as making paperclips, when the stakes are AS HIGH as when you have a superintelligence executing this stuff that is out of your control…even making paperclips can have devastating unintended consequences. 
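
And if it helps to see why “make as many as you possibly can” is the dangerous part, here’s a toy Python sketch…every resource name and quantity in it is made up…of an optimizer whose objective never says that anything is off-limits:

# Toy sketch -- all names and numbers invented for illustration.
# The objective is "make as many paperclips as possible," and nothing in it
# marks any part of the world as off-limits.
resources = {"iron ore": 1000, "forests": 500, "oceans": 800, "everything else": 2000}
paperclips = 0

while any(amount > 0 for amount in resources.values()):
    for name in resources:
        if resources[name] > 0:
            resources[name] -= 1   # convert one more unit of the world...
            paperclips += 10       # ...into ten more paperclips

print("paperclips made:", paperclips)
print("world left over:", resources)   # every value is now zero

The bug isn’t malice. The loop’s only stopping condition is “nothing left to convert”…because nobody wrote a different one.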


And if paperclips are too theoretical for you how about a real world example…you could command a superintelligence…to cure or eliminate cancer, and while you’re DOING it…don’t kill any human beings in the process. Seems like some totally reasonable set of parameters NOBODY could be mad at. 


But then imagine the thing finds a way…. to decapitate human heads and keep the people alive inside of jars…but hey, now they don’t have a body anymore…and I was looking at stats…MOST of the cancer goes on in the BODY. Cancer numbers are WAY down since the heads have gone in jars! I don’t understand what you’re so MAD about?


Now it should be said there’s a lot of people currently WORKING on solutions to the alignment problem. The field seems to be moving AWAY from the idea that we’re EVER gonna be able to come up with strict protocols that account for EVERY unintended consequence imaginable. Both Stuart Russell and Eliezer Yudkowsky are people doing good work in this field if you’re looking for someone to read.


But that said…let’s get back to the SKEPTIC of all this that was in the conversation before. Another POPULAR opinion that people like to say back at this point…. when they’re confronted with JUST HOW COMPLICATED it is to ALIGN a system like this…is that maybe that’s true, but maybe we don’t really need to be PERFECT when it comes to alignment. 


I mean given how COMPLICATED the alignment process seems to be…maybe our REAL goal should be… to just be able to CONTROL the AGI. And THAT seems EASY! As cousins of chimpanzees we are EXPERTS in the violence realm. 


We KNOW this thing would HAVE to run on electricity at LEAST at first. When the thing starts to show signs of getting out of hand…can’t we just UNPLUG it? And that’s a silly way of saying… can’t we launch an EMP at it? Can’t we just shut down the internet or shoot missiles at it? I mean is this REALLY that complicated? We just don’t ever let the thing get too powerful!


People on the AGI risk side of things may ask back do you really think that’s ACTUALLY how it’s gonna go down? Stuart Russell says just from a game theoretical perspective…that’s KINDA like you saying I’m gonna play chess against this supercomputer that’s WAY SMARTER than me, WAY better at chess…but don’t worry the SECOND it starts to BEAT me too bad…I’ll just checkmate it. It’s like…if you could CHECKMATE it on command… you wouldn’t need to be PLAYING against it in the FIRST place to benefit from its intelligence. It’s a STRANGE THING to be CONFIDENT about…that we’ll just NUKE the thing if it gets out of hand, problem solved!


To which the skeptic could say back okay I was joking about the missiles. But aren’t there OTHER ways to CONTROL an AGI…ways that involve us PREVENTING it from ever getting out in the first place? Wouldn’t THAT be a viable strategy?


And YES IT IS. In theory. That’s why NEXT to the ALIGNMENT problem, another BIG area of conversation that’s going on in this field is known as the CONTROL problem or the CONTAINMENT problem. This is why a lot of private companies that are DEVELOPING this technology keep this stuff LOCKED UP… inside of a black box. 


But even with black boxes…the SAME problems start to emerge. You imagine a superintelligence TRAPPED inside of a black box. How do you know this is a PERFECT black box with no way of escaping? Software has bugs. Do we really put ourselves in a spot where the future of humanity relies on a computer coder being perfect once? How do you know… you’re not going to be PERSUADED by this superintelligence to let it out? 


Stuart Russell says to imagine YOURSELF… being trapped in a prison where the warden and the guards are all five year old kids. Now you’re WAY more intelligent than them. You know things they can’t POSSIBLY come up with on their own…and you look at the way they’re acting…and they’re acting in a way that is totally dangerous to themselves, to everyone AROUND THEM, they’re about to burn the whole place down. 


Do you, from INSIDE of your prison cell, try to teach them EVERYTHING they need to FIX the situation…do you teach them all how to read through the bars…teach them how to use power tools, teach them why it’s important to share…do you do it THAT way? Or is the more RESPONSIBLE thing to do to convince one of these dumb five year olds to let you out of the cage…and then HELP them all once you get outside? Again, the thing doesn’t even need to have MALICE towards human beings for it to NOT want to be contained. So BOTH the alignment AND the containment problem… are FAR from things that are SOLVED that we can just wait around on.


And look, ALL THESE CRITICISMS from the side of the skeptic…the person concerned about AGI wouldn’t see all this as ANTAGONISTIC. At the end of the day, from THEIR point of view…this IS the skeptic PARTICIPATING in the discussion about AGI alignment and containment. They’re just at an earlier stage of their understanding of the issue…and welcome to this level of the discussion. 


Now we STARTED this section of the episode by accepting a point from Sam Harris. That there’s only TWO major premises you need to ACCEPT… substrate independence and continued progress… and that if we ACCEPT those we will CERTAINLY, with enough time, be able to produce a general intelligence one day. 


But it NEEDS to be said. There’s people that disagree with that. They’d say he’s making a LOT more assumptions by making THOSE assumptions. He’s smuggling in the computational theory of mind, he’s assuming intelligence is something reducible to rote information processing, he’s glossing over some of the possibilities of embodied cognition…but these people would say that EVEN IF we’re just gonna stick to those two premises he mentioned…those two things are STILL FILLED with a TON of IFs… that we have NO IDEA whether they’re EVER gonna become WHENS. And that those things NEED to be acknowledged. 


For example we have NO IDEA whether the massive progress we’ve seen in the last few years in the field of AI… is just a bunch of low hanging fruit. That we’re going through an INITIAL phase with the software where we’re seeing TONS of development…but that very soon we’re gonna hit some sort of WALL where there is a point of diminishing returns. Seems like that’s at least a real possibility. I mean the ONLY example of general intelligence that we HAVE took BILLIONS of years of evolution. Which is NOT an argument that it HAS to take that long…it’s just an argument trying to illustrate the potential complexity we’re dealing with when it comes to organizing concepts, the ability to create NEW concepts, link them into EXISTING concepts in a meaningful way…that’s a pretty big task that’s not yet being done. 


Another IF is when it comes to hardware. Maybe we SOLVE the SOFTWARE side of this AGI in the next couple years…but maybe it takes centuries …or is IMPOSSIBLE… to have the hardware that can RUN this sort of software. Or maybe it’s just so expensive to run it will never be scalable.


This kind of person could say BACK to Sam Harris: all you’re DOING here is indulging in a very self-justifying LOOP of arguments. You’re saying hey, it’s POSSIBLE for a superintelligence to be among us one day…so let’s have a bunch of conversations about how the WORLD might end… IF that happens! 


But under THAT logic…why not just be worried about EVERYTHING? Why not worry about genetic engineering and chimeras taking over the planet? Why not worry about facial recognition technology? Why not worry about gain of FUNCTION research? 


And to Dr. Sam Harris…I mean… I’m not trying to speak for him here, this is just my opinion about what he might say: but I think he might say…Yes. We SHOULD be worried about ALL those things you just mentioned! And not worried like we’re being DOOMSAYERS about this stuff, running around panicked, but of course…we SHOULD be CONCERNED about our relationship to science and technology. 


Keep in mind…this is the same Sam Harris that back in 2015 is doing an interview, and at the time it may have just seemed like a throwaway comment to many…but he’s talking in this interview… and he’s saying how, when we’re facing the upcoming election in the United States and we’re trying to decide who we want at the helm of the ship in terms of candidates…we HAVE to consider the possibility of something drastic happening, something like a SUPERBUG or a pandemic breaking out in the future that we’re NOT prepared for, that CAUSES a global catastrophe. THAT’S all the way back in 2015. This is CLEARLY a man doing his WORK… one that is self-aware of this MOMENT that we’re living through…where we have to RE-EXAMINE our relationship to technology… because there has NEVER really been THIS MUCH at stake when a new piece of technology comes out… as in 2023. 


And this brings me back to the question we asked at the beginning of the episode. IS technology…simply neutral? Is it just a TOOL to be used for good or evil…and it’s OUR job to incentivize the good actors and eliminate the bad? OR has technology BECOME something that we NO LONGER have the PRIVILEGE to have a REACTIONARY policy towards?


Because that’s TYPICALLY how this works, right? A new technology is developed. Usually in the business world, where the only gatekeeper between the public and a new piece of tech is some ambitious CEO driven by a profit motive. And then the widespread USE of the new product is seen as a sort of EXPERIMENT our society is running… in real time. And then, any NEGATIVE effects that start to show up…the government takes a look at them… and passes regulations to try to protect the public AFTER the fact. 


FANTASTIC STRATEGY. When the stakes of technology are low. Back when we’re making vacuum pumps in the 1600s. Back when someone’s buying a blender or a TV set for their house. But how about with AGI? With nuclear technology? Autonomous weapons? Bioengineering? 


Technology is not a NEUTRAL thing. Foucault says to always be skeptical of the things that have power over you that CLAIM to be neutral. People in science and technology studies have said for DECADES now that a new piece of technology ALWAYS carries with it certain “affordances”. Meaning that when something new comes out…there’s always NEW things that it allows people to do that they couldn’t do before…but it also takes AWAY things that people USED to be able to do. 


And to the skeptic in this episode saying that it’s SILLY to worry about the future possibility of artificial general intelligence…that anybody WORRIED about it is buying into a WHOLE LOT OF IFs…that haven’t yet become whens. Someone on the AGI risk side of things might say that given the stakes of the technology that we’re producing…we don’t have the luxury of waiting for those IFs to become whens.


We have to HAVE these conversations about AGI now…because THIS IS the new level of MINDFULNESS we need to apply to new technologies… BEFORE they’re released to the public or used in an unregulated way where people get hurt. 


And look, EVEN if ALL of this stuff with AGI NEVER happens…say we hit some sort of a wall where AGI is impossible and this era’s looked back on as a bunch of dumb tech-obsessed monkeys larping in the woods…these conversations are STILL producing RESULTS when it comes to reexamining our relationship to tech. For example, think of the alignment problem, and how that SAME line of conversation may have SAVED us from some of the perils we’ve experienced with social MEDIA over the years. 


If your ALGORITHMS on a social media feed are aligned to the goal of MAXIMIZING the engagement of the viewer…then if misinformation and political antagonism are what GET the population to engage with stuff the MOST…then THAT’S what people are gonna see! Think of the conversations around the CONTAINMENT problem of AGI, and the nuclear power disasters of the 20th century. 
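
Just to make that engagement point concrete, here’s a tiny illustrative Python sketch…the posts and the engagement scores are made up…of a feed ranked purely on predicted engagement:

# Illustrative only -- the posts and the engagement predictions are invented.
posts = [
    {"title": "Local library extends weekend hours",   "predicted_engagement": 0.12},
    {"title": "Calm explainer of a policy tradeoff",   "predicted_engagement": 0.21},
    {"title": "Outrage bait about the other tribe",    "predicted_engagement": 0.87},
    {"title": "Misinformation with a great thumbnail", "predicted_engagement": 0.91},
]

# The ranking objective is engagement, and ONLY engagement...
feed = sorted(posts, key=lambda post: post["predicted_engagement"], reverse=True)

# ...so whatever correlates best with engagement rises to the top of the feed.
for post in feed:
    print(post["predicted_engagement"], post["title"])

Nobody at the company has to WANT misinformation at the top of the feed. The sort just doesn’t know about anything except the one number it was told to maximize.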


In the SAME WAY we in the United States have an FDA…and the idea is that it acts as a barrier so that when we buy something in the store we’re not buying, you know, Uncle Cletus’s home brew where he’s just in his cellar stirring up a pot of aspirin…do we need SOME sort of BARRIER in place when it comes to the release of new technologies? And IS the GOVERNMENT…the best one to be providing that SERVICE? 


In a world where technological innovation is what we DO…how do you create these BARRIERS to releasing technology… without at the SAME TIME, STIFLING technological innovation? 


These are ALL interesting questions to talk about with your FRIENDS. But make no mistake, these are ALL important questions about the time you’re living in…and if people exist in the future, they will look back on this time and say those people did the work of reexamining their relationship to technology, and boy aren’t we glad that they did. I’m certainly glad you chose to listen to a podcast today talking about it. 



Next time we’re gonna be talking about NARROW AI and its potential impacts on society. SOME that are going on right now. I know I said last time that was gonna be a part of THIS episode. Well, this is more for you to listen to…I’m certainly grateful that you decided to listen to a podcast about this… it just seems like some useful information for people to have, being alive today. But I want to close today by acknowledging one point that we haven’t brought up at ALL this episode…and it MAY at FIRST glance seem like an obvious solution to ALL of this: why not just BAN…AGI?


I mean we put moratoriums on OTHER tech that’s dangerous. It’s illegal to clone people. It’s illegal to build a nuclear reactor in your garage. Why don’t we just do the same thing with AGI?


Well FUNNY you should ask. That’s ANOTHER thing that makes this a particularly COMPLICATED brand of existential risk. We CAN’T…stop…at this point. 


We are living in a world… with quadrillions of dollars up for grabs to ANYBODY who has the insight and the free time it takes to CRACK this thing. Couple that with the fact that this is NOT LIKE a nuclear reactor or cloning where you may need a TEAM of scientists to be able to monitor and regulate it. NO with AGI…GPUs… are everywhere. Knowledge about algorithms and how to improve them… you don’t gotta be in a SECRET SISTERHOOD to be able to get ACCESS to this stuff. So the ONLY thing some people feel like we can do…is just to win the race to the finish line. There’s talk about maybe putting a temporary PAUSE on this stuff, but everybody knows even THAT would only be temporary. 


What seems certain… is that this is nothing like Chernobyl or Three Mile Island. Where everyone’s gung ho about nuclear technology until there’s a couple major accidents and then governments around the world pump the brakes in the typical, REACTIONARY policy sort of way. 


And IF you’re somebody just frothing at the mouth to WIN this technological arms race…cheerleading the tech barons of the world all the way to the finish line…that’s fine…but there may come a day where you find yourself face to face with the kind of species we talked about earlier in the episode…staring down the barrel of something…. that looks at you far more like you’re an elementary school science project…than a human being. 


Thank you for listening. Talk to you next time.

