Episode #183 - Transcript

So no doubt since November of 2022… with the launch of ChatGPT and the job the media’s done to make everybody and their mom aware of large language models and developments in the field of AI…no doubt many of you have heard the news…and no doubt MOST of you… have tried OUT something like ChatGPT and had a conversation with it. 


And maybe after HAVING that conversation…you felt pretty impressed. Like wow…this thing seems like a REAL PERSON. It’s giving me some pretty well-formed, coherent RESPONSES to all the stuff I’m asking it.


I mean this is NOTHING like that paperclip that used to harass me on Windows XP Service Pack 2, this thing’s comin’ for my FAMILY. Maybe you’ve even seen those conversations people have had with an AI…where it’s PROFESSING its love for the person asking it questions…you should leave your family. They don’t love you like I do. That REALLY happened.


Maybe you’ve heard about people recently talking about the possibility…of us BEING on the VERGE…of AGI. Or artificial GENERAL intelligence…the LONG-prophesied stage of DEVELOPMENT in the field of AI… where things are apparently gonna go from what people call WEAK AI…these are things like calculators, watches, things that can SIMULATE some single ASPECT of human intelligence…to STRONG AI…a different LEVEL of AI that has the ability to understand, learn, adapt…AI that can implement knowledge on a level that matches or EXCEEDS that of a human being. And maybe you’ve HEARD about all the doomsday scenarios as to what’s gonna go down if something like that were to ever be invented.


But is that really something we gotta be worried about… right now? Are we really on the verge of a technological singularity… where artificial intelligence becomes an invasive species that WE’VE created? Are we CURRENTLY in a technological arms race to create something that’s thousands of times more intelligent than we can ever HOPE to be, with goals of a scope we can’t POSSIBLY begin to understand? Is ChatGPT… just the first iteration… of an amoeba that will eventually evolve into ALL THAT… if just given enough time?


To answer THAT question…the FIRST place we gotta start with is a far less AMBITIOUS question…and that’s NOT whether ChatGPT is on the verge of breaking out of its black box and taking over the world…but whether or not machines LIKE ChatGPT… are INTELLIGENT, in the SAME WAY a human being is intelligent. Are these machines REALLY DOING…the SAME stuff WE are doing when we solve problems? And on a more FUNDAMENTAL LEVEL than that: maybe as you’ve HAD a conversation with one of these things, being the kind of person that listens to a SHOW like this…maybe you’ve asked the very philosophical question…hey, FORGET about understanding or intelligence for a second…I wonder, as I’m TALKING to ChatGPT…if this machine is THINKING…in the same way that I’m thinking. Like what’s going on when there’s the three dots blinking and it’s loading…when it’s PROCESSING and DETERMINING what to say next? Is that the computer THINKING? Is THINKING something that’s rigidly definable by the way I’m thinking with MY brain? Or can THINKING be defined in MANY ways?


IF we want to answer these questions…and if we don’t want to spend our entire lives FALSELY equating OUR intelligence with artificial intelligence…then the FIRST step, philosophers realized LONG ago, was going to be to pay VERY CLOSE attention to EXACTLY what it is that computers are DOING.  


Bit of historical context here: this modern version of the conversation around whether machines can think, or are intelligent, essentially BEGAN with the work of a guy named Alan Turing. Turing was an absolute genius mathematician… and back in HIS time, in the first half of the 1900s, he was fascinated by stuff going on in the philosophy of mind and mechanical engineering. He was ALSO… a TOTAL visionary. I mean it’s clear in MANY ways he foresaw the age of digital computing DECADES before it even happened… and BEING in that place of awareness…SEEING the writing on the wall back then, HE came up with… what HE thought was an INEVITABLE question… that people were going to have to eventually take SERIOUSLY if things kept going in this direction…the question was: how would we know if machines were intelligent… if they were in fact intelligent?


It’s actually a pretty DIFFICULT question to answer the more you think about it. I mean at what point does something go from weak AI, something like your alarm clock…to something that can have an understanding of things? To something that is intelligent, to something that has a mind? How do you even begin to ANSWER those questions?


Well if we WANNA figure out an answer, we gotta start somewhere… and Alan Turing came up with an idea for a way to TEST for machine intelligence. You’ve probably heard of it…it’s called the Turing test…it’s famous at this point.


The idea is that you’re having two different text conversations: one talking to a human being, the other talking to an AI. If someone is FOOLED by an AI into thinking that they’re TALKING to a real person…then that AI has PASSED the Turing test… or in other words, we can now SEE that AI as something that possesses intelligence.


His thinking was look, if we want to know if something’s intelligent or not…if an AI can BEHAVE intelligently, to the extent that it FOOLS something intelligent…then at THAT point, what are we even splitting hairs over, let’s just call the thing intelligent! And it seems like a PRETTY safe place to begin morally speaking too…let’s TREAT something like it HAS intelligence, if it’s BEHAVING like an intelligent CREATURE.  


But it wasn’t long before philosophers started noticing PROBLEMS with the Turing test. Among them was a guy named John Searle. He comes along in the early 1980s, and IN his work… he’s in PART responding to the Turing test when he asks a very important question that would CHANGE this whole discussion: he asks, is it REALLY true…that if a machine BEHAVES intelligently…then it must be safe to assume that it IS intelligent?


See because as we’ve talked about…we have pretty good reason to be skeptical about that assumption…remember the end of last episode with the example of looking at a self-driving car…and how from the outside, it may APPEAR to be making free choices based on a kind of libertarian free will, it’s parallel parking on its own, it’s avoiding accidents, it’s checking traffic on different routes to where you’re going…but that despite how it may look, we KNOW it ISN’T making free choices because WE are the ones that programmed it…in the same sort of way…here’s John Searle in the 1980s saying: maybe when we come across a computer that PASSES the Turing test, maybe IT TOO… only APPEARS to be intelligent from the outside.


But HOW could that be the case? John Searle goes to work and lays out what at THIS point has become a FAMOUS distinction in these conversations about AI…it’s the distinction between SYNTAX… and SEMANTICS. Digital computer programs, to Searle, DO NOT OPERATE with an UNDERSTANDING of the physical world like you or I do. Computers operate on the level… of a formal SYNTAX. Which is to say…that they read computer code. Computer code that’s made up of symbols, ultimately of 1s and 0s.


But there’s no point where a computer understands the MEANING of those symbols in physical, causal reality.


In fact if you THINK about it, that’s PART of what makes computers and computer PROGRAMMING such a powerful tool in the FIRST place. They can run ANY NUMBER of different programs, on a TON of different types of computer hardware, and the FACT that these things are SO INTERCHANGEABLE… is ONLY made possible by the fact that these programs… are written in a way where they are, fundamentally, manipulating 1s and 0s.


In other words…to John Searle…computers are capable of manipulating symbols in AMAZING ways at this level of SYNTAX…but that doesn’t say ANYTHING in the SLIGHTEST about that computer’s ability to understand the semantic MEANING of ANYTHING that it’s producing. This is syntax vs. semantics.
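Just to make that point concrete…here’s a tiny sketch in Python. To be clear, this is an illustration I’m adding, not anything from Searle: the SAME three bytes sitting in memory can be read as a greeting or as a number, depending entirely on the rules the reader brings to them. The machine shuffles the symbols identically either way.

```python
# A minimal sketch of "syntax without semantics": the same bit pattern
# has no intrinsic meaning -- meaning is assigned by whoever reads it.
data = bytes([72, 105, 33])  # three bytes: just 1s and 0s in memory

as_text = data.decode("ascii")                # read as characters: 'Hi!'
as_number = int.from_bytes(data, "big")       # read as one integer: 4745505
as_bits = " ".join(f"{b:08b}" for b in data)  # the raw symbols being shuffled

print(as_text, as_number, as_bits)
# Nothing in the hardware "knows" whether these bytes are a greeting or a
# number -- the operations performed on them are exactly the same either way.
```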


Take a calculator as a basic example of this. Everybody knows a calculator is capable of a SUPERHUMAN level of calculation when it comes to a single aspect of human intelligence, solving arithmetic…and it does its JOB: it produces certain outputs when given certain INPUTS by following a preprogrammed set of rules.


But NOBODY out there actually thinks that a CALCULATOR understands what mathematics is. Nobody out there thinks the calculator knows the MEANING of these calculations and how they’re significant to human life. No, a calculator’s… just running numbers.


No matter HOW powerful of a calculator you have, I mean you can have a FULL RIDE scholarship to MIT and have NEVER spoken to another human being in the last five years…and John Searle would say EVEN your calculator is nowhere NEAR powerful enough to make the MOVE from syntax to semantics. Because it has NOTHING to do with PROCESSING power. The calculator EXISTS at the level of syntax, NOTHING more!


Now you may say BACK at this point wait wait wait…John Searle, I hear what you’re saying. I get it with a calculator…but hold on…when I’m talking to something like ChatGPT…that thing is OBVIOUSLY…WAY DIFFERENT than a calculator. I mean I told this thing what was in my refrigerator…this thing wrote a shopping list for me. This thing’s telling me about gravity…about social issues…CLEARLY this thing has an understanding of the OUTSIDE WORLD and what all of this stuff MEANS.


But John Searle might say back…are you entirely SURE about that? And to explain why he’d ASK that question: enter one of the most famous parables that’s been written in this modern period of the philosophy of mind…and it was INTRODUCED by John Searle…it’s called the Chinese Room argument. Here’s how it goes:


Imagine yourself sitting in a room alone and for the sake of Searle’s example…also imagine that you don’t speak a SINGLE word of Chinese. For me, that wasn’t too difficult to do. Now imagine in this room you are fed little slips of paper under the door by people on the other SIDE of the door. And that ON these slips of paper…are mysterious SYMBOLS that you don’t understand…unbeknownst to you these are actually questions being written to you in Chinese.


Your job in this room…is to produce a RESPONSE to these slips of paper. Basic input/output. Now despite not knowing what ANY of these symbols mean…it doesn’t matter to you that much. Because in the middle of the room is a table… and on the table you have a GIANT BOOK, WRITTEN IN ENGLISH, with a sophisticated set of RULES and PARAMETERS to follow for the manipulation of these symbols. Unbeknownst to you these are just RULES that allow you to respond in Chinese to the questions written in Chinese. 


You take a slip of paper…you identify the symbols inside of the book…you follow the rules the book gives you for which symbols are the proper RESPONSE to these symbols…and you send ANOTHER slip of paper back OUT of the room with your response on it. Basic input/output. 
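If it helps to SEE how mechanical that job is…here’s a toy version of the rule book in a few lines of Python. This is just an illustrative sketch I’m adding, not anything from Searle’s paper: the entries are made up, and a real rule book would need to be astronomically bigger. The point is only that every step is matching and copying…no understanding required.

```python
# A toy "Chinese Room": match the symbols on the slip against a rule book
# and copy out the listed response. The operator never needs to know what
# any of the characters mean. (These entries are invented for illustration.)
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "当然会。",    # "Do you speak Chinese?" -> "Of course."
}

def operator(slip: str) -> str:
    # Pure symbol manipulation: look up the input, copy out the output.
    return RULE_BOOK.get(slip, "请再说一遍。")  # fallback: "Please say that again."

print(operator("你好吗？"))  # a fluent reply -- produced with zero understanding
```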


Point is if you were given enough time to process the information… despite not speaking a WORD of the language, you could be sending SLIPS of paper out of this room with RESPONSES written on them… that were INDISTINGUISHABLE from the responses of a native Chinese speaker. They’ve actually RUN this experiment in the real world…and it works, the person on the other side thinks they’re SPEAKING to a person who knows Chinese. 


So why does any of this matter? Well to John Searle, Alan Turing was wrong. The Turing test…does NOT tell us that a machine’s intelligent. I mean sure… with a sophisticated enough set of rules and parameters manipulating at the level of syntax…a computer can CERTAINLY produce responses that are INDISTINGUISHABLE from those of an intelligent person. But in light of the Chinese Room…would that in ANY way… PROVE that a computer had intelligence? Would it prove that it had ANY understanding whatsoever about what it was saying? Is there ANY reason to believe, no matter HOW powerful of a processor it had…that it somehow magically made the move from syntax to semantics?


To Searle the answer is NO. And he had SEVERAL different goals in this area of his work…I mean, ASIDE from trying to get to the bottom of exactly what machines are capable of and what exactly we MEAN when we talk about INTELLIGENCE or UNDERSTANDING in machines…maybe the most IMPORTANT point that he’s trying to defend with all this is to try to protect the conversation of what EXACTLY CONSTITUTES…a MIND. And at what point does a computer…BECOME a MIND? Does it even WORK that way? 


On one hand, from a scientific perspective, as it stands: we only really SEE minds in highly complex information processing systems. But does that mean that a computer, which is an information processing system… can make that jump from MERELY processing information…to having a MIND that EMERGES because certain FUNCTIONS are occurring? This way of thinking is one small TYPE of what’s known as FUNCTIONALISM, and SEARLE’s not a fan of it, just for the record.


As HE describes it…the idea of this TYPE of functionalism is that there are people out there who believe…that it doesn’t matter WHAT the material CONDITIONS are AROUND this information processing…doesn’t matter WHAT it IS: carbon, silicon, or transistors on a board…these people think if the RIGHT collection of inputs and outputs are going on…a MIND spontaneously emerges. This is no doubt the TYPE of mindset that leads people to suspect that something like ChatGPT may be DEVELOPING a level of understanding and intelligence that CONSTITUTES a mind.


But Searle asks the question…what if I made a computer… out of a bunch of old beer cans and windmills and ropes and I tied them all together? Put on transducers so this thing can perceive photons, sensors so it can feel vibrations…if you ran all the necessary inputs and outputs THROUGH this thing…would a MIND spontaneously emerge? To him the answer is clearly no…CLEARLY there are at least SOME material conditions that need to be met in order for what WE experience as a mind to be possible. 


This is in PART a conversation about substrate dependence. To criticize John Searle for a second…why should we assume that the TYPE of information processing going on inside of the brain that PRODUCES what we experience as a MIND…can ONLY go on in biological matter? I mean, THAT seems a little anthropocentric…doesn’t it? 


But Searle would want to give a few clarifications. He’s NOT saying that MINDS only exist in humans…OR that they can ONLY exist in biological matter. What he IS saying is that it’s just FALSE to try to equate the information processing of computers with what a HUMAN brain is doing. THAT’S a leap that we have ZERO BASIS to be making. 


And this TENDENCY for people to try to DO SO in our modern world…or for them to say it’s just RIGHT AROUND THE CORNER…they’ll say oh, we just need MORE information processing, MORE sophisticated rules…and THEN the mind of a human will emerge from a computer…most of that’s coming from the metaphors people use when thinking about things that are mysterious…LIKE the human mind. 


Yet ANOTHER big nod to the work of Susan Sontag and others here…but John Searle says we do this IN EVERY GENERATION when it comes to the mind, we COMPARE it to some popular piece of technology that seems complicated to us at the time. 


He said when HE was a kid everybody compared the mind to a telephone switchboard…THAT was how the mind works. You read Freud’s work a little earlier and he was comparing the mind to hydraulic systems and electromagnetism. He says Leibniz compared the mind to a mill. He says the people that think the mind is just the right software being run on the right hardware…all THOSE people are doing is committing the same mistake in OUR time. 


But Searle thinks there’s FAR MORE to it than there just being MIND.EXE running on the right hardware. Without going TOO much into it so we can stay on the topic of AI…HE suspects… that minds are a higher-level feature of the brain and thus REQUIRE our biological makeup to be able to exist in the way they do. That…yeah, we only SEE minds in complex information processing systems so far…but we ALSO only see minds so far in BIOLOGICAL information processing systems, so despite some people being seemingly OBSESSED with making the MIND into merely a type of software…it may be that there’s something about our biology that we just need. As he says, you can create a computer MODEL of a mind but you can’t yet create an ACTUAL mind. In the same way, he says, you can create an elaborate computer MODEL of digestion and how it works…but you give that computer model a piece of pizza…it’s NEVER gonna be able to digest it.


But ANYWAY…the conversation about syntax and semantics CHANGED the way that people were TALKING about Artificial Intelligence. And there were critiques of the Chinese Room Argument… we’ll talk about them on a future episode. But for the sake of understanding ChatGPT it’s important to understand in many ways… this CONVERSATION went on over the years to become EVEN MORE complicated than it was in the 1980’s…mostly BECAUSE of ADVANCEMENTS in the sophistication of the software people are trying to COMPARE to the human mind. 


See, someone in our world today could EASILY say okay Searle…I see what you’re saying and all…and FAIR POINT…back in 1985. You know, when Steve Jobs is at the pawn shop selling his little circular, ironic glasses so he can pay RENT that month, I’m sure it was a great point THEN. But this is 2023…things like ChatGPT, LLMs…these are not computer programs like back then. These days we got things like machine learning. ChatGPT is a model with BILLIONS of parameters, trained on MASSIVE amounts of text. This kind of person may say the GENIUS of how these things WORK…is that they’re given MASSIVE amounts of information about how the world is, they use their INCREDIBLE level of computational ability to look for PATTERNS in the data…and then they use PROBABILITY… to PREDICT what the next WORD will be in a given sequence, considering how all the OTHER sequences of words looked in their training data. And these things get BETTER as they go. They learn from their mistakes. CLEARLY this is something VERY DIFFERENT than what we had back in 1985…and CLEARLY… this is starting to knock on the door of what we MEAN when we say intelligence, RIGHT? Isn’t MOST of what we DO as people just pattern recognition from massive amounts of experiential data?
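For what it’s worth, you can see the CORE of that “predict the next word” idea in just a few lines of code. This is a deliberately primitive sketch I’m adding, NOT how ChatGPT is actually built…real LLMs use neural networks with billions of parameters, while this just counts word pairs. But the underlying move…turning patterns in text into probabilities over the next word…is the same in spirit.

```python
import random
from collections import Counter, defaultdict

# A bare-bones "next word predictor": count which word follows which in the
# training text, then sample the next word by frequency. (The corpus here is
# a tiny stand-in; real models train on vastly more data.)
corpus = "the apple falls to the ground because the earth pulls the apple".split()

next_words = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_words[current][following] += 1  # tally the observed patterns

def predict(word: str) -> str:
    counts = next_words[word]
    # choose the next word in proportion to how often it followed this one
    return random.choices(list(counts), weights=list(counts.values()))[0]

print(predict("the"))  # 'apple', 'ground', or 'earth' -- weighted by frequency
```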


I mean it can SEEM like the sky’s the limit here. Just GIVE this machine the sum total of all the wisdom in the history of the world…give it EVERY useful experience a person has ever had…and then ask it to SOLVE EVERY scientific problem and give us a total understanding of the universe.


But just like SEARLE asked about the computers back in 1985…there are philosophers in 2023 who would say BACK to that: are you ENTIRELY SURE…that THAT’S what this thing’s gonna be able to do? 


Among these philosophers is a guy named Noam Chomsky. We covered his book Manufacturing Consent on this podcast before…and for whatever it’s worth, there are few philosophers alive today, if any, who will be remembered as being as prolific, AS influential as he’s been. I actually personally watch a LOT of interviews with Noam Chomsky, I find his take on American politics fascinating, and in every interview I see with him lately…because he’s 95 years old at this point…every interviewer just has to ask a question like he’s already dead. Like, looking back on your life, Mr. Chomsky, what’s the one thing you wish you could take back of ALL the mistakes you made? What is the happiest moment you ever felt while you were still here with us, my boy?


They ask all this stuff…but I see him as a guy that’s STILL doing relevant philosophical work to this day…his thoughts on ChatGPT and LLMs being only one PART of that. He co-wrote an article in the New York Times in March of this year titled: The False Promise of ChatGPT.


What WAS the false promise of ChatGPT? Well, what he and his coauthors are RESPONDING to is any variation of that mentality that we just TALKED about…where, from the article:


“These programs have been hailed as the first glimmers on the horizon of artificial general intelligence — that long-prophesied moment when mechanical minds surpass human brains not only quantitatively in terms of processing speed and memory size but also qualitatively in terms of intellectual insight, artistic creativity and every other distinctively human faculty.”


Now WHY is this a false promise? Because to Noam Chomsky…the idea of AI as it currently exists SURPASSING or even MATCHING human intelligence…is science fiction. It’s called a LANGUAGE model that’s driven by artificial intelligence…but to him it has nothing to DO with human intelligence OR language in ANY capacity. To make his point…he often starts by making a distinction…he’d ask: would you say ChatGPT’s an accomplishment in the field of engineering… or is it an accomplishment in the field of science? Because those are two VERY different THINGS at their core.


First of all: credit where credit’s due…large language models, he says, are no doubt useful for SOME things. Transcription, translation…he gives a half dozen examples of USEFUL things they may eventually prove to do. After all… great feats of ENGINEERING… are very USEFUL for some things people want to do…think of a bridge that allows people to cross a river, for example.


But great feats when it comes to conducting science…THOSE are in a league of their own. Science is not just trying to build something USEFUL for people…SCIENCE is trying to understand something MORE about the elements of the world that we live in.


But that’s the thing…how exactly do we accomplish that? Meaning what is the process? How do we actually make breakthroughs that allow us to understand the universe better? Is it by analyzing mountains of data about how the world is and then probabilistically trying to predict based on what we already know what the next breakthrough is going to BE in the sciences? 


Because if THAT were the case…then somebody should just let ChatGPT run wild, give it a six-pack of Red Bull and have it solve all the mysteries of the universe. The PROBLEM is, Noam Chomsky says…that’s NOT what human beings are DOING when they come up with scientific theories that lead to progress.


From the article: “The human mind is not, like ChatGPT and its ilk, a lumbering statistical engine for pattern matching, gorging on hundreds of terabytes of data and extrapolating the most likely conversational response or most probable answer to a scientific question. On the contrary, the human mind is a surprisingly efficient and even elegant system that operates with small amounts of information; it seeks not to infer brute correlations among data points but to create explanations.”


What we want, in other words, WHEN we’re conducting science are NOT theories that are PROBABLE based on what we already know…sometimes the theories that get us a BETTER understanding of the universe are highly IMPROBABLE, even COUNTERINTUITIVE. The article uses the example of someone holding an apple and then it falling to the ground. A scientific question you could ASK is: why is the apple…falling to the ground?


Well during the time of Aristotle…the reason given for why the apple falls to the earth is because the EARTH is the apple’s natural place. Solid answer. During the time of Newton it was because of an invisible force of gravity. During the time of Einstein it is because mass affects the curvature of space-time. Now… if ChatGPT or any of these language models existed during the time of Aristotle…and they were TRAINED on the data that was available to the people of THAT time…in a system that is NOT designed to come up with NEW explanations…but instead one that just produces what the most PROBABLE word is to come next based on conversations it’s already seen between scientists in its TRAINING data…these models would NEVER predict something as IMPROBABLE… as the apple is falling because of the unseen curvature of a concept called space-time that nobody’s gonna be talking about for thousands of years…these models would NEVER assume THAT’S what’s RESPONSIBLE for an apple falling towards earth. For THAT… we need people coming up with new explanations…for THAT, to Noam Chomsky…we need ACTUAL intelligence.


This is a classic example of what he calls undergeneration. And if you were to ask him…this is one of the PROBLEMS with ChatGPT…and what distinguishes Artificial Intelligence so far from actual HUMAN intelligence…it’s THIS…that they are ALWAYS prone to either undergeneration or overgeneration. 


These models either undergenerate… meaning they don’t generate all the responses they should or COULD… because their answers are based on what the most common answers were in their training data…or they overgenerate, and give responses that technically fit into a sentence grammatically…but don’t ACTUALLY make any sense at all because the algorithm has no REAL conception of physical reality and what’s logically coherent. 


As Noam Chomsky says you ask this kind of algorithm to map out the periodic table… and it will give you all the elements that exist because it’s SEEN people talk about elements before…but BECAUSE it has no REAL conception of the underlying laws of chemistry or physics…it’ll ALSO give you all the elements that DON’T exist…and even a bunch of elements that can’t POSSIBLY exist. It’ll tell you elements like unobtanium from science FICTION books it’s been trained on. 
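You can make that overgeneration point painfully literal with a toy sketch…and again, this is an illustration I’m adding, not Chomsky’s: a pure pattern-matcher over text has no physics filter, so anything that FITS the pattern counts as an “element.”

```python
# Toy overgeneration: at the level of word patterns, real chemistry and
# science fiction look identical. (This "training text" is made up here.)
training_text = [
    "hydrogen is an element",
    "helium is an element",
    "unobtanium is an element",  # from fiction -- but the pattern matches
]

elements = [line.split()[0] for line in training_text if line.endswith("is an element")]
print(elements)  # ['hydrogen', 'helium', 'unobtanium'] -- no physics filter applied
```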


It will DO this…because ALL that a language model is doing is trying to generate text that LOOKS like text that it’s seen before. It really HAS… NO idea what the MEANING is of ANYTHING that it’s saying. This IS syntax vs. semantics by the way…this IS the Chinese Room all OVER again…and this is why someone like Chomsky is going to say that a large language model, in the current form that we have them… is incapable of distinguishing the possible from the impossible.


Here’s a quote about this VERY TOPIC…that’s NOT from the article: 


“This limitation reflects the fact that while these language models can generate creative and complex text, they don't actually "understand" the content in the way humans do. They don't formulate hypotheses, make novel scientific discoveries, or generate new theories. They simply predict the next most likely word or phrase based on the patterns they've learned from their training data.” And if you’re wondering if that’s just yet another biased quote from a hater like Noam Chomsky…that quote I just read…was actually from ChatGPT…I asked it to DEFEND itself…and it told me Chomsky’s right. 


Anyway this really IS the problem with large language models in their current form if we EVER want to say that they’re something STARTING to resemble AGI.


Intelligence…when it comes to what human beings are doing when they USE intelligence to come up with the TYPE of scientific theory that sheds some light on our understanding of the universe…that intelligence REQUIRES the ability to not only know what CAN be the case…but also what cannot POSSIBLY be the case. MORAL reasoning, which an AGI would be doing…that requires the ability to distinguish between what OUGHT to be the case…and what ought to NEVER be the case. And at least as it stands right now…that is NOT what these large language models are even close to doing. If we wanted a more accurate description of the technology, Chomsky says, what they’re actually doing…is a kind of glorified autocomplete, like on your phone. Sophisticated, high-tech plagiarism… he called it once.


Now this is FAR from the end of the conversation…this is just a single TAKE. ONE direction to go from here, if you’re thinking like a philosopher… is to call into question the definitions that we’re using for the terms here. For example…why are we throwing around the word “intelligence” without being more specific? Like isn’t it important to examine what we MEAN when we say intelligence? Couldn’t MACHINES be capable of thinking and intelligence that just isn’t anything LIKE HUMAN intelligence? Another question is: does it even MATTER…if these things conform to some narrow definition of intelligence, as we currently think about it?


Because one thing that CAN’T be OVERSTATED… and this is ME talking, not Noam Chomsky…EVEN IF…large language models are nowhere even CLOSE to becoming artificial general intelligence…SIMPLY at the level of technology that AI is currently AT…AI is STILL something that’s VERY dangerous, if for NO other reason than the fact that people BELIEVE artificial intelligence IS close to being AGI.


In the world today, with all the headlines you see on your phone about how the AI revolution is here among us…you have this sort of…three-headed monster going on with the people that are READING those headlines. One of the heads is that tech companies need to secure funding, and it’s NOT ENOUGH these days in Silicon Valley to just come out with a product, no, you gotta create a product that is literally changing reality as we know it, so TECH companies are generating false hype to get funding, and MEDIA companies looking for clicks will GLADLY play along…the SECOND head of the monster is a type of FUTURISM… that has religiously captivated a certain percentage of the population, where they think all of our problems are gonna be solved by some technojesus savior coming down from the clouds…and THEN there’s the last head of the monster, which is just an overall Hollywood sentiment that some people seem to have where they just WANT this stuff to be true. They WANT to be living in the ERA where the AI revolution is happening. And that bias SHADES the way they see everything.


So when they see a headline talking about how the singularity is near…when they have a conversation with ChatGPT and they’re under the impression this thing is an all-knowing ORACLE that’s SCOURING the internet for information and synthesizing it into genius insights for you…that misunderstanding of how the technology works…is dangerous.


This could lead people to believe that this ISN’T actually based on training data SELECTED by a handful of people at a company with agendas of their own. People could ask this thing for LIFE advice thinking they’re talking to a super intelligent being. Think of how tempting it would be to have this thing maybe make POLITICAL decisions for us…I mean, after all, WITHIN this misunderstanding…this thing isn’t PRONE to the same biases that human BEINGS are. This thing isn’t LIMITED to a single brain or perspective. It can simulate the future… a TRILLION times from EVERY single perspective and come up with the best of all possible worlds.


Again, IMAGINE what’s essentially a religion of people thinking that this Artificial Intelligence… is not just the best CANDIDATE we have to lead our society…it is actually better than the sum total of the intelligence of EVERY OTHER human being COMBINED. Imagine, in a religious way, trusting the decisions of our dear Artificial Intelligence leader. EVEN if the decisions it’s making don’t make sense to us…that’s just because we’re feeble humans…we just gotta have faith.


But to bring this back to Noam Chomsky though…one of the biggest dangers of people misunderstanding what it IS that ChatGPT is doing…is that for every second that people spend talking to this thing, worried about the fact that the singularity is just around the corner…that is one second that we are NOT spending worrying about two ABSOLUTELY real existential threats that are facing humanity. The threat of nuclear war… and the threat of uncontrollable climate change. As he puts it at one point, the fossil fuel companies and the banks are destroying the possibility for life on earth…and he’d ask: how long are we going to spend playing around with fancy toys, word-generation technology, before we get serious about addressing the things that genuinely, imminently have the ability to END human life as we know it? No doubt someone like Chomsky would think that getting these ideas out to as many people as possible would be a marginal step in the right direction. So that’s what your boy Stephen’s trying to do here, thanks for hanging out and listening to the ideas today.


Now that said…again EVEN if we’re nowhere NEAR AGI…the RISKS that artificial intelligence poses to humanity, the EXPONENTIAL rate of improvement even just since November of last year, the challenges of regulating something changing so fast, there’s a LOT to talk about. Next episode we’re going to talk about some of this. Like we OFTEN do on this podcast…we’re going to talk next time…about some of the people who ARE impressed by ChatGPT, by Robotics, by Synthesized images and videos…we’re going to talk about what it would mean to CREATE an invasive species…and then what it might be like if we FOUND ourselves living among it. 


Thank you for listening. Talk to you next time. 

