By Dr. Steven Wyre  |  01/05/2024


Artificial Intelligence and Philosophy

 

We’ve all heard it – artificial intelligence (AI) will take over the world, and robots will kill us to save the planet. At least, that is what science fiction writers have been saying since Czech playwright Karel Čapek gave us “Rossum’s Universal Robots” and American writer Isaac Asimov formulated his “Three Laws of Robotics” to ensure protection from our AI creations.

Since the unveiling of AI tools such as ChatGPT, fretting over AI has been in full force. At a recent Yale CEO Summit, some 42% of the 119 CEOs surveyed expressed the fear that AI could “potentially destroy humanity in 10 years” or less.

Much as many of the predictions made on the first Earth Day failed to come true, I sense the future will not be what the doomsday prophets predict. Part of that may be because many computer scientists, just like the Earth Day scientists, will work on solutions.

It will be interesting to see just how things pan out on the 50th anniversary of ChatGPT’s launch. A positive outlook may be facilitated, in part, by rethinking what AI actually is in computer science.

 

Maybe AI Is a Misnomer

It could be that AI is a misnomer because computers lack true conscious “intelligence” (aka “understanding”). To answer the questions of what AI is and what we should do about it, it’s helpful to consider the famous thought experiment, the Chinese Room Argument. This philosophical experiment was intended to demonstrate the difference between semantics and syntax.

 

The Chinese Room Argument

U.S. philosopher John Searle first developed his Chinese Room Argument in 1980 as an attempt to address whether computers would ever wake up and become conscious of their own existence. Part of Searle’s argument was to contend that, even if a computer could pass the Turing Test created by computer scientist Alan Turing, that would not be enough to bestow conscious intelligence on a machine. Whether the Turing Test was sufficient for making such a distinction is still debated among philosophers and computer scientists.

First published in the paper “Minds, Brains, and Programs,” the Chinese Room Argument is rather simple. Searle imagines being in a room with a program, written in English, that directs him to respond in a certain way to Chinese symbols slipped to him on pieces of paper. His responses are then passed back out to people who, eventually, assume the person in the room knows Chinese.

At its core, this experiment differentiates between knowing syntax and understanding semantics. Knowing syntax does not equate to understanding semantics, and few would say that Searle knows Chinese. Likewise, the fact that a computer can manipulate binary code to communicate successfully with humans does not mean that it understands the meaning of the language being used for communication.

In a wide-ranging article, researcher David Cole claims the Chinese Room Argument was only aimed at Strong AI, the claim “that suitably programmed computers (or the programs themselves) can understand natural language and actually have other mental capabilities similar to the humans whose behavior they mimic.”

In other words, if a computer program can mimic thought and intelligence, then it really is capable of thought and intelligence. This way of thinking seems to be an undercurrent in today’s philosophy: because AI tools like Google’s Bard and OpenAI’s ChatGPT can mimic human intelligence, they must be intelligent in some way.

However, the whole Chinese Room Argument must be considered in its historical context. Searle created his thought experiment in 1980, a time when computers were growing ever more capable and the idea of the mind or brain as a computer was well entrenched. It was also a time when many philosophers were debating the merits of a Computational Theory of Mind.

Searle revised his argument when, in its January 1990 issue, Scientific American took on the philosophical questions of whether the brain is like a computer and whether a machine could think. Searle’s article was juxtaposed with one by the Canadian philosophers of neuroscience Paul and Patricia Churchland, who took a somewhat opposite stand on artificial intelligence and whether a machine could think like the human mind.

In the Scientific American article, the Churchlands proffered a counterexample to Searle’s thought experiment and laid out the basics of neural processing and how parallel systems in the brain function. In doing so, however, they arguably did a better job of convincing readers how seemingly impossible it is to create a computer that can truly replicate what the human brain does.

All things considered, asking philosophical questions such as what will happen when the human mind is truly replicated by computer programs is, for now, purely academic. Until a computer demonstrates self-awareness, or until we fully understand the mechanics of consciousness, it seems unlikely that an AI tool will possess human cognition any time soon.

 

Can We Simulate a Human Brain with Computer Programs?

Researchers at the Swiss Federal Institute of Technology Lausanne started the Blue Brain Project, which has been working to digitally replicate a mouse brain since 2005. Writing about data organization, they noted, “Neuroscience is a very big, Big Data challenge. The human brain for example uses over 20,000 genes, more than 100,000 different types of proteins, more than a trillion molecules in a single cell, nearly 100 billion neurons, up to 1,000 trillion synapses and over 800 different brain regions.”

One can reasonably ask what a mouse thinks and if there is anything one can extrapolate from this knowledge to understand the conscious experience that humans have. If those Swiss researchers ever succeed, maybe we’ll have more answers.

I think it is safe to say that a computer program capable of producing a self-aware replica of a person’s brain will not arrive until far in the future, if ever.

 

Large Language Models

Where do Large Language Models (LLMs) factor into this quest to create AI software capable of human-like thought? The process is complex.

According to journalist Timothy B. Lee and cognitive scientist Sean Trott, “no one on Earth fully understands the inner workings of LLMs.” To simplify, the running program represents each word as a “word vector.” By passing those vectors through many layers of processing, the computer then works out the most likely next word in a sentence, similar to the auto-complete function a search engine offers when someone types a topic into the search box.
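To make the idea of a “word vector” a little more concrete, here is a minimal illustrative sketch in Python. The three-dimensional numbers are invented purely for this example; real models such as GPT-3 learn vectors with thousands of dimensions from text. The point is simply that words used in similar contexts end up with similar vectors.

```python
import numpy as np

# Hypothetical word vectors, invented for illustration only.
# In a real LLM, these are learned from text and have thousands of dimensions.
word_vectors = {
    "king":  np.array([0.8, 0.6, 0.1]),
    "queen": np.array([0.8, 0.7, 0.2]),
    "apple": np.array([0.1, 0.2, 0.9]),
}

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity of two vectors: values near 1.0 mean they point the same way."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Words that occur in similar contexts end up with similar vectors.
print(cosine_similarity(word_vectors["king"], word_vectors["queen"]))  # high (~0.99)
print(cosine_similarity(word_vectors["king"], word_vectors["apple"]))  # low (~0.31)
```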

According to Lee and Trott, an AI software tool such as GPT-3 “uses word vectors with 12,288 dimensions” and works through a model with 96 layers. The point here is that you can argue whether this is just the processing of surface statistics or whether the computer has some ability to build a world model (a model based on the ability to “learn large amounts of sensorimotor data through interaction in the environment,” according to ScienceDirect).

Essentially, when a model reads “B-o-b,” the arrangement of the words preceding those letters helps it determine whether the letters that follow should be “tail,” “and weave,” “Barker,” “cat,” or something else entirely. That decision is based on context and statistics.
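As a toy illustration of that decision process (not how GPT-3 is actually implemented, and with every number invented for the example), the Python sketch below scores a handful of candidate continuations of “bob” against a vector summarizing the preceding context, then turns the scores into probabilities with a softmax:

```python
import numpy as np

# Candidate continuations after the context word "bob".
candidates = ["tail", "and weave", "Barker", "cat"]

# Hypothetical word vectors: each row is a candidate's embedding.
# Real LLMs use thousands of dimensions; this sketch uses four.
candidate_vectors = np.array([
    [0.9, 0.1, 0.0, 0.2],   # "tail"
    [0.2, 0.8, 0.1, 0.0],   # "and weave"
    [0.1, 0.2, 0.9, 0.1],   # "Barker"
    [0.7, 0.0, 0.3, 0.6],   # "cat"
])

# A made-up vector summarizing the words that came before "bob"
# (e.g., a sentence about a boxer would push it toward "and weave").
context_vector = np.array([0.2, 0.9, 0.1, 0.0])

# Score each candidate by similarity to the context, then convert the
# scores into a probability distribution with a softmax.
scores = candidate_vectors @ context_vector
probs = np.exp(scores) / np.exp(scores).sum()

for word, p in zip(candidates, probs):
    print(f"{word:>10s}: {p:.2f}")

# The highest-probability word ("and weave" here) is the model's guess
# for what follows "bob" -- a decision based on context and statistics.
print("prediction:", candidates[int(np.argmax(probs))])
```

A real transformer arrives at its scores through dozens of attention layers rather than a single dot product, but the last step is the same in spirit: pick the statistically most likely next word.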

Even if we allow a world model, is that evidence of human cognition? In April 2023, Nature issued a short report, “What’s the next word in large language models?” While reporting on the Open Letter from the Future of Life Institute promoting a six-month pause on giant AI experiments, the report also shared the position of some Microsoft researchers that they could see “sparks of artificial general intelligence.”

That position was quickly criticized. Nature also reported on a conference that took place in March 2023 to debate whether LLMs need “sensory grounding” for human-like “understanding.” The panel of six experts was evenly split on the topic.

 

The Bottom Line on AI Consciousness and Language

Even if we allow the possibility that some complex computer running an LLM can develop a world model, it may come down to the conclusion Searle offered in the 1990 Scientific American article: a computer simulation of the brain is not a duplication of the brain, and simulating thinking does not produce a thinking machine.

Just as detractors of the original Chinese Room Argument were willing to accept that conscious understanding (intelligence) can exist in the absence of semantics, some today are willing to accept that conscious intelligence can exist in a machine.

Maybe there is a problem here on two fronts; both involve language. Despite all the advances in neuroscience and psychology, we still do not have a consensus definition of consciousness. When it comes to intelligence, the diversity of definitions may be even greater.

Because of this diversity, some people go as far as to argue that even plants possess consciousness. Research on something called the Default Mode Network only makes it easier to say that, whatever consciousness or intelligence might be, the picture keeps getting more complicated.

Even there, we can debate whether consciousness can be equated with intelligence and how that relationship might work. Maybe what we need is a new vocabulary and the willingness to accept some ambiguity.

Can we accept that human intelligence is not the same as “horse intelligence,” “cat intelligence,” or even “plant intelligence”? None of these forms of general intelligence may be seen as artificial; they just are what they are. So why would a computer be any different?

In the same way that we see well-educated adults as more intelligent than children at the kind of reasoning human brains do, perhaps we should view more advanced computers as more intelligent at the kind of reasoning computers do. Simulation cannot be seen as identical to duplication, but with the right vocabulary, what is happening with computers today may be an example of neither.

 

American Public University’s Philosophy Degree

For students interested in the connection between artificial intelligence and philosophy, the University offers a B.A. in philosophy. The topics of AI and human consciousness are explored in Analytic Philosophy (PHIL417), where both Searle’s argument and a counterargument by Paul and Patricia Churchland are evaluated.


About the Author

Dr. Steve Wyre received his B.A. and M.A. in philosophy from the University of Oklahoma and his Ed.D. from the University of Phoenix. He has been teaching various ground-based philosophy courses since 2000 and online since 2003. Steve has also served as a subject matter expert (SME) for courses in ancient philosophy, ethics, logic and several other areas.
