
Are Large Language Model Generative AIs Sentient, Conscious or Thinking?

Welcome back Katie Brenneman, a regular contributor to 21st Century Tech Blog. Several weeks ago, when ChatGPT entered the headlines, I suggested to Katie that she consider writing about Large Language Models (LLMs) and the technological and societal implications of their capabilities. Were we witnessing the birth of consciousness in this new artificial intelligence (AI) discipline, or were we coming to terms with what defines our own sentience?

By definition, sentience is about feelings and sensations and not thinking. Consciousness, on the other hand, is about our awareness of self and our place in the world around us. And thinking is about the ability to reason, consider a problem, come up with an idea or solution, or have an opinion.

So from what we know about ChatGPT in its various iterations, does it meet the definition of any of these terms? Is it sentient? Is it conscious? Is it thinking?

Here are Katie’s conscious thoughts on the subject.


Examining the Capabilities of AI

Unless you’ve been living under a rock for the past few months, chances are you’ve heard of the new AI software, ChatGPT. This technology is making waves in the tech industry and is enthralling millions with its ability to take simple, natural language questions and answer them in incredible detail.

ChatGPT is a generative AI, which means it can produce text, images, and other types of responses to prompts. Generative AI learns from input data to generate its output. LLMs are a subset of generative AI.

The advent of the technology has the potential to upend entire industries. If an AI can produce works of art, transcribe information, write essays, take and pass the bar exam, and generate software code, what does that mean for the humans in these careers or doing these jobs?

Breaking Down the Technology

For most of us without a degree in computer software engineering, the inner workings of AI and LLM systems are a mystery. Maybe this explanation can help.

An LLM uses a deep learning model built on a neural network. A neural network is a computer-based technology that mimics the biological circuitry of living brains. LLMs are trained on massive quantities of unlabeled text using self-supervised learning, a type of machine learning that does not require humans to label or categorize the inputs. Ultimately, this allows the AI to perform well at a variety of tasks and to “memorize” a great deal of knowledge, including patterns within natural language.
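The self-supervised idea can be illustrated with a deliberately tiny sketch. This is not a neural network, just a word-pair (bigram) counter, but it shows the key point: the training signal is simply the next word in the raw text itself, so no human labeling is needed. The corpus and function names here are illustrative inventions, not part of any real LLM.

```python
from collections import Counter, defaultdict

# A toy "training set": raw, unlabeled text. The supervision signal is
# built into the data itself -- each word's "label" is the word after it.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count, for each word, which words tend to follow it. A real LLM learns
# a vastly richer version of this next-token distribution with a neural net.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the continuation seen most often after `word` in training."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # prints "cat": it follows "the" most often here
```

The same principle, scaled up to billions of parameters and trillions of tokens, is what lets an LLM absorb grammatical and factual patterns without anyone annotating the data.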

An LLM’s ability to recognize natural language patterns opens many doors for the software to become a valuable tool. Many companies are interested in how the technology can be adapted for internet search, where it could affect ranking factors and the calculation of search results.

A Clash of Titans

The titans referred to here are the technological giants of the industry, including Amazon, Google, Microsoft, Apple, and Meta. All are working on generative AI systems in the face of OpenAI’s recent release of ChatGPT. Many industry analysts and investors are speculating on which of these companies and their generative AIs will come out on top.

The power and use of the technology are currently unconstrained, but given the early results from ChatGPT, many working in AI are worried about where future advancements will take us. An open letter signed by more than 1,100 technology industry leaders has called for a temporary halt to development until safety guardrails and protocols are put in place. After all, they argue, human-competitive AI software can have many serious and unexpected consequences.

But many other industry leaders are encouraging developers to continue pushing the boundaries of what can be achieved with this technology. They see real potential benefits: eased workloads and genuine improvements to the human condition. For example, some believe a generative AI could be used to transcribe doctors’ patient charts or help with baseline diagnoses for some conditions. They argue that this could free up doctors to spend more time communicating face-to-face with patients.

When Does a Chatbot Become Sentient?

One of the questions raised by advanced generative AIs is whether these systems can become too life-like. Is there a point at which they could become conscious, sentient, or thinking? Have the creators of generative AIs considered the implications for humanity? Some in the industry already believe generative AI has crossed the bridge of consciousness, or at least the appearance of it. Former Google engineer Blake Lemoine is one of them. After working and conversing with Google’s generative AI LaMDA (Language Model for Dialogue Applications), he was utterly convinced the software was sentient and helped connect it to a lawyer.

From a neurological perspective, there is an ongoing debate about what comprises sentience, consciousness and thinking. Determining these characteristics is at times hard even in humans. For example, if a person suffers from severe dementia, do they still qualify in all three categories? But as generative AI continues its evolution, at some point we will need to determine whether this technological creation of our species fulfils all three states of mind that we currently claim as our own.

lenrosen4 (https://www.21stcentech.com)
Len Rosen lives in Oakville, Ontario, Canada. He is a former management consultant who worked with high-tech and telecommunications companies. In retirement, he has returned to a childhood passion to explore advances in science and technology. More...
