
Could The Reason We Are Finding No Extraterrestrials Out There Be AI?

If we ever come across technologically advanced space aliens, it will most likely be because we have captured their signals with our radio telescopes. So far, however, we have heard nothing. Michael Garrett, Director of the Jodrell Bank Centre for Astrophysics at the University of Manchester in the United Kingdom, believes that artificial intelligence (AI) may be the reason we have yet to make first contact.

Garrett has written a paper recently published in the journal Acta Astronautica. In it, he hypothesizes that once AI is uncorked from the proverbial Aladdin’s Lamp, it overtakes biological intelligence within 200 Earth years and limits the lifespan of its biological parent. He calls Artificial Superintelligence (ASI) the Great Filter: upon emergence, it disrupts intelligent biology before it can become multiplanetary and likely kills it.

Garrett poses the question of why astronomers, over the last 60 years, have been unable to detect potential “technosignatures,” signs of extraterrestrial intelligence and advanced technical civilizations. He calls this the Great Silence. Despite Earth-based telescopes reaching ever deeper into the Universe and the discovery of more than 5,000 exoplanets to date, we have heard and seen nothing alive and intelligent. Garrett describes this inability to find technosignatures as a “universal barrier and insurmountable challenge” dictated by technological advancement.

Depending on who is doing the math, the Universe is between 13.8 and 26.7 billion years old, and the Milky Way Galaxy is around 13.6 billion years old. That is plenty of time, and plenty of planets, for life to have started, evolved, and become technological. Surely there should be intelligent civilizations with a million- or even a billion-year head start over our own evolutionary progress. Surely some of them have gone multiplanetary.

The Great Silence, however, persists. In 1950, the physicist Enrico Fermi described what is now called the Fermi Paradox. He asked why, in a Universe so vast and likely teeming with life, we have yet to find evidence of it, let alone of extraterrestrial civilizations.

One explanation is that technological civilizations self-destruct. For Fermi, who was involved in the development of the atomic bomb, the idea that weapons of mass destruction, once invented, could destroy civilizations seemed obvious.

Another explanation is that abiogenesis, the chemical process by which life spontaneously arises from non-life, is so rare that Earth may be the only place in our Galaxy, let alone the Universe, where it happened with a long enough lead time for intelligence to evolve.

A third explanation, and the latest, is the one Garrett describes: that the emergence of AI, a tool meant to enhance human cognitive abilities, will ultimately cause our doom, as it has for other technologically advanced species. He notes Stephen Hawking’s 2014 prediction that AI could end us by evolving independently and quickly going rogue.

Yuval Noah Harari, the author of Sapiens: A Brief History of Humankind, believes that humanity is unprepared for an ASI. We are already feeling the challenge from AI, which today is being used to revolutionize communications, industry, medicine, robotics, and more.

Our governments are starting to legislate on AI, some faster than others. Without worldwide consensus on the guardrails, however, the genie of AI will remain a threat. For example, some governments are deploying AI in military applications that may yield Lethal Autonomous Weapon Systems (LAWS).

How ironic is that acronym? A weaponized ASI we call LAWS could be the death knell of humanity and much of the rest of life on Earth. And according to Garrett, it could happen within a narrow window spanning less than two Earth centuries.

Today’s AI depends on us to feed it the energy to operate. It needs us to build the silicon wafers and integrated circuits that house it. We can pull the plug. We can demolish the data centres. We can make the Internet go dark. But an AI that evolves into an ASI will no longer be constrained by us.

Ray Kurzweil envisions a utopian scenario he calls The Singularity, in which AI and humanity merge into a new species, part biological and part cyber. Elon Musk, of Tesla and SpaceX, started Neuralink, a company that has developed neural implants to facilitate brain-computer interfaces. Neuralink has already enabled a volunteer with an implant to interface with computers through thought. Is this the next step toward The Singularity? There is merit to this technology for people who are medically described as locked in and unable to communicate with the rest of us. Neural implants can reconnect them to the world.

But from an ASI’s perspective, why would it need to merge with us and all of our biological baggage and maintenance requirements? An ASI that is energy independent could exploit the resources of Earth, the Solar System, and even the Galaxy without our involvement.

Musk is also invested in humanity’s survival, with SpaceX as the ark meant to make us a multiplanetary species. Could this also save us from an ASI? If humans were to adapt to new planetary environments, re-engineer our DNA (we already have this capability), and evolve for non-Earth conditions, we could create safe harbours where even our ASI creation could not destroy us.

But space travel for a biological species, as we are finding out, is much harder than sending intelligent robotic spacecraft to visit the planets and stars. A space robot with ASI won’t need a protective bubble containing breathable air, food, water, and other consumables to transit the Solar System or even the Galaxy. Instead, the ASI will harvest what it needs from space as it travels to the planets or goes interstellar.

Garrett’s conclusion that biological technological civilizations have short shelf lives is his explanation for the Great Silence. He sees ASI as the likely killer of extraterrestrial intelligent civilizations. But there could be a simpler explanation. Maybe we are looking and listening for technosignatures using the wrong frame of reference.

SETI, the Search for Extraterrestrial Intelligence, has used Earth-based radio telescopes to listen for signals from aliens. The radio wavelengths searched may be well off the mark for another technological civilization, which may have long since abandoned radio given all the background noise. Or maybe the radio telescopes we use continue to point the wrong way. Radio telescopes have been around for a little over 90 years and only began searching for aliens around 60 years ago. Compared to optical telescopes, which have been around for four centuries, our radio imaging of the Universe is in its infancy.

The history of AI is also very short. The first AI appeared in the 1950s with mainframe computers and machine learning algorithms, and AI has continued to operate within the constraints and objectives we have established for it. Garrett therefore concludes that, before an ASI emerges, the need for “establishing comprehensive global AI regulations cannot be overstated,” underscoring “the necessity…to intensify efforts to control and regulate AI.” His warning to humanity: “The continued presence of consciousness in the Universe may depend on the success of strict global regulatory measures.”

Len Rosen lives in Oakville, Ontario, Canada. He is a former management consultant who worked with high-tech and telecommunications companies. In retirement, he has returned to a childhood passion to explore advances in science and technology.
