Is artificial intelligence the reason we haven't contacted intelligent aliens yet?

22 May 2024 J.W.H

Michael Garrett: Artificial intelligence (AI) has been developing at an astonishing pace over the past few years. Some scientists are now working toward artificial superintelligence (ASI) – a form of AI that would not only surpass human intelligence but would not be bound by the speed at which humans learn.

But what if this milestone isn't only a remarkable achievement? What if it also represents a formidable bottleneck in the development of all civilizations – one so difficult to pass that it thwarts their long-term survival?

This idea is at the heart of a research article I recently published in Acta Astronautica. Could AI be the universe's "great filter" – a threshold so hard to overcome that it prevents most life from evolving into space-faring civilizations?

This is an idea that might explain why the search for extraterrestrial intelligence (SETI) has not yet detected signs of advanced technological civilizations elsewhere in the galaxy.

The great filter hypothesis is ultimately a proposed solution to the Fermi Paradox. This raises the question of why, in a universe vast and old enough to host billions of potentially habitable planets, we have not detected any signs of alien civilizations. The hypothesis suggests that there are insurmountable hurdles in a civilization's evolutionary timeline that prevent it from developing into a space-faring species.

I believe the emergence of ASI could be such a filter. The rapid advance of AI, potentially culminating in ASI, may intersect with a critical phase in a civilization's development – the transition from a single-planet to a multi-planetary species.

This is where many civilizations could falter, with AI advancing far faster than our ability to control it, or to sustainably explore and settle our solar system.

The challenge with AI, and with ASI in particular, lies in its autonomous, self-reinforcing and self-improving nature. It has the potential to enhance its own capabilities at a speed that far outpaces our own evolutionary timescales.

The potential for something to go wrong is enormous, and it could lead to the downfall of both biological and AI civilizations before they ever get the chance to become multi-planetary.

For example, if nations increasingly rely on and cede power to autonomous AI systems that compete with one another, military capabilities could be used to kill and destroy on an unprecedented scale. This could potentially lead to the destruction of our entire civilization, including the AI systems themselves.

I estimate that in this scenario the typical lifespan of a technological civilization might be less than 100 years. That is roughly the span between our gaining the ability to receive and broadcast signals between the stars (1960) and the estimated emergence of ASI on Earth (2040). This is alarmingly brief when set against the cosmic timescale of billions of years.

These estimates, when plugged into optimistic versions of the Drake equation – which attempts to estimate the number of active, communicative extraterrestrial civilizations in the Milky Way – suggest that only a handful of intelligent civilizations exist at any given time. Moreover, like us, their relatively modest technological activity may make them quite difficult to detect.
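To see why a short civilizational lifespan so strongly suppresses the count, the Drake equation can be sketched in a few lines. The equation itself (N = R* · fp · ne · fl · fi · fc · L) is standard, but the parameter values below are illustrative "optimistic" assumptions of my own, not figures taken from the article or the underlying paper:

```python
def drake(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
    """Estimate N, the number of communicative civilizations in the galaxy.

    N = R* * fp * ne * fl * fi * fc * L, where L is the number of years a
    civilization remains detectable.
    """
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime


# Illustrative optimistic inputs (assumptions, not the article's values):
n = drake(
    r_star=3,      # star-formation rate in the Milky Way (stars/year)
    f_p=1.0,       # fraction of stars with planets
    n_e=0.2,       # habitable planets per star with planets
    f_l=1.0,       # fraction of habitable planets where life arises
    f_i=1.0,       # fraction of those that develop intelligence
    f_c=1.0,       # fraction that become communicative
    lifetime=100,  # detectable lifetime in years (the proposed AI-filter cap)
)
print(f"{n:.0f}")  # → 60
```

Because N scales linearly with L, capping the communicative lifetime at roughly a century – as the AI-filter argument proposes – keeps N to a few dozen civilizations at most across a galaxy of hundreds of billions of stars, even when every other factor is set generously.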


This study is not only a warning of potential doom. It serves as a wake-up call for humanity to establish robust regulatory frameworks to guide the development of AI, including military systems.

This isn't just about preventing the malign use of AI on Earth; it's also about ensuring that the evolution of AI aligns with the long-term survival of our species.

It suggests we need to devote more resources, as quickly as possible, to becoming a multi-planetary society – a goal that has lain dormant since the peak of Project Apollo but has recently been reignited by advances made by private companies.

As the historian Yuval Noah Harari has noted, nothing in history has prepared us for the impact of introducing non-conscious, superintelligent entities to our planet. Recently, the implications of autonomous AI decision-making have led prominent leaders in the field to call for a moratorium on AI development until a responsible form of control and regulation can be introduced.

But even if every country agreed to abide by strict rules and regulation, rogue organizations would be difficult to rein in.

The integration of autonomous AI into military defense systems should be a particular area of concern. There is already evidence that humans will voluntarily relinquish significant power to increasingly capable systems, because these can carry out useful tasks much more rapidly and effectively without human intervention.

Governments are therefore reluctant to regulate in this area, given the strategic advantages AI offers, as was recently and devastatingly demonstrated in Gaza.

This means we are already edging dangerously close to a precipice where autonomous weapons operate beyond ethical boundaries and sidestep international law.

In such a world, surrendering power to AI systems in order to gain a tactical advantage could inadvertently set off a chain of rapidly escalating, highly destructive events. In the blink of an eye, the collective intelligence of our planet could be obliterated.

Humanity is at a crucial point in its technological trajectory. Our actions now could determine whether we become an enduring interstellar civilization or succumb to the challenges posed by our own creations.

Using SETI as a lens through which to examine our own future development adds a new dimension to the discussion about the future of AI. It is up to all of us to ensure that when we reach for the stars, we do so not as a cautionary tale for other civilizations, but as a beacon of hope – a species that learned to thrive alongside AI.

Michael Garrett, Sir Bernard Lovell Chair of Astrophysics and Director of the Jodrell Bank Centre for Astrophysics at the University of Manchester

This article is republished from The Conversation under a Creative Commons license. Read the original article.
