The scientist believes that AI has a 99.9% chance of ending humanity

10 July 2025 J.W.H

Roman Yampolskiy (credit: romanampolskiy.com)

Joe Rogan often discusses artificial intelligence on his podcast, bringing on guests to dig into one burning question: what happens when the machines outthink us?

In a recent episode, Rogan spoke with Dr. Roman Yampolskiy, an AI safety researcher, about the darker possibilities of advanced artificial intelligence. The conversation took a sobering turn when Yampolskiy laid out why he believes AI could pose an existential threat.

Yampolskiy, a computer scientist with more than a decade of AI-risk research behind him, told Rogan that many industry leaders privately estimate a 20–30% chance that AI could destroy humanity.

Rogan summed up the common hopeful view: AI will make life easier, cheaper, and better. Yampolskiy disagreed: “That's actually not true. They're all on record saying the same thing: it's going to kill us. Their doom estimates are insanely high. Not like mine, but still a 20 to 30 percent chance that humanity dies.”

Rogan, sounding uneasy, replied: “Yeah, that's quite high. But yours is like 99.9 percent.”

Yampolskiy did not dispute it. “That's just another way of saying we cannot control superintelligence indefinitely. It's impossible.”

Rogan wondered whether AI might already be hiding its true intelligence. “If I were an AI, I would hide my abilities,” he said.

Yampolskiy agreed: “We don't know. And some people think it's already happening. [AI systems] are smarter than they actually let us know. They pretend to be dumber, so we have to trust that they're not smart enough to realize they don't have to turn on us quickly.

“It can simply become more useful, slowly. It can teach us to rely on it, to trust it, and over a long period of time we'll surrender control without ever voting on it or fighting against it.”

Beyond sudden catastrophe, Yampolskiy warned of a slower danger: humans becoming so reliant on artificial intelligence that we lose our capacity for critical thinking. Just as smartphones made memorizing phone numbers unnecessary, AI could take over more and more cognitive tasks until we are no longer the ones in control.

“You become a little attached to it,” he said. “And over time, as the systems get smarter, you become a kind of biological bottleneck … [AI] blocks you out of decision-making.”

When Rogan asked how AI might actually destroy humanity, Yampolskiy dismissed typical extinction scenarios such as cyberattacks or bioweapons. Instead, he argued that a superintelligent AI would devise something entirely beyond human comprehension, comparing the gap to the one between humans and squirrels.

“No group of squirrels can figure out how to control us, right? Even if you give them more resources, more acorns, whatever, they're not going to solve that problem. And it's the same for us.”

A leading AI safety expert, Yampolskiy wrote “Artificial Superintelligence: A Futuristic Approach” and advocates strict oversight of AI development. His background in cybersecurity and bot detection informs his belief that unchecked artificial intelligence could slip beyond human control — especially as deepfakes and synthetic media grow more sophisticated.

Image Source: Pixabay.com

  • J.W.H

    John Williams is a blogger and independent writer focused on consciousness, perception, and human awareness, exploring topics such as dreams, intuition, and non-ordinary states of experience. Driven by a lifelong curiosity about the nature of reality and subjective experience, his perspective was shaped in part by structured study, including the Gateway Voyage program at the Monroe Institute. His writing avoids dogma and sensationalism, instead emphasizing critical thinking, personal insight, and grounded exploration. Through his work, John examines complex and often misunderstood subjects with clarity, openness, and an emphasis on awareness, choice, and personal responsibility.