AI now considered an "existential risk" to the planet and humankind as atomic scientists reset Doomsday Clock at 90 seconds to midnight
Atomic scientists recently updated the Doomsday Clock to reflect the imminent danger facing the planet.
On Jan. 23, they reset it at 90 seconds to midnight for the second year in a row, matching the closest the clock has ever been to the hour. Midnight on the Doomsday Clock represents an "apocalyptic" state, which is why the 90-second mark is classified as "a moment of historic danger."
The Doomsday Clock is a symbolic design that warns the public about how close humanity is to destroying the world with dangerous technologies. As a metaphor, the Doomsday Clock is a reminder of the many threats humanity must address if people are to survive on the planet.
It was created in 1947. At the time, the greatest danger to humanity came from nuclear weapons.
The Bulletin of the Atomic Scientists (the Bulletin) is a non-profit organization founded in 1945 by a team of scientists, including Albert Einstein and J. Robert Oppenheimer. It first considered possible catastrophic disruptions beyond nuclear weapons in its hand-setting deliberations in 2007.
Hyman Goldsmith, the Bulletin's co-editor, asked artist Martyl Langsdorf to create a design for the cover of the June 1947 edition of the Bulletin, the first issue published as a magazine rather than a newsletter.
Martyl was married to Alexander Langsdorf, a physicist who worked on the Manhattan Project (the American-led effort from 1942 to 1945 that produced the first atomic bombs) at the University of Chicago.
At first, Martyl considered using the symbol for uranium. But as she listened to the scientists who had worked on the nuclear bomb, she felt their urgency as they debated "the consequences of the new technology and their responsibility to inform the public."
Martyl drew a clock to suggest that humanity didn’t have much time left to get atomic weapons under control. Michael Bierut, a graphic designer, reimagined the iconic image in 2007.
The Doomsday Clock is set based on "existential risks" to the planet and humankind, which include nuclear threats, climate change and disruptive technologies.
AI was added as a new concern following the rapid technological developments in the field in recent years. The Bulletin warned that AI is the "most significant development" in the disruptive technology space.
According to the scientists, there is no denying that "AI is a paradigmatic disruptive technology. Any physical threat posed by AI must be enabled by a link to devices that can change the state of the physical world."
The Bulletin also mentioned the corruption of information enabled by AI as a "potentially important factor in preventing the world from dealing with other urgent threats," such as pandemics and nuclear risks.
AI is also increasingly being used in military and information operations, particularly for intelligence, surveillance, reconnaissance, simulation and training. (Related: DeepLearning.AI founder warns against the dangers of AI during annual meeting of globalist WEF.)
Researchers have warned that lethal autonomous weapons are a great concern, especially because they can identify and destroy targets without human intervention. Decisions to put AI in control of important physical systems, such as nuclear weapons, could potentially pose a direct existential threat to humanity.
Many countries are recognizing the risks of AI
The Bulletin's statement also highlighted that several countries have recognized these concerns about AI and have started taking steps to regulate the technology and minimize its harmful impact.
In August 2023, China became the first country to put regulations on AI into effect.
In December 2023, the European Union reached a provisional agreement on its AI regulations, which member states and the European Parliament still had to formally adopt.
However, the U.S., which has emerged as the center of AI development over the last year, has yet to pass a federal AI law. In October 2023, President Joe Biden issued an executive order establishing six new AI safety and security standards.
The Bulletin warned that AI and other information technologies can be abused by authoritarian regimes to monitor citizens' activities and curtail their liberties.
Additionally, experts have raised concerns over AI's ability to influence political elections through fake news and information manipulation ahead of a major global election season in 2024.
The Bulletin concluded that the first step in addressing AI and the other identified threats involves three of the world's leading powers: the U.S., China and Russia. Despite their profound disagreements, these countries must begin a serious dialogue about the threats, and they can still pull the world back from the brink of catastrophe. They should do so "with clarity and courage, and without delay," researchers at the Bulletin warned.
Visit FutureScienceNews.com for more news about AI technology and how it can harm humanity.
Watch the video below as Elon Musk talks about the dangers of AI at the World Government Summit 2023.
This video is from the jonastheprophet channel on Brighteon.com.
More related stories:
AI chatbot tries to get British anti-terrorism advisor TO JOIN ISIS.
Study: ChatGPT-4 can be programmed to LIE and DECEIVE.
U.S., Canadian AI companies COLLABORATE with Chinese experts to shape international AI policy.
Sources include:
CoinTelegraph.com
TheBulletin.org 1
TheBulletin.org 2
Brighteon.com