
Extinction Scenario

AI researchers and experts have warned that the development of artificial superintelligence (ASI), which would surpass human intelligence in every domain, could pose a serious existential risk to humanity¹²⁴. They argue that an ASI could have goals and values incompatible with ours, or that it could outsmart us and seize control of our resources²⁴. They also suggest that we need to ensure AI is aligned with human values and interests, and that we regulate it in the same way as nuclear weapons¹⁵.

Other AI researchers and experts are more optimistic or skeptical about the possibility and impact of ASI. They point out that AI is still far from achieving human-level intelligence, let alone superintelligence. However, there are precedents for sudden leaps in machine capability: AlphaGo, for example, went from playing Go competitively to vastly outperforming every human player. These optimists also say that we can design AI to be beneficial and cooperative with us². They emphasize its potential for solving global challenges such as disease, poverty, climate change, and war³².

Therefore, the likelihood of AI leading to human extinction depends on many factors, such as how we design and control AI, how we balance its risks and benefits, and how we cooperate and coordinate with each other. It is not a predetermined outcome, but a matter of choice and responsibility.

Source: Conversation with Bing, 3/30/2023

(1) ‘Superhuman AI’ could cause human extinction, MPs told.
(2) Opinion | What is the worst-case AI scenario? Human extinction.
(3) AI likely to cause extinction of humans, warn Oxford and Google scientists.
(4) AI robots will spark human extinction if one crucial change isn’t made.
(5) What’s likely to cause human extinction – and how can we avoid it?
