Scientists and technology industry experts have issued yet another warning regarding artificial intelligence. These leaders keep sounding the alarm about the potential risks AI poses to human existence and civilization.
Last Tuesday, May 30, hundreds of academics, researchers, scientists and other prominent figures in the artificial intelligence field, including Sam Altman (chief executive of OpenAI) and Demis Hassabis (chief executive of Google DeepMind), expressed their concern about the future of humanity.
They signed a public statement that contains only a single sentence about the possible dangers of this rapid technological advance. The statement advocates treating AI risk with the same priority as other societal-scale threats, and it amounts to a call to regulate the technology before it is too late.
"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," the statement reads.
These remarks are direct and should not be taken lightly. The leaders and academics in the technology field are warning, quite literally, that the AI revolution must be taken as seriously as nuclear war. They are urging lawmakers and politicians to raise awareness of the issue and begin drafting basic laws to rein in the power these systems are rapidly acquiring.