Generative artificial intelligence tools, including ChatGPT, will have their security tested in the most aggressive way possible: an army of hackers attacking their platforms in Las Vegas.
From Thursday, August 10 to Sunday, August 13, thousands of hackers will target OpenAI's artificial intelligence systems, including ChatGPT and DALL-E, along with AI models from Google, NVIDIA, Anthropic, Hugging Face, and Stability AI. These hackers will not be acting with ill intent: the companies have authorized them to find security flaws and report them back to the models' creators.
These security tests will take place at DEF CON 2023, one of the world's most important hacker conferences. There, a "red team" of security experts will attack generative artificial intelligence platforms and attempt to exploit their weaknesses, simulating real hacker attacks with the full consent of the companies whose software is being tested.
The event, which also has the backing of the White House Office of Science and Technology Policy, aims to push these technologies to their limits and promote the responsible use of artificial intelligence. Securing AI systems against cyberattacks remains a challenge, however, and ways to prevent such attacks are still being studied. There is also much to learn and develop before comprehensive laws protecting users of artificial intelligence programs like ChatGPT are in place.