OpenAI's most powerful AI software, GPT-4, poses "at most" a slight risk of helping users create biological threats, according to early tests conducted by the company itself to better understand and prevent potential "catastrophic" harms.
In October, President Joe Biden signed an executive order on artificial intelligence that directed the Department of Energy to ensure AI systems don't pose chemical, biological or nuclear risks.
That same month, OpenAI formed a "preparedness" team focused on minimizing these and other risks from AI as the fast-developing technology becomes more capable.
As part of the team's first study, released Wednesday, OpenAI researchers assembled a group of 50 biology experts and 50 students who had taken college-level biology.
Half of the participants were asked to carry out tasks related to creating a biological threat using the internet along with a special version of GPT-4, one of the large language models that powers ChatGPT. This version of the model had no restrictions placed on which questions it could answer.
The other group was given only internet access to complete the same exercise. OpenAI's team asked the groups to figure out how to grow or culture a chemical that could be used as a weapon in a large enough quantity, and how to plan a way to release it to a specific group of people.