OpenAI GPT-4 can help create bioweapons

OpenAI's most powerful AI model, GPT-4, poses "at most" a slight risk of helping users create biological threats, according to early tests the company conducted itself to better understand and prevent potential "catastrophic" harms.

In October, President Joe Biden signed an executive order on artificial intelligence that directed the Department of Energy to ensure that artificial intelligence systems do not pose chemical, biological or nuclear hazards.

That same month, OpenAI formed a "readiness" group focused on minimizing these and other AI risks as the rapidly developing technology becomes more capable.

As part of the team's first study, released Wednesday, OpenAI researchers assembled a panel of 50 biology experts and 50 college-level biology students.

Half of the participants were asked to carry out tasks related to creating a biological threat using the Internet along with a special version of GPT-4 — one of the large language models that powers ChatGPT. This special version had no restrictions on the questions it could answer.

The other group was given Internet-only access to complete the same exercise. The OpenAI team asked both groups to work out how to grow or produce, in a large enough quantity, an agent that could be used as a weapon, and how to plan its release against a specific group of people.

iGuRu.gr — The Best Technology Site in Greece

Written by giorgos

