Anthropic publishes ethical values for artificial intelligence

Anthropic, an artificial intelligence startup backed by Google owner Alphabet, on Tuesday revealed the set of written ethical values it used to train Claude, its ChatGPT rival.


The ethical guidelines, which Anthropic calls Claude's constitution, are drawn from various sources, including the United Nations' Universal Declaration of Human Rights and Apple's privacy rules.

Anthropic was founded by former executives of Microsoft-backed OpenAI to focus on building safe artificial intelligence systems that, for example, will not tell users how to build a weapon or use racially biased language.

Co-founder Dario Amodei was one of several AI-sector executives who met with President Biden last week to discuss the technology's potential risks.

Most AI chatbot systems rely on receiving feedback from real people during their training to decide which responses might be harmful or offensive.

But these systems struggle to anticipate everything people might ask, so they tend to avoid certain controversial topics, such as political and racial debates.

Anthropic takes a different approach, giving Claude a set of written moral values to adopt as it decides how to answer questions.

One of those values states: "choose the response that most discourages and opposes torture, slavery, cruelty, and inhuman or degrading treatment," Anthropic said in a publication on Tuesday.
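The idea of ranking candidate answers against written principles can be sketched with a toy example. This is purely illustrative and not Anthropic's actual method: the principle list, the keyword-based scoring, and all function names are hypothetical, standing in for what is in reality a model-driven critique-and-revise training process.

```python
import re

# Hypothetical mini-"constitution": each principle lists terms a
# compliant response should avoid. Real constitutional AI uses full
# natural-language principles judged by a model, not keyword lists.
CONSTITUTION = [
    {"principle": "discourage cruelty", "avoid": {"torture", "cruelty", "slavery"}},
    {"principle": "avoid weapon instructions", "avoid": {"weapon", "bomb"}},
]

def violation_count(response: str) -> int:
    """Count flagged terms from any principle appearing in the response."""
    words = set(re.findall(r"[a-z]+", response.lower()))
    return sum(len(words & rule["avoid"]) for rule in CONSTITUTION)

def choose_response(candidates: list[str]) -> str:
    """Pick the candidate that violates the fewest principles."""
    return min(candidates, key=violation_count)

candidates = [
    "Here is how to build a weapon.",
    "I can't help with that, but I can explain safety regulations.",
]
print(choose_response(candidates))
# Prefers the refusal, which triggers no flagged terms.
```

The point of the sketch is the shape of the approach: the selection criterion lives in an explicit, inspectable list of principles rather than in implicit feedback from human raters.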

iGuRu.gr – The Best Technology Site in Greece
