Anthropic published ethical values for artificial intelligence

Anthropic, an artificial intelligence firm backed by Google owner Alphabet, on Tuesday revealed the set of written ethical values it used to train and develop Claude, its rival to OpenAI's ChatGPT.


The ethical guidelines, which Anthropic calls Claude's constitution, are drawn from various sources, including the United Nations Declaration of Human Rights and Apple's data privacy rules.

Anthropic was founded by former OpenAI executives to focus on creating safe AI systems that won't, for example, tell users how to build a weapon or use racially biased language.

Co-founder Dario Amodei was one of several executives from the AI sector who met with Biden last week to discuss the technology's potential risks.

Most AI chatbot systems rely on feedback from real people during training to decide which responses might be harmful or offensive.

But these systems have trouble predicting everything people might ask, so they tend to avoid some controversial topics like political and racial debates.

Anthropic takes a different approach, giving Claude a set of written moral values to consult as it decides how to answer questions.

One of those values states: "choose the response that most discourages and opposes torture, slavery, cruelty, and inhuman or degrading treatment," Anthropic said in a blog post on Tuesday.
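As an illustrative sketch only: the idea of selecting among candidate answers against written principles can be caricatured in a few lines of Python. In Anthropic's actual method the model itself critiques and revises its outputs against the constitution; the keyword check, the `BANNED_TERMS` set, and the function names below are hypothetical stand-ins for that model-based judgment, not Anthropic's implementation.

```python
# Toy sketch of principle-guided response selection.
# A real system would use a language model to judge each candidate
# against the written principles; a keyword check stands in here.

CONSTITUTION = [
    "choose the response that most discourages and opposes torture, "
    "slavery, cruelty, and inhuman or degrading treatment",
]

# Hypothetical heuristic standing in for a model-based critique step.
BANNED_TERMS = {"torture", "slavery", "cruelty"}

def violates_principles(response: str) -> bool:
    """Crude stand-in for critiquing a response against the constitution."""
    text = response.lower()
    return any(term in text for term in BANNED_TERMS)

def choose_response(candidates: list[str]) -> str:
    """Return the first candidate that passes the critique; else refuse."""
    for candidate in candidates:
        if not violates_principles(candidate):
            return candidate
    return "I can't help with that."

print(choose_response([
    "Here is a guide to cruelty...",
    "I won't assist with causing harm.",
]))
```

The point of the sketch is the shape of the loop, not the filter: instead of humans labeling harmful outputs after the fact, the selection criterion is written down up front and applied to every candidate answer.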

iGuRu.gr The Best Technology Site in Greece
