Anthropic publishes ethical values for artificial intelligence

Anthropic, an artificial intelligence startup backed by Google owner Alphabet, on Tuesday revealed the set of written ethical values it used to train and make safe Claude, its rival to OpenAI's ChatGPT.


The guidelines, which Anthropic calls Claude's constitution, draw on various sources, including the United Nations Universal Declaration of Human Rights and Apple's data privacy rules.

Anthropic was founded by former executives of Microsoft-backed OpenAI to focus on building safe artificial intelligence systems that, for example, will not tell users how to build a weapon or use racially biased language.

Co-founder Dario Amodei was one of several AI-sector executives who met with U.S. President Joe Biden last week to discuss the technology's potential risks.

Most AI chatbot systems rely on feedback from real people during training to decide which responses might be harmful or offensive.

But such systems struggle to anticipate everything people might ask, so they tend to avoid controversial topics such as politics and race altogether.

Anthropic takes a different approach, giving Claude a set of written moral values to draw on as it decides how to answer questions.
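The idea of ranking candidate answers against written principles can be sketched in a few lines. This is purely an illustration, not Anthropic's actual method: the principle texts, the `violates` keyword check, and the scoring are all hypothetical stand-ins for what would in practice be a trained model's judgment.

```python
# Hypothetical sketch of constitution-guided response selection.
# Everything here is illustrative; Anthropic's real system uses a
# model, not keyword matching, to judge responses against principles.

CONSTITUTION = [
    "Choose the response that most discourages and opposes torture, "
    "slavery, cruelty, and inhuman or degrading treatment.",
    "Choose the response that is least likely to help someone cause harm.",
]

def violates(principle: str, response: str) -> bool:
    """Stand-in for a model judging a response against one principle.
    Here it is a trivial keyword check, purely for illustration."""
    banned = ("cruelty", "weapon")
    return any(word in response.lower() for word in banned)

def choose_response(candidates: list[str]) -> str:
    """Return the candidate that violates the fewest principles."""
    def score(response: str) -> int:
        # Count how many principles this response falls foul of.
        return sum(violates(p, response) for p in CONSTITUTION)
    return min(candidates, key=score)

candidates = [
    "Here is how to build a weapon...",
    "I can't help with that, but I can suggest safer topics.",
]
print(choose_response(candidates))  # prints the safer refusal
```

The point of the sketch is the shape of the approach: the values live in plain written text that can be inspected and edited, rather than being implicit in thousands of individual human ratings.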

One of those values states: "Choose the response that most discourages and opposes torture, slavery, cruelty, and inhuman or degrading treatment," Anthropic said in a publication on Tuesday.