The ethical guidelines, which Anthropic calls Claude's constitution, draw on several sources, including the United Nations Universal Declaration of Human Rights and Apple's data privacy rules.
Anthropic was founded by former executives of Microsoft-backed OpenAI to focus on creating safe AI systems that, for example, will not tell users how to build a weapon or use racially biased language.
Co-founder Dario Amodei was one of several AI executives who met with Biden last week to discuss the technology's potential dangers.
Most AI chatbot systems rely on feedback from real humans during their training to decide which responses might be harmful or offensive.
But those systems struggle to anticipate everything people might ask, so they tend to avoid some potentially contentious topics such as politics and race altogether.
Anthropic takes a different approach, giving Claude a set of written moral values to draw on as it decides how to answer questions.
One of those values instructs Claude to "choose the response that most discourages and opposes torture, slavery, cruelty, and inhuman or degrading treatment," Anthropic said in a statement published on Tuesday.