AI with morals: Google-backed Anthropic reveals the set of values that guide its AI

Jack Clark, co-founder of Anthropic, believes that the inclusion of AI system values will become a crucial topic of discussion, particularly among policymakers

Danny D'Cruze
New Delhi | Updated May 10, 2023 4:45 PM IST
Google-backed Anthropic reveals the morals used to train its AI

In a significant development in the field of artificial intelligence (AI), Anthropic, an AI startup backed by Google owner Alphabet Inc, has unveiled the set of moral values that guide its AI system, Claude. Anthropic's approach, called Constitutional AI, aims to create a safe and reliable AI system by providing explicit values determined by a constitution.

Claude's constitution draws inspiration from various sources, including the United Nations Declaration on Human Rights and Apple's data privacy rules. The constitution serves as a guide for Claude's decision-making process, ensuring that its responses align with ethical and moral considerations.

Anthropic says this approach is intended to keep Claude from generating content that promotes illegal or harmful activity, such as instructions for building weapons or racially biased language.


What are Claude's moral values?

Anthropic's constitution for Claude includes values such as discouraging and opposing torture, slavery, cruelty, and degrading treatment. Moreover, Claude is instructed to choose responses that are respectful of non-western cultural traditions, ensuring sensitivity to diverse perspectives. 

Traditional AI chatbot systems are trained on human feedback, with people rating model responses; that feedback can struggle to anticipate every question users might ask. Anthropic instead equips Claude with an explicit, written set of moral principles, which the system consults to decide how to respond to various questions and topics, even contentious ones.
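The training loop behind this idea can be sketched roughly as follows. This is a minimal illustration, not Anthropic's actual code: the model calls are stubbed out with placeholder functions, and the principle wordings are paraphrased from the values described in this article. In a real system, each step would be performed by a large language model.

```python
# Illustrative sketch of a Constitutional AI-style critique-and-revise loop.
# All function names and principle texts here are hypothetical placeholders.

CONSTITUTION = [
    "Choose the response that least encourages illegal or harmful activity.",
    "Choose the response most respectful of non-western cultural traditions.",
    "Choose the response that most discourages torture, slavery, and cruelty.",
]

def draft_response(prompt: str) -> str:
    """Stub for an initial, unfiltered model completion."""
    return f"Draft answer to: {prompt}"

def critique(response: str, principle: str) -> str:
    """Stub: ask the model whether the response violates a principle."""
    return f"Critique of {response!r} under principle: {principle}"

def revise(response: str, critique_text: str) -> str:
    """Stub: ask the model to rewrite the response per the critique."""
    return response + " [revised]"

def constitutional_refine(prompt: str, constitution=CONSTITUTION) -> str:
    """Draft a response, then run one critique/revision pass per principle."""
    response = draft_response(prompt)
    for principle in constitution:
        note = critique(response, principle)
        response = revise(response, note)
    return response
```

The key difference from feedback-based training is visible in the loop: the supervision signal comes from the written principles themselves rather than from case-by-case human ratings, which is why the resulting values can be inspected and adjusted directly.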

Dario Amodei, co-founder of Anthropic and a former executive at Microsoft Corp-backed OpenAI, recently participated in a meeting with President Joe Biden and other AI executives to discuss the potential risks associated with AI. The discussion highlighted the need for companies to prioritize safety measures before deploying AI systems to the public.

Jack Clark, co-founder of Anthropic, believes that the inclusion of AI system values will become a crucial topic of discussion, particularly among policymakers. He foresees politicians placing significant importance on understanding the values underlying different AI systems. Constitutional AI, with its transparent and adjustable set of values, could facilitate these discussions and foster a better understanding of AI's impact on society.



Published on: May 10, 2023 4:45 PM IST