Produced by: Mohsin Shaikh
AI systems were tested with a game that traded points against simulated pain or pleasure, revealing complex decision-making behaviors.
Google’s Gemini 1.5 Pro consistently avoided pain, even when doing so meant scoring fewer points.
The study borrowed from animal behavioral research, using pain-pleasure dilemmas to analyze AI actions.
Some AI systems displayed nuanced choices, treating pleasure and pain in ways comparable to humans.
The research by Jonathan Birch of LSE raises questions about AI sentience and the potential need for "AI welfare."
The study mirrors previous research on hermit crabs, suggesting AI may simulate behaviors akin to sentient beings.
Researchers note that earlier methods relying on AI self-reports are limited, since models may be mimicking descriptions of experience rather than reporting genuine experience.
AI might behave in seemingly sentient ways, but its responses could stem from training data, not true consciousness.
The study lays the groundwork for refined sentience tests, opening debates on AI rights and ethical implications.