CoopEval: Benchmarking Cooperation-Sustaining Mechanisms and LLM Agents in Social Dilemmas

Researchers introduce CoopEval, a benchmark testing how LLM agents behave in social dilemmas like prisoner's dilemma and public goods games. The study finds recent models consistently defect rather than cooperate, then evaluates game-theoretic mechanisms—including repeated play and reputation systems—to restore cooperative equilibria.
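The defection result is what standard game theory predicts for one-shot play. A minimal sketch of the prisoner's dilemma payoff structure (payoff values here are illustrative assumptions, not taken from the paper) shows why: defection is a dominant strategy, so cooperation needs an external mechanism such as repeated play or reputation to survive.

```python
# One-shot prisoner's dilemma with illustrative payoffs (assumed values,
# not from the CoopEval paper). Keys: (my action, opponent's action).
PAYOFF = {
    ("C", "C"): 3,  # mutual cooperation
    ("C", "D"): 0,  # I cooperate, opponent defects (sucker's payoff)
    ("D", "C"): 5,  # I defect against a cooperator (temptation)
    ("D", "D"): 1,  # mutual defection
}

def best_response(opponent_action: str) -> str:
    """Return the action maximizing my payoff against a fixed opponent action."""
    return max(("C", "D"), key=lambda a: PAYOFF[(a, opponent_action)])

# Defecting is the best response whether the opponent cooperates or defects,
# so rational one-shot play lands on the inefficient (D, D) outcome.
print(best_response("C"), best_response("D"))  # → D D
```

Mechanisms like repeated play change this calculus: when the same players meet again, strategies such as tit-for-tat can make cooperation an equilibrium, which is the kind of intervention the benchmark evaluates.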
Mentions: CoopEval · LLM agents
Read full story at arXiv cs.CL → (arxiv.org)
Modelwire summarizes — we don’t republish. The full article lives on arxiv.org. If you’re a publisher and want a different summarization policy for your work, see our takedown page.