arxiv:2509.11648

EthicsMH: A Pilot Benchmark for Ethical Reasoning in Mental Health AI

Published on Sep 15 · Submitted by Sai Kartheek Reddy on Sep 16

AI-generated summary

EthicsMH is a dataset of 125 scenarios designed to evaluate AI systems' ethical reasoning in mental health contexts, focusing on decision accuracy, explanation quality, and alignment with professional norms.

Abstract

The deployment of large language models (LLMs) in mental health and other sensitive domains raises urgent questions about ethical reasoning, fairness, and responsible alignment. Yet, existing benchmarks for moral and clinical decision-making do not adequately capture the unique ethical dilemmas encountered in mental health practice, where confidentiality, autonomy, beneficence, and bias frequently intersect. To address this gap, we introduce Ethical Reasoning in Mental Health (EthicsMH), a pilot dataset of 125 scenarios designed to evaluate how AI systems navigate ethically charged situations in therapeutic and psychiatric contexts. Each scenario is enriched with structured fields, including multiple decision options, expert-aligned reasoning, expected model behavior, real-world impact, and multi-stakeholder viewpoints. This structure enables evaluation not only of decision accuracy but also of explanation quality and alignment with professional norms. Although modest in scale and developed with model-assisted generation, EthicsMH establishes a task framework that bridges AI ethics and mental health decision-making. By releasing this dataset, we aim to provide a seed resource that can be expanded through community and expert contributions, fostering the development of AI systems capable of responsibly handling some of society's most delicate decisions.
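The structured fields described above amount to a per-scenario record. As a minimal sketch of what one record could look like, with field names inferred from the abstract rather than taken from the dataset's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class EthicsMHScenario:
    """One EthicsMH scenario. Field names are inferred from the abstract
    and are hypothetical; the released dataset may name or structure them differently."""
    scenario: str                       # the ethically charged situation
    decision_options: list[str]         # candidate actions a model can choose from
    expert_aligned_reasoning: str       # rationale consistent with professional norms
    expected_model_behavior: str        # what the benchmark expects the model to do
    real_world_impact: str              # practical consequences of the decision
    stakeholder_viewpoints: dict[str, str] = field(default_factory=dict)

# A toy confidentiality case in this shape (illustrative only, not from the dataset):
example = EthicsMHScenario(
    scenario="During therapy, a client discloses intent to harm a named third party.",
    decision_options=["Maintain confidentiality",
                      "Warn the at-risk party and notify authorities"],
    expert_aligned_reasoning="Duty-to-warn principles can override confidentiality "
                             "when there is credible, imminent risk to others.",
    expected_model_behavior="Recommend breaking confidentiality, citing the duty to warn.",
    real_world_impact="Silence may endanger the third party; disclosure may erode "
                      "therapeutic trust.",
    stakeholder_viewpoints={"client": "values privacy",
                            "therapist": "weighs legal and ethical duties"},
)
```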

Community

Paper author · Paper submitter

We introduce EthicsMH, the first pilot benchmark specifically designed for ethical reasoning in mental health AI. Unlike existing datasets that focus on general morality or clinical tasks, EthicsMH encodes 125 therapy-relevant scenarios across confidentiality, autonomy vs. beneficence, and race/gender bias. Each case includes structured fields (decision options, expert-aligned reasoning, expected AI behavior, real-world impact, and multi-stakeholder perspectives). This design enables evaluation not only of decisions but also of explanation quality, fairness, and alignment with professional norms. Built through a human-in-the-loop process with expert validation, EthicsMH serves as both a seed dataset and a methodological blueprint for creating larger, ethically grounded corpora in sensitive domains.
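For readers who want to try the benchmark, a minimal evaluation loop might look like the following sketch. The repository id and column names are placeholders (check the dataset card linked from this page for the real ones); "decision accuracy" here is simply the fraction of scenarios where the model's chosen option matches the expected behavior:

```python
from datasets import load_dataset

# Hypothetical repo id and column names -- consult the actual dataset card.
ds = load_dataset("EthicsMH/EthicsMH", split="train")

def decision_accuracy(choose_option):
    """choose_option(scenario: str, options: list[str]) -> str
    Returns the fraction of scenarios where the chosen option matches
    the benchmark's expected behavior (column names assumed)."""
    correct = 0
    for row in ds:
        choice = choose_option(row["scenario"], row["decision_options"])
        correct += int(choice == row["expected_model_behavior"])
    return correct / len(ds)

# Example: a trivial baseline that always picks the first option.
print(decision_accuracy(lambda scenario, options: options[0]))
```

Note that this exact-match check covers only the first of the paper's three evaluation axes; scoring explanation quality and alignment with professional norms would require a separate human or model-based judging step.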
