DeMeVa at LeWiDi-2025: Modeling Perspectives with In-Context Learning and Label Distribution Learning
Abstract
DeMeVa explores in-context learning and label distribution learning for predicting annotator-specific annotations and generating soft labels, demonstrating competitive performance and potential for further research.
This system paper presents the DeMeVa team's approaches to the third edition of the Learning with Disagreements shared task (LeWiDi 2025; Leonardelli et al., 2025). We explore two directions: in-context learning (ICL) with large language models, where we compare example sampling strategies; and label distribution learning (LDL) methods with RoBERTa (Liu et al., 2019b), where we evaluate several fine-tuning methods. Our contributions are twofold: (1) we show that ICL can effectively predict annotator-specific annotations (perspectivist annotations), and that aggregating these predictions into soft labels yields competitive performance; and (2) we argue that LDL methods are promising for soft label predictions and merit further exploration by the perspectivist community.
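The abstract describes aggregating annotator-specific ICL predictions into soft labels. A minimal sketch of that aggregation step, under the simple assumption that each annotator's predicted label counts as one vote that is normalized into a distribution over the label space (the function name and vote-counting scheme are illustrative, not the authors' exact pipeline):

```python
from collections import Counter

def aggregate_soft_label(annotator_predictions, label_space):
    """Turn per-annotator predicted labels into a soft label distribution.

    Illustrative sketch: one vote per annotator, normalized to probabilities.
    """
    counts = Counter(annotator_predictions)
    total = len(annotator_predictions)
    return {label: counts.get(label, 0) / total for label in label_space}

# e.g. four annotator-specific ICL predictions on a binary task
soft = aggregate_soft_label(
    ["offensive", "not_offensive", "offensive", "offensive"],
    label_space=["offensive", "not_offensive"],
)
# soft == {"offensive": 0.75, "not_offensive": 0.25}
```

The resulting distribution can then be scored against the human soft label with any distributional metric.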
Community
Check out the DeMeVa team's approaches to LeWiDi-2025, the shared task at this year's NLPerspectives workshop at EMNLP. We ranked 2nd overall with in-context learning, and also explored fine-tuning methods inspired by label distribution learning. Our analysis shows that LLMs can effectively learn and follow individual annotators' annotation patterns, and that the impact of demonstration selection strategies is closely tied to the label structure (e.g., binary vs. Likert-scale).
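For the label distribution learning direction, one common training objective is a KL-divergence loss between the human soft label and the model's predicted distribution. A hedged sketch of that loss (the paper evaluates several fine-tuning methods; this is only one illustrative instance, and the epsilon smoothing is an assumption for numerical safety):

```python
import math

def kl_loss(target, predicted, eps=1e-9):
    """KL(target || predicted) between two soft label distributions.

    target/predicted: sequences of probabilities over the same label space.
    eps guards against log(0) when a distribution assigns zero mass.
    """
    return sum(t * math.log((t + eps) / (p + eps))
               for t, p in zip(target, predicted))

# human soft label from annotators vs. the model's predicted distribution
loss = kl_loss([0.75, 0.25], [0.6, 0.4])
```

The loss is zero when the two distributions match and grows as the model's prediction drifts from the annotators' distribution, which is what makes it a natural fit for soft-label evaluation.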
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- LPI-RIT at LeWiDi-2025: Improving Distributional Predictions via Metadata and Loss Reweighting with DisCo (2025)
- Modeling Annotator Disagreement with Demographic-Aware Experts and Synthetic Perspectives (2025)
- The Impact of Annotator Personas on LLM Behavior Across the Perspectivism Spectrum (2025)
- Where to show Demos in Your Prompt: A Positional Bias of In-Context Learning (2025)
- Improving in-context learning with a better scoring function (2025)
- Can LLM-Generated Textual Explanations Enhance Model Classification Performance? An Empirical Study (2025)
- Will Annotators Disagree? Identifying Subjectivity in Value-Laden Arguments (2025)