Abstract
Music Arena is an open platform for real-time human preference evaluation of text-to-music models, featuring a standardized protocol, detailed feedback, and privacy guarantees.
We present Music Arena, an open platform for scalable human preference evaluation of text-to-music (TTM) models. Soliciting human preferences via listening studies is the gold standard for evaluation in TTM, but these studies are expensive to conduct and difficult to compare, as study protocols may differ across systems. Moreover, human preferences might help researchers align their TTM systems or improve automatic evaluation metrics, but an open and renewable source of preferences does not currently exist. We aim to fill these gaps by offering *live* evaluation for TTM. In Music Arena, real-world users input text prompts of their choosing and compare outputs from two TTM systems, and their preferences are used to compile a leaderboard. While Music Arena follows recent evaluation trends in other AI domains, we also design it with key features tailored to music: an LLM-based routing system to navigate the heterogeneous type signatures of TTM systems, and the collection of *detailed* preferences including listening data and natural language feedback. We also propose a rolling data release policy with user privacy guarantees, providing a renewable source of preference data and increasing platform transparency. Through its standardized evaluation protocol, transparent data access policies, and music-specific features, Music Arena not only addresses key challenges in the TTM ecosystem but also demonstrates how live evaluation can be thoughtfully adapted to unique characteristics of specific AI domains. Music Arena is available at: https://music-arena.org
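The abstract describes compiling pairwise user preferences into a leaderboard but does not specify the ranking method here. A common choice for this kind of battle data is the Bradley-Terry model; the sketch below is a minimal illustration of that approach (the model names and battle log are hypothetical, not from Music Arena), fitted with simple MM updates.

```python
from collections import defaultdict

def bradley_terry(battles, iters=200):
    """Estimate Bradley-Terry strengths from pairwise battles.

    battles: list of (winner, loser) model-name pairs.
    Returns a dict mapping model -> normalized strength (higher is better).
    """
    wins = defaultdict(int)        # total wins per model
    pair_count = defaultdict(int)  # battles played per unordered pair
    models = set()
    for winner, loser in battles:
        wins[winner] += 1
        pair_count[frozenset((winner, loser))] += 1
        models.update((winner, loser))

    p = {m: 1.0 for m in models}   # initial strengths
    for _ in range(iters):         # MM (minorization-maximization) iterations
        new_p = {}
        for m in models:
            denom = 0.0
            for other in models:
                if other == m:
                    continue
                n = pair_count[frozenset((m, other))]
                if n:
                    denom += n / (p[m] + p[other])
            new_p[m] = wins[m] / denom if denom else p[m]
        total = sum(new_p.values())
        p = {m: v / total for m, v in new_p.items()}  # normalize to sum to 1
    return p

# Hypothetical battle log: model_a beats model_b 7 times, loses 3 times.
battles = [("model_a", "model_b")] * 7 + [("model_b", "model_a")] * 3
scores = bradley_terry(battles)
```

With the toy log above, the fitted strengths converge toward the empirical win rate, so `scores["model_a"]` exceeds `scores["model_b"]`.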
Community
Excited to share our beta release of Music Arena, a live evaluation platform for state-of-the-art AI music generation models!
🎧 Listen to the latest models and 🗳️ vote for your favorite
Platform: http://music-arena.org
Code: https://github.com/gclef-cmu/music-arena
Paper: https://arxiv.org/abs/2507.20900
Similar papers recommended by the Semantic Scholar API:
- CMI-Bench: A Comprehensive Benchmark for Evaluating Music Instruction Following (2025)
- MixAssist: An Audio-Language Dataset for Co-Creative AI Assistance in Music Mixing (2025)
- LeVo: High-Quality Song Generation with Multi-Preference Alignment (2025)
- Video-Guided Text-to-Music Generation Using Public Domain Movie Collections (2025)
- SynthesizeMe! Inducing Persona-Guided Prompts for Personalized Reward Models in LLMs (2025)
- SonicVerse: Multi-Task Learning for Music Feature-Informed Captioning (2025)
- Aligning Large Language Models with Implicit Preferences from User-Generated Content (2025)