
L3Cube-IndicQuest: A Benchmark Question Answering Dataset for Evaluating Knowledge of LLMs in Indic Context (LLM Factual Accuracy Benchmark)

L3Cube-IndicQuest is a dataset comprising 4,000 question-answer pairs across 20 languages: English, Assamese, Bengali, Dogri, Gujarati, Hindi, Kannada, Konkani, Maithili, Malayalam, Marathi, Meitei (Manipuri), Nepali, Odia, Punjabi, Sanskrit, Sindhi, Tamil, Telugu, and Urdu. The dataset is designed to assess how well multilingual Large Language Models (LLMs) represent knowledge of these Indic languages within the Indian context.

The dataset spans five domains: Literature, History, Geography, Politics, and Economics, with each language featuring 200 questions. The questions were originally created in English, verified for accuracy, and then translated into other languages. IndicQuest serves as a valuable benchmark for evaluating LLMs in multilingual settings. For further details, please refer to our research paper.

Dataset Structure

The dataset contains question-answer pairs. Each entry includes a question, its corresponding answer, the language of the question-answer pair, and the domain it belongs to.
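For illustration, a single entry could be represented as follows. The field names and the example pair below are assumptions for the sketch, not necessarily the exact column names or contents of the released files:

```python
# A hypothetical IndicQuest record; actual column names may differ.
sample = {
    "question": "What is the capital of Maharashtra?",  # illustrative question
    "answer": "Mumbai",                                  # ground-truth answer
    "language": "en",                                    # ISO language code
    "domain": "Geography",                               # one of the five domains
}

# Every record carries the same four fields.
print(sorted(sample.keys()))
```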

We use an LLM-as-a-judge evaluation approach, with the judging prompt available here. Because the dataset also includes ground-truth answers, comparison-based metrics such as token-level F1, ROUGE-L, and BLEU can be used for quantitative evaluation.
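As a minimal sketch of the comparison-based route, the token-level F1 between a model's answer and the ground truth can be computed as below. This is a generic implementation, not the exact scoring script used by the authors:

```python
from collections import Counter

def token_f1(prediction: str, reference: str) -> float:
    """Token-level F1 between a model answer and a ground-truth answer."""
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    if not pred_tokens or not ref_tokens:
        # Both empty -> perfect match; one empty -> no overlap.
        return float(pred_tokens == ref_tokens)
    # Count tokens shared between prediction and reference (with multiplicity).
    common = sum((Counter(pred_tokens) & Counter(ref_tokens)).values())
    if common == 0:
        return 0.0
    precision = common / len(pred_tokens)
    recall = common / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

print(token_f1("the damodar river", "damodar river"))  # partial overlap
```

ROUGE-L and BLEU can be computed analogously with standard libraries; for Indic scripts, a language-aware tokenizer generally gives more reliable scores than whitespace splitting.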

Languages

The dataset covers 20 languages:

  • English (en)
  • Assamese (as)
  • Bengali (bn)
  • Dogri (dgo)
  • Gujarati (gu)
  • Hindi (hi)
  • Kannada (kn)
  • Konkani (kok)
  • Maithili (mai)
  • Malayalam (ml)
  • Marathi (mr)
  • Meitei (Manipuri) (mni)
  • Nepali (ne)
  • Odia (or)
  • Punjabi (pa)
  • Sanskrit (sa)
  • Sindhi (sd)
  • Tamil (ta)
  • Telugu (te)
  • Urdu (ur)

Domains

The dataset covers five domains:

  • Literature
  • History
  • Geography
  • Politics
  • Economics

Citation

@article{rohera2024l3cube,
  title={L3Cube-IndicQuest: A Benchmark Question Answering Dataset for Evaluating Knowledge of LLMs in Indic Context},
  author={Rohera, Pritika and Ginimav, Chaitrali and Salunke, Akanksha and Sawant, Gayatri and Joshi, Raviraj},
  journal={arXiv preprint arXiv:2409.08706},
  year={2024}
}