license: apache-2.0
Dataset Card for MultiVENT 2.0
This dataset card provides details about MultiVENT 2.0, a large-scale, multilingual, event-centric video retrieval benchmark comprising more than 218,000 news videos and over 3,900 queries targeting specific world events.
Dataset Details
Dataset Description
MultiVENT 2.0 consists of over 218,000 videos, with 108,500 videos for training (MultiVENT Train) and 109,800 for testing (MultiVENT Test).
The collection contains all 2,400 videos from the original MultiVENT dataset, a carefully curated set of Multilingual Videos of Events with aligned Natural Text, augmented with a subset of videos from InternVid, a corpus of more than seven million YouTube videos and over 760,000 hours of content.
- Created by: The Human Language Technology Center of Excellence and Johns Hopkins University
- Language(s) (NLP): Arabic, Chinese, English, Korean, Russian, Spanish
- License: apache-2.0
Download instructions
The dataset is hosted on Hugging Face. However, the videos cannot be accessed through the datasets library because they are packaged as tar archives. Instead, you need to download the dataset locally and then untar the videos (and the audio files, if you use those).
Step 1: Install git-lfs
First, make sure that git-lfs is installed; otherwise you won't be able to pull the video and audio tar files.
git lfs install
Step 2: Clone the dataset
After enabling git-lfs, you can pull the dataset from Hugging Face.
git clone https://huggingface.co/datasets/hltcoe/MultiVENT2.0
Using tmux is recommended, as downloading all videos will take a while.
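Once the clone finishes, the tar archives need to be extracted. A minimal sketch of the extraction step, assuming the video tars sit under a `videos/` directory inside the clone (the `videos/*.tar` and `extracted_videos/` names are assumptions; adjust them to the actual repository layout):

```shell
# Extract every video tar into a single output directory.
# NOTE: videos/*.tar and extracted_videos/ are assumed names; adjust
# them to match the layout of the cloned repository.
mkdir -p extracted_videos
for f in videos/*.tar; do
    [ -e "$f" ] || continue   # skip if the glob matched nothing
    tar -xf "$f" -C extracted_videos/
done
```

The same loop works for the audio tars if you need them; point the glob and output directory at the corresponding paths.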
Dataset Sources
- Repository: https://github.com/katesanders9/multiVENT
- Paper: https://arxiv.org/abs/2410.11619
- Workshop on Multimodal Augmented Generation via Multimodal Retrieval (MAGMaR): https://nlp.jhu.edu/magmar/
Evaluation On Test Set (UPDATED 8/6)
The relevance judgments for the final corpus are provided in multivent_2_test_judgments.jsonl. Several example ranked lists from the baseline systems are also provided in the ranked_lists/ folder.
The official evaluation script is evaluate_multivent_test.py. Note that you will need to install the ir_measures and numpy packages.
## Example call using a baseline ranked list
python evaluate_multivent_test.py \
-test_annotation_file multivent_2_test_judgments.jsonl \
-user_annotation_file ranked_lists/10pyscene_clip.json
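The script scores ranked lists against the graded judgments using ir_measures. For intuition about what such scoring involves, here is a self-contained sketch of nDCG, a standard measure for graded relevance judgments; this function is purely illustrative and is not the official implementation:

```python
import math

def ndcg_at_k(relevances, k=10):
    """Compute nDCG@k for a single query.

    relevances: graded relevance labels of the retrieved videos,
    listed in the rank order produced by the retrieval system.
    """
    # Discounted cumulative gain of the system's ordering.
    dcg = sum(rel / math.log2(rank + 2)
              for rank, rel in enumerate(relevances[:k]))
    # Ideal DCG: the same labels in the best possible order.
    ideal = sorted(relevances, reverse=True)
    idcg = sum(rel / math.log2(rank + 2)
               for rank, rel in enumerate(ideal[:k]))
    return dcg / idcg if idcg > 0 else 0.0

print(ndcg_at_k([3, 2, 1, 0]))  # perfectly ordered ranking -> 1.0
print(ndcg_at_k([0, 1, 2, 3]))  # worst ordering of the same labels
```

A ranking that places the most relevant videos first scores 1.0; misordering relevant videos lowers the score, with mistakes near the top penalized most.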
Citations
If you publish work using this dataset, please cite the following works:
BibTeX:
@misc{kriz2025multivent20massivemultilingual,
title={MultiVENT 2.0: A Massive Multilingual Benchmark for Event-Centric Video Retrieval},
author={Reno Kriz and Kate Sanders and David Etter and Kenton Murray and Cameron Carpenter and Kelly Van Ochten and Hannah Recknor and Jimena Guallar-Blasco and Alexander Martin and Ronald Colaianni and Nolan King and Eugene Yang and Benjamin Van Durme},
year={2025},
eprint={2410.11619},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2410.11619},
}
@misc{sanders2023multiventmultilingualvideosevents,
title={MultiVENT: Multilingual Videos of Events with Aligned Natural Text},
author={Kate Sanders and David Etter and Reno Kriz and Benjamin Van Durme},
year={2023},
eprint={2307.03153},
archivePrefix={arXiv},
primaryClass={cs.IR},
url={https://arxiv.org/abs/2307.03153},
}
Dataset Card Contact
Please feel free to reach out to the MAGMaR Workshop organizers with any questions or comments: magmar@lists.jh.edu.