---
configs:
- config_name: default
  data_files:
  - split: train
    path: "activitynet_captions_train_rewritten.json"
  - split: val1
    path: "activitynet_captions_val1_rewritten.json"
  - split: val2
    path: "activitynet_captions_val2_rewritten.json"
task_categories:
- text-to-video
- text-retrieval
- video-classification
language:
- en
size_categories:
- 10K<n<100K
---

## About

[ActivityNet Captions](https://openaccess.thecvf.com/content_iccv_2017/html/Krishna_Dense-Captioning_Events_in_ICCV_2017_paper.html) contains 20K long-form YouTube videos (about 180 seconds long on average) paired with 100K captions; most videos have more than three annotated events. Following prior work, we concatenate each video's short temporal descriptions into a single long paragraph and evaluate paragraph-to-video retrieval on this benchmark.

We adopt the official split (a loading sketch follows the list):
- **Train:** 10,009 videos, 10,009 captions (concatenated from 37,421 short captions)
- **Test (Val1):** 4,917 videos, 4,917 captions (concatenated from 17,505 short captions)
- **Val2:** 4,885 videos, 4,885 captions (concatenated from 17,031 short captions)
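
Because the `configs` front matter maps each split to its JSON file, the captions should be loadable directly with the Hugging Face `datasets` library. The sketch below is a minimal example under two assumptions: the repository id (`<user>/<repo>`) is a placeholder for wherever this dataset is hosted, and the JSON files are in a format the built-in `json` builder accepts.

```python
from datasets import load_dataset

# Placeholder repository id; replace with this dataset's actual Hub id.
ds = load_dataset("<user>/<repo>")

# Split names (train, val1, val2) come from the `configs` front matter above.
# Expected sizes per this card: train 10,009 / val1 4,917 / val2 4,885 captions.
for split_name, split in ds.items():
    print(split_name, len(split))
```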
---

## Get Raw Videos

```bash
cat ActivityNet_Videos.tar.part-* | tar -vxf -
```
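
After extraction, a quick file count is a simple sanity check against the split sizes above (10,009 + 4,917 + 4,885 = 19,811 videos in total). This is only a sketch: the extraction directory name and the file extensions below are assumptions, not guaranteed by the archive.

```python
from pathlib import Path

# Assumed extraction directory; adjust to wherever the tar parts unpack.
video_dir = Path("ActivityNet_Videos")

# Assumed container formats; extend the tuple if other extensions appear.
videos = [p for ext in (".mp4", ".mkv", ".webm") for p in video_dir.rglob(f"*{ext}")]

print(f"Found {len(videos)} video files")
```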
---

## Official Release

ActivityNet Official Release: [ActivityNet Download](http://activity-net.org/download.html)

---

## 🌟 Citation

```bibtex
@inproceedings{caba2015activitynet,
  title={{ActivityNet}: A large-scale video benchmark for human activity understanding},
  author={Caba Heilbron, Fabian and Escorcia, Victor and Ghanem, Bernard and Carlos Niebles, Juan},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2015}
}
```