Dataset viewer preview (first 5 GB): each record has three fields — `npy` (a list of float feature arrays, list lengths 4–301), `__key__` (a string such as `embodied_scan/ScanNet/img_feat/scene0315_00/440`), and `__url__` (the source archive, e.g. `hf://datasets/bigai/MTU3D@72868c02a55f89e544e7d5af63548e4f8e8bf61a/embodied_base.tar.gz.partaa`).
MTU3D Dataset
📄 Paper (MTU3D) | 🧾 Project GitHub
The MTU3D dataset provides all the data needed to reproduce the experiments in *Move to Understand a 3D Scene: Bridging Visual Grounding and Exploration for Efficient and Versatile Embodied Navigation* (ICCV 2025): stage-1 data for embodied segmentation training, features saved from stage 1, VLE stage-2 data, and embodied benchmark data.
Specifically, each `*.tar.gz` archive in this dataset corresponds to a `data.*` entry in the config file:

| .tar.gz | data.config | description |
|---|---|---|
| embodied_base.tar.gz | data.embodied_base | stage-1 data |
| embodied_feat.tar.gz | data.embodied_feat | features saved from stage 1 |
| embodied_vle.tar.gz | data.embodied_vle | VLE stage-2 data |
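For illustration only, the config entries above might point at the extracted directories like so; the key layout and paths below are assumptions for this sketch, not the project's actual config:

```yaml
data:
  embodied_base: /data/mtu3d/embodied_base   # stage-1 data (from embodied_base.tar.gz)
  embodied_feat: /data/mtu3d/embodied_feat   # stage-1 features (from embodied_feat.tar.gz)
  embodied_vle: /data/mtu3d/embodied_vle     # VLE stage-2 data (from embodied_vle.tar.gz)
```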
The `embodied_bench_data.tar.gz` archive contains the benchmark data; after extracting it, update `data_set_path` and `navigation_data_path` in `hm3d-online/*.nav.py` to point to it.
📌 The dataset is large and stored in split archives. Please download all parts, then merge and extract them before use.
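The merge-and-extract step follows the usual `cat`-then-`tar` pattern for split archives. The sketch below uses a small fabricated archive so it runs anywhere; the demo filenames are placeholders for the real parts (e.g. `embodied_base.tar.gz.partaa`, `.partab`, ...):

```shell
set -e
workdir=$(mktemp -d)
cd "$workdir"

# Fabricate a split archive the same way the real ones are produced.
echo "stage1 sample" > sample.txt
tar -czf embodied_demo.tar.gz sample.txt
split -b 512 embodied_demo.tar.gz embodied_demo.tar.gz.part   # -> .partaa, ...
rm embodied_demo.tar.gz sample.txt

# The actual merge-and-extract step (same pattern for the real archives):
cat embodied_demo.tar.gz.part* > embodied_demo.tar.gz
tar -xzf embodied_demo.tar.gz
cat sample.txt
```

Shell globbing expands `part*` in lexicographic order (`partaa`, `partab`, ...), which is exactly the order `split` created them in, so a plain `cat` reassembles the archive byte-for-byte.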
Citation:
@inproceedings{zhu2025mtu,
  title     = {Move to Understand a 3D Scene: Bridging Visual Grounding and Exploration for Efficient and Versatile Embodied Navigation},
  author    = {Zhu, Ziyu and Wang, Xilin and Li, Yixuan and Zhang, Zhuofan and Ma, Xiaojian and Chen, Yixin and Jia, Baoxiong and Liang, Wei and Yu, Qian and Deng, Zhidong and Huang, Siyuan and Li, Qing},
  booktitle = {International Conference on Computer Vision (ICCV)},
  year      = {2025}
}
License: CC-BY-4.0