JianGuanTHU committed
Commit d30eb2d · 1 Parent(s): 973737d

initialize

Files changed (2)
  1. .DS_Store +0 -0
  2. README.md +18 -0
.DS_Store CHANGED
Binary files a/.DS_Store and b/.DS_Store differ
 
README.md CHANGED
@@ -1,4 +1,18 @@
+ ---
+ title: ViLaBench
+ emoji: 🧠
+ colorFrom: yellow
+ colorTo: indigo
+ sdk: static
+ pinned: false
+ license: apache-2.0
+ short_description: Benchmark collection for Vision-Language Models (VLMs)
+ ---
+
+
+
# ViLaBench
+
This is a web project showcasing a collection of benchmarks for vision-language models.

These benchmark and result data are carefully compiled and merged from technical reports and official blogs of renowned multimodal models, including Google's Gemini series ([Gemini 2.5 Report](https://storage.googleapis.com/deepmind-media/gemini/gemini_v2_5_report.pdf)), OpenAI GPT series and OpenAI o series ([OpenAI o3 and o4-mini](https://openai.com/index/introducing-o3-and-o4-mini/)), [Seed1.5-VL](https://arxiv.org/pdf/2505.07062), [MiMo-VL](https://arxiv.org/pdf/2506.03569), [Kimi-VL](https://huggingface.co/moonshotai/Kimi-VL-A3B-Thinking-2506), [Qwen2.5-VL](https://arxiv.org/pdf/2502.13923), [InternVL3](https://arxiv.org/abs/2504.10479), and other leading models' official technical documentation.
@@ -22,15 +36,19 @@ This collection provides researchers and developers with a comprehensive, standa
## Local Usage

1. Ensure `vilabench.csv` and `index.html` are in the same directory
+
2. Use a local server to open the webpage (to avoid CORS issues):
+
```bash
python3 -m http.server 8000
```
+
3. Visit `http://localhost:8000` in your browser

## Data Format

The CSV file contains the following columns:
+
- Benchmark: Benchmark name
- URL: Paper link
- year: Publication year
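
For quick inspection outside the browser, the sketch below reads `vilabench.csv` with Python's standard library. It assumes only the three columns listed under Data Format (Benchmark, URL, year); any additional columns in the actual file would simply show up as extra keys in each row.

```python
import csv

# Minimal sketch: load the benchmark collection from vilabench.csv.
# Assumes the file is in the current directory, as required by the
# Local Usage steps, and that the header row matches the Data Format
# section (Benchmark, URL, year).
with open("vilabench.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        print(f'{row["Benchmark"]} ({row["year"]}): {row["URL"]}')
```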