Update model card with Mono-InternVL-1.5 paper details and expanded information (#2)
- Update model card with Mono-InternVL-1.5 paper details and expanded information (a76dfd546961393e59fb9d5be08dbe5ced869ee1)
Co-authored-by: Niels Rogge <nielsr@users.noreply.huggingface.co>
README.md
CHANGED
---
base_model:
- internlm/internlm2-chat-1_8b
language:
- multilingual
library_name: transformers
license: mit
pipeline_tag: image-text-to-text
tags:
- internvl
- vision
- ocr
- custom_code
- moe
base_model_relation: merge
---

# Mono-InternVL-2B-S1-2

This repository contains the Mono-InternVL-2B model, specifically the checkpoint after **S1.1 concept learning** and **S1.2 semantic learning**. This model is part of the work detailed in the paper [Mono-InternVL-1.5: Towards Cheaper and Faster Monolithic Multimodal Large Language Models](https://huggingface.co/papers/2507.12566).

For more detailed information, please refer to our [**project page**](https://internvl.github.io/blog/2024-10-10-Mono-InternVL/) and [**GitHub repository**](https://github.com/OpenGVLab/mono-internvl).

## 📰 News

- **2025.7**: We introduce [**Mono-InternVL-1.5**](https://arxiv.org/abs/2507.12566), a cheaper and faster monolithic MLLM with visual attention experts, an improved training strategy (EViP++), and a fused CUDA kernel for multimodal MoE.
- **2025.3**: We release the SFT code on the LLaVA-v1.5-mix665k dataset, along with the [258M synthetic data samples](https://huggingface.co/datasets/OpenGVLab/Mono-InternVL-2B-Synthetic-Data) used in S1.2, to boost future research.
- **2025.2**: 🎉🎉 Mono-InternVL is accepted by **CVPR 2025**. Also check out our [**SynerGen-VL**](https://huggingface.co/papers/2412.09604) (CVPR 2025), which extends the monolithic structure to unified image generation and multimodal understanding and will be open-sourced soon.
- **2024.11**: Mono-InternVL is supported by [lmdeploy](https://github.com/InternLM/lmdeploy/pull/2727).
- **2024.11**: Mono-InternVL is supported by [vllm](https://github.com/vllm-project/vllm/pull/9528).

## ⚡️ Introduction

We release Mono-InternVL, a **monolithic** multimodal large language model (MLLM) that integrates visual encoding and textual decoding into a single LLM. In Mono-InternVL, a set of visual experts is embedded into the pre-trained LLM via a **mixture-of-experts (MoE) mechanism**. By freezing the LLM, Mono-InternVL ensures that visual capabilities are optimized without compromising the pre-trained language knowledge. Built on this structure, an innovative **Endogenous Visual Pre-training (EViP)** approach is introduced to realize coarse-to-fine visual learning.
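
To picture the token routing, here is a minimal sketch of a modality-routed FFN. It is an illustration only, not the repository's implementation, and every name in it is invented: the frozen pre-trained FFN acts as the text expert, while a trainable copy acts as the visual expert.

```python
import torch
import torch.nn as nn

class ModalityRoutedFFN(nn.Module):
    """Toy multimodal MoE FFN: text tokens use the frozen pre-trained FFN,
    visual tokens use a newly added, trainable visual expert."""

    def __init__(self, d_model, d_ff):
        super().__init__()
        self.text_expert = nn.Sequential(
            nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
        self.visual_expert = nn.Sequential(
            nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
        # Freeze the text expert so pre-trained language knowledge is untouched.
        for p in self.text_expert.parameters():
            p.requires_grad = False

    def forward(self, hidden_states, visual_mask):
        # hidden_states: (batch, seq_len, d_model); visual_mask: (batch, seq_len) bool
        text_out = self.text_expert(hidden_states)
        visual_out = self.visual_expert(hidden_states)
        return torch.where(visual_mask.unsqueeze(-1), visual_out, text_out)
```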

Mono-InternVL achieves superior performance compared to the state-of-the-art modular MLLM Mini-InternVL-2B-1.5 and significantly outperforms other monolithic MLLMs, as the benchmark table below shows. Meanwhile, it achieves better deployment efficiency, with first-token latency reduced by up to 67%.

For more details, please refer to our [paper (V1)](https://arxiv.org/abs/2410.08202) and [paper (V1.5)](https://arxiv.org/abs/2507.12566).

## 📊 Performance

| Benchmark | Chameleon-7B | EVE-7B (HD) | Emu3 | Mini-InternVL-2B-1.5 | Mono-InternVL-2B |
| :--------------------------: | :----------: | :---------: | :--------: | :------------------: | :--------------: |
| Type | Monolithic | Monolithic | Monolithic | Modular | Monolithic |
| #Activated Params | 7B | 7B | 8B | 2.2B | 1.8B |
| | | | | | |
| MMVet | 8.3 | 25.7 | 37.2 | 39.3 | 40.1 |
| MMMU<sub>val</sub> | 25.4 | 32.6 | 31.6 | 34.6 | 33.7 |
| MME<sub>sum</sub> | 170 | 1628 | – | 1902 | 1875 |
| MMBench-EN<sub>test</sub> | 31.1 | 52.3 | 58.5 | 70.9 | 65.5 |
| MathVista<sub>testmini</sub> | 22.3 | 34.2 | – | 41.1 | 45.7 |
| SEED-Image | 30.6 | 64.6 | 68.2 | 69.8 | 67.4 |
| OCRBench | 7 | 398 | 687 | 654 | 767 |
| HallusionBench | 17.1 | 26.4 | – | 37.5 | 34.8 |
| CCBench<sub>dev</sub> | 3.5 | 16.3 | – | 63.5 | 66.3 |
| Avg<sub>multimodal</sub> | 16.1 | 38.9 | – | 54.4 | 55.2 |
| | | | | | |
| TextVQA<sub>val</sub> | 4.8 | 56.8 | 64.7 | 70.5 | 72.6 |
| SQA-I<sub>test</sub> | 47.2 | 64.9 | 89.2 | 84.9 | 93.6 |
| GQA<sub>test</sub> | – | 62.6 | 60.3 | 61.6 | 59.5 |
| DocVQA<sub>test</sub> | 1.5 | 53.0 | 76.3 | 85.0 | 80.0 |
| AI2D<sub>test</sub> | 46.0 | 61.0 | 70.0 | 69.8 | 68.6 |
| ChartQA<sub>test</sub> | 2.9 | 59.1 | 68.6 | 74.8 | 73.7 |
| InfoVQA<sub>test</sub> | 5.0 | 25.0 | 43.8 | 55.4 | 43.0 |
| Avg<sub>VQA</sub> | 17.9 | 54.6 | 67.6 | 71.7 | 70.1 |

> * Sources of the results include the original papers, our evaluation with [VLMEvalKit](https://github.com/open-compass/VLMEvalKit), and [OpenCompass](https://rank.opencompass.org.cn/leaderboard-multimodal/?m=REALTIME).
> * Average scores are computed by normalizing each metric to a range between 0 and 100.
> * Please note that evaluating the same model using different testing toolkits can result in slight differences, which is normal. Updates to code versions and variations in environment and hardware can also cause minor discrepancies in results.

## 🚀 Inference

We provide example code to run Mono-InternVL-2B inference using `transformers`.

> Please use `transformers==4.37.2` to ensure the model works normally.

<details>
<summary>Inference with Transformers (click to expand)</summary>

```python
import torch
import torchvision.transforms as T
from PIL import Image
from torchvision.transforms.functional import InterpolationMode
from transformers import AutoModel, AutoTokenizer

IMAGENET_MEAN = (0.485, 0.456, 0.406)
IMAGENET_STD = (0.229, 0.224, 0.225)

def build_transform(input_size):
    MEAN, STD = IMAGENET_MEAN, IMAGENET_STD
    transform = T.Compose([
        T.Lambda(lambda img: img.convert('RGB') if img.mode != 'RGB' else img),
        T.Resize((input_size, input_size), interpolation=InterpolationMode.BICUBIC),
        T.ToTensor(),
        T.Normalize(mean=MEAN, std=STD)
    ])
    return transform

def find_closest_aspect_ratio(aspect_ratio, target_ratios, width, height, image_size):
    best_ratio_diff = float('inf')
    best_ratio = (1, 1)
    area = width * height
    for ratio in target_ratios:
        target_aspect_ratio = ratio[0] / ratio[1]
        ratio_diff = abs(aspect_ratio - target_aspect_ratio)
        if ratio_diff < best_ratio_diff:
            best_ratio_diff = ratio_diff
            best_ratio = ratio
        elif ratio_diff == best_ratio_diff:
            if area > 0.5 * image_size * image_size * ratio[0] * ratio[1]:
                best_ratio = ratio
    return best_ratio

def dynamic_preprocess(image, min_num=1, max_num=12, image_size=448, use_thumbnail=False):
    orig_width, orig_height = image.size
    aspect_ratio = orig_width / orig_height

    # generate the candidate tiling grids for the existing image aspect ratio
    target_ratios = set(
        (i, j) for n in range(min_num, max_num + 1) for i in range(1, n + 1) for j in range(1, n + 1) if
        i * j <= max_num and i * j >= min_num)
    target_ratios = sorted(target_ratios, key=lambda x: x[0] * x[1])

    # find the closest aspect ratio to the target
    target_aspect_ratio = find_closest_aspect_ratio(
        aspect_ratio, target_ratios, orig_width, orig_height, image_size)

    # calculate the target width and height
    target_width = image_size * target_aspect_ratio[0]
    target_height = image_size * target_aspect_ratio[1]
    blocks = target_aspect_ratio[0] * target_aspect_ratio[1]

    # resize the image
    resized_img = image.resize((target_width, target_height))
    processed_images = []
    for i in range(blocks):
        box = (
            (i % (target_width // image_size)) * image_size,
            (i // (target_width // image_size)) * image_size,
            ((i % (target_width // image_size)) + 1) * image_size,
            ((i // (target_width // image_size)) + 1) * image_size
        )
        # split the image into tiles
        split_img = resized_img.crop(box)
        processed_images.append(split_img)
    assert len(processed_images) == blocks
    if use_thumbnail and len(processed_images) != 1:
        thumbnail_img = image.resize((image_size, image_size))
        processed_images.append(thumbnail_img)
    return processed_images

def load_image(image_file, input_size=448, max_num=12):
    image = Image.open(image_file).convert('RGB')
    transform = build_transform(input_size=input_size)
    images = dynamic_preprocess(image, image_size=input_size, use_thumbnail=True, max_num=max_num)
    pixel_values = [transform(image) for image in images]
    pixel_values = torch.stack(pixel_values)
    return pixel_values


path = 'OpenGVLab/Mono-InternVL-2B'
model = AutoModel.from_pretrained(
    path,
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    trust_remote_code=True).eval().cuda()
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True, use_fast=False)

# set the max number of tiles in `max_num`
pixel_values = load_image('./examples/image1.jpg', max_num=12).to(torch.bfloat16).cuda()
generation_config = dict(max_new_tokens=1024, do_sample=True)

# pure-text conversation
question = 'Hello, who are you?'
response, history = model.chat(tokenizer, None, question, generation_config, history=None, return_history=True)
print(f'User: {question}\nAssistant: {response}')

question = 'Can you tell me a story?'
response, history = model.chat(tokenizer, None, question, generation_config, history=history, return_history=True)
print(f'User: {question}\nAssistant: {response}')

# single-image single-round conversation
question = '<image>\nPlease describe the image shortly.'
response = model.chat(tokenizer, pixel_values, question, generation_config)
print(f'User: {question}\nAssistant: {response}')

# single-image multi-round conversation
question = '<image>\nPlease describe the image in detail.'
response, history = model.chat(tokenizer, pixel_values, question, generation_config, history=None, return_history=True)
print(f'User: {question}\nAssistant: {response}')

question = 'Please write a poem according to the image.'
response, history = model.chat(tokenizer, pixel_values, question, generation_config, history=history, return_history=True)
print(f'User: {question}\nAssistant: {response}')
```

</details>

<details>
<summary>Inference with LMDeploy</summary>

Please install `lmdeploy>=0.6.3` for Mono-InternVL support.

```python
from lmdeploy import pipeline
from lmdeploy.vl import load_image

image = load_image('./examples/image1.jpg')
pipe = pipeline('OpenGVLab/Mono-InternVL-2B')
response = pipe(('Please describe the image shortly.', image))
print(response.text)
```
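
The pipeline also accepts a list of prompts for batched inference. A small sketch (illustrative only; `image2.jpg` is a placeholder, substitute any local images):

```python
from lmdeploy import pipeline
from lmdeploy.vl import load_image

pipe = pipeline('OpenGVLab/Mono-InternVL-2B')

# Batched inference: pass a list of (prompt, image) pairs.
images = [load_image('./examples/image1.jpg'), load_image('./examples/image2.jpg')]
responses = pipe([('Please describe the image shortly.', img) for img in images])
for r in responses:
    print(r.text)
```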

</details>

## 🔥 Supervised Finetuning

Currently, we provide the supervised finetuning (S2 instruction tuning) code on the LLaVA-v1.5-mix665k dataset. For details on the dataset, please refer to [LLaVA-v1.5](https://github.com/haotian-liu/LLaVA).

<details>
<summary>Installation</summary>

- Clone this repository:

```bash
git clone https://github.com/OpenGVLab/Mono-InternVL.git
```

- Create a conda virtual environment and activate it:

```bash
conda create -n monointernvl python=3.9 -y
conda activate monointernvl
```

- Install dependencies using `requirements.txt`:

```bash
pip install -r requirements.txt
```

- Additionally, install `flash-attn==2.5.6`:

```bash
pip install flash-attn==2.5.6 --no-build-isolation
```

Alternatively, you can compile it from source:

```bash
git clone https://github.com/Dao-AILab/flash-attention.git
cd flash-attention
git checkout v2.5.6
python setup.py install
```

</details>

<details>
<summary>Dataset Preparation</summary>

#### LLaVA-v1.5-mix665k Dataset

1. Download the instruction tuning data:

```sh
mkdir playground
wget https://huggingface.co/datasets/liuhaotian/LLaVA-Instruct-150K/resolve/main/llava_v1_5_mix665k.json -P playground/
```

2. Download the image datasets:

- COCO: [train2017](http://images.cocodataset.org/zips/train2017.zip)
- GQA: [images](https://downloads.cs.stanford.edu/nlp/data/gqa/images.zip)
- OCR-VQA: [download script](https://drive.google.com/drive/folders/1_GYPY5UkUy7HIcR0zq3ZCFgeZN7BAfm_?usp=sharing)
- TextVQA: [train_val_images](https://dl.fbaipublicfiles.com/textvqa/images/train_val_images.zip)
- VisualGenome: [part1](https://cs.stanford.edu/people/rak248/VG_100K_2/images.zip), [part2](https://cs.stanford.edu/people/rak248/VG_100K_2/images2.zip)

3. Organize the data as follows:

```none
playground/
├── data/
│   ├── coco/train2017/
│   ├── gqa/images/
│   ├── ocr_vqa/images/
│   ├── textvqa/train_images/
│   └── vg/
│       ├── VG_100K/
│       └── VG_100K_2/
└── llava_v1_5_mix665k.json
```

#### Custom Dataset

For a custom dataset, format your data into a JSONL file, where each entry is a dictionary organized in the following format (similar to `llava_v1_5_mix665k.json`):

```python
{
  "id": "000000120375",
  "image": "coco/train2017/000000120375.jpg",
  "conversations": [
    {
      "from": "human",
      "value": "<image>\nWhat type of vehicle is driving down the street in the image?"
    },
    {
      "from": "gpt",
      "value": "A red sports utility vehicle (SUV) is driving down the street in the image."
    },
    {
      "from": "human",
      "value": "Is the street crowded with people?"
    },
    {
      "from": "gpt",
      "value": "Yes, the street is filled with a considerable number of people, which indicates that the area is busy."
    }
    # (more turns ...)
  ]
}
```

Then modify the metadata file `shell/data_llava_finetune.json`:

```python
{
  "name of your dataset": {
    "root": "playground/data/",  # combination of "root" and "image" in the JSONL gives the complete image path
    "annotation": "path to your JSONL",
    "data_augment": false,
    "repeat_time": 1,
    "length": 12345  # change to the actual number of samples in your dataset
  }
}
```
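
If helpful, the snippet below (illustrative, not part of the released code; the paths are placeholders) checks a JSONL annotation file against `root` and counts its samples so you can fill in the `length` field:

```python
import json
import os

# Illustrative helper: verify that every image referenced in the JSONL exists
# under `root`, and count the samples so the "length" field in
# shell/data_llava_finetune.json matches the real sample count.
def check_annotation(annotation_path, root):
    num_samples = 0
    for line in open(annotation_path, encoding='utf-8'):
        entry = json.loads(line)
        image_path = os.path.join(root, entry['image'])  # "root" + "image" = complete path
        assert os.path.exists(image_path), f'missing image: {image_path}'
        num_samples += 1
    return num_samples

print(check_annotation('path/to/your_dataset.jsonl', 'playground/data/'))
```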

</details>

<details>
<summary>Model Preparation</summary>

We provide pretrained models from the different pre-training stages (S1.1 concept learning, S1.2 semantic learning, S1.3 alignment learning).
Choose from the following models and download the weights to the `workdirs/` folder.

| model name | download | size |
| ----------------------- | ---------------------------------------------------------------------- | :------: |
| Mono-InternVL-2B-S1-1 | 🤗 [HF link](https://huggingface.co/OpenGVLab/Mono-InternVL-2B-S1-1) | 6.2 GB |
| Mono-InternVL-2B-S1-2 | 🤗 [HF link](https://huggingface.co/OpenGVLab/Mono-InternVL-2B-S1-2) | 6.2 GB |
| Mono-InternVL-2B-S1-3 | 🤗 [HF link](https://huggingface.co/OpenGVLab/Mono-InternVL-2B-S1-3) | 6.2 GB |

```sh
mkdir workdirs
cd workdirs/
# pip install -U huggingface_hub
huggingface-cli download --resume-download --local-dir-use-symlinks False OpenGVLab/Mono-InternVL-2B-S1-1 --local-dir Mono-InternVL-2B-S1-1
```
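
Equivalently, you can fetch a checkpoint from Python with `huggingface_hub` (a sketch; pick the repo ID for the stage you need from the table above):

```python
from huggingface_hub import snapshot_download

# Download one pre-training stage checkpoint into workdirs/.
snapshot_download(
    repo_id='OpenGVLab/Mono-InternVL-2B-S1-1',
    local_dir='workdirs/Mono-InternVL-2B-S1-1',
)
```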

The directory structure is:

```sh
workdirs/
├── Mono-InternVL-2B-S1-1/
├── Mono-InternVL-2B-S1-2/
└── Mono-InternVL-2B-S1-3/
```

</details>

<details>
<summary>Training</summary>

Finetuning takes around 12 hours on 8x A100 (80G) GPUs.

#### Single Node Multi-GPU

```sh
MODEL="./workdirs/Mono-InternVL-2B-S1-3" OUTPUT_DIR="./workdirs/mono_internvl_llava_sft" sh shell/mono_internvl_finetune_llava_torchrun.sh
```

#### Slurm Cluster

```sh
PARTITION="your partition" MODEL="./workdirs/Mono-InternVL-2B-S1-3" OUTPUT_DIR="./workdirs/mono_internvl_llava_sft" sh shell/mono_internvl_finetune_llava_slurm.sh
```

</details>

## 📫 License

This project is released under the [MIT License](LICENSE).

## 🖊️ Citation

If you find this work helpful in your research, please consider giving this repo a star ⭐ and citing our papers:

```bibtex
@article{mono_internvl_v1,
  title={Mono-InternVL: Pushing the Boundaries of Monolithic Multimodal Large Language Models with Endogenous Visual Pre-training},
  author={Luo, Gen and Yang, Xue and Dou, Wenhan and Wang, Zhaokai and Liu, Jiawen and Dai, Jifeng and Qiao, Yu and Zhu, Xizhou},
  journal={arXiv preprint arXiv:2410.08202},
  year={2024}
}

@article{mono_internvl_v1.5,
  title={Mono-InternVL-1.5: Towards Cheaper and Faster Monolithic Multimodal Large Language Models},
  author={Luo, Gen and Dou, Wenhan and Li, Wenhao and Wang, Zhaokai and Yang, Xue and Tian, Changyao and Li, Hao and Wang, Weiyun and Wang, Wenhai and Zhu, Xizhou and Qiao, Yu and Dai, Jifeng},
  journal={arXiv preprint arXiv:2507.12566},
  year={2025}
}
```