---
library_name: transformers
license: apache-2.0
datasets:
- HuggingFaceM4/the_cauldron
- HuggingFaceM4/Docmatix
- lmms-lab/LLaVA-OneVision-Data
- lmms-lab/M4-Instruct-Data
- HuggingFaceFV/finevideo
- MAmmoTH-VL/MAmmoTH-VL-Instruct-12M
- lmms-lab/LLaVA-Video-178K
- orrzohar/Video-STaR
- Mutonix/Vript
- TIGER-Lab/VISTA-400K
- Enxin/MovieChat-1K_train
- ShareGPT4Video/ShareGPT4Video
pipeline_tag: image-text-to-text
language:
- en
base_model: HuggingFaceTB/SmolVLM2-500M-Video-Instruct
tags:
- openvino
- nncf
- 8-bit
---

This model is an 8-bit weight-only quantized version of [`HuggingFaceTB/SmolVLM2-500M-Video-Instruct`](https://huggingface.co/HuggingFaceTB/SmolVLM2-500M-Video-Instruct), converted to the OpenVINO format. It was obtained via the [nncf-quantization](https://huggingface.co/spaces/echarlaix/nncf-quantization) space with [optimum-intel](https://github.com/huggingface/optimum-intel).
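
For reference, a roughly equivalent export can be reproduced locally with the `optimum-cli` tool from `optimum-intel`. This is a sketch, not the exact command used by the space; the output directory name is arbitrary:

```bash
# Export the base model to OpenVINO IR with 8-bit weight-only quantization
# (output directory name is a placeholder)
optimum-cli export openvino \
  --model HuggingFaceTB/SmolVLM2-500M-Video-Instruct \
  --weight-format int8 \
  smolvlm2-500m-video-instruct-ov-int8
```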

First make sure you have `optimum-intel` installed:

```bash
pip install optimum[openvino]
```

You can then load the model as follows:

```python
from optimum.intel import OVModelForVisualCausalLM

model_id = "echarlaix/SmolVLM2-500M-Video-Instruct-openvino-8bit-woq-data-free"
# Downloads the OpenVINO IR and compiles it for the target device (CPU by default)
model = OVModelForVisualCausalLM.from_pretrained(model_id)
```
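
Once loaded, the model can be used like its `transformers` counterpart. A minimal inference sketch, assuming the standard SmolVLM2 chat-template API from `transformers` (the image URL is a placeholder):

```python
from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained(model_id)

# Build a chat-style prompt mixing an image and a text instruction
# (image URL below is a placeholder; substitute your own)
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://example.com/image.jpg"},
            {"type": "text", "text": "Can you describe this image?"},
        ],
    },
]

inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
)

# OpenVINO models run on CPU by default; no .to(device) call is needed
generated_ids = model.generate(**inputs, max_new_tokens=64)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```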