Transformers Library KeyError: 'dinov3_vit' Model Not Recognized in Version 4.55.2
Code Snippet:
from transformers import pipeline
from transformers.image_utils import load_image
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
image = load_image(url)
feature_extractor = pipeline(
    model="facebook/dinov3-vit7b16-pretrain-lvd1689m",
    task="image-feature-extraction",
)
features = feature_extractor(image)
I have installed the latest release of Transformers (transformers==4.55.2), but I get the following error:
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
/usr/local/lib/python3.11/dist-packages/transformers/models/auto/configuration_auto.py in from_pretrained(cls, pretrained_model_name_or_path, **kwargs)
1270 try:
-> 1271 config_class = CONFIG_MAPPING[config_dict["model_type"]]
1272 except KeyError:
KeyError: 'dinov3_vit'
During handling of the above exception, another exception occurred:
ValueError Traceback (most recent call last)
/usr/local/lib/python3.11/dist-packages/transformers/models/auto/configuration_auto.py in from_pretrained(cls, pretrained_model_name_or_path, **kwargs)
1271 config_class = CONFIG_MAPPING[config_dict["model_type"]]
1272 except KeyError:
-> 1273 raise ValueError(
1274 f"The checkpoint you are trying to load has model type `{config_dict['model_type']}` "
1275 "but Transformers does not recognize this architecture. This could be because of an "
ValueError: The checkpoint you are trying to load has model type `dinov3_vit` but Transformers does not recognize this architecture. This could be because of an issue with the checkpoint, or because your version of Transformers is out of date.
You can update Transformers with the command `pip install --upgrade transformers`. If this does not work, and the checkpoint is very new, then there may not be a release version that supports this model yet. In this case, you can get the most up-to-date code by installing Transformers from source with the command `pip install git+https://github.com/huggingface/transformers.git`
Loading the model directly doesn't work either:
# Load model directly
from transformers import AutoImageProcessor, AutoModel
processor = AutoImageProcessor.from_pretrained("facebook/dinov3-vit7b16-pretrain-lvd1689m")
model = AutoModel.from_pretrained("facebook/dinov3-vit7b16-pretrain-lvd1689m")
Error Thrown:
`use_fast` is set to `True` but the image processor class does not have a fast version. Falling back to the slow version.
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
/tmp/ipython-input-457276300.py in <cell line: 0>()
2 from transformers import AutoImageProcessor, AutoModel
3
----> 4 processor = AutoImageProcessor.from_pretrained("facebook/dinov3-vit7b16-pretrain-lvd1689m")
5 model = AutoModel.from_pretrained("facebook/dinov3-vit7b16-pretrain-lvd1689m")
/usr/local/lib/python3.11/dist-packages/transformers/models/auto/image_processing_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs)
613 "This image processor cannot be instantiated. Please make sure you have `Pillow` installed."
614 )
--> 615 raise ValueError(
616 f"Unrecognized image processor in {pretrained_model_name_or_path}. Should have a "
617 f"`image_processor_type` key in its {IMAGE_PROCESSOR_NAME} of {CONFIG_NAME}, or one of the following "
ValueError: Unrecognized image processor in facebook/dinov3-vit7b16-pretrain-lvd1689m. Should have a `image_processor_type` key in its preprocessor_config.json of config.json, or one of the following `model_type` keys in its config.json: aimv2, aimv2_vision_model, align, aria, beit, bit, blip, blip-2, bridgetower, chameleon, chinese_clip, clip, clipseg, cohere2_vision, conditional_detr, convnext, convnextv2, cvt, data2vec-vision, deepseek_vl, deepseek_vl_hybrid, deformable_detr, deit, depth_anything, depth_pro, deta, detr, dinat, dinov2, donut-swin, dpt, efficientformer, efficientloftr, efficientnet, eomt, flava, focalnet, fuyu, gemma3, gemma3n, git, glm4v, glpn, got_ocr2, grounding-dino, groupvit, hiera, idefics, idefics2, idefics3, ijepa, imagegpt, instructblip, instructblipvideo, janus, kosmos-2, layoutlmv2, layoutlmv3, levit, lightglue, llama4, llava, llava_next, llava_next_video, llava_onevision, mask2former, maskformer, mgp-str, mistral3, mlcd, mllama, mm-grounding-dino, mobilenet_v1, mobilenet_v2, mobilevit, mobilevitv2, nat, nougat, oneformer, owlv2, owlvit, paligemma, perceiver, perception_lm, phi4_multimodal, pix2struct, pixtral, poolformer, prompt_depth_anything, pvt, pvt_v2, qwen2_5_vl, qwen2_vl, regnet, resnet, rt_detr, sam, sam_hq, segformer, seggpt, shieldgemma2, siglip, siglip2, smolvlm, superglue, superpoint, swiftformer, swin, swin2sr, swinv2, table-transformer, timesformer, timm_wrapper, tvlt, tvp, udop, upernet, van, videomae, vilt, vipllava,...
You need to install Transformers from source: pip install git+https://github.com/huggingface/transformers.git
The Transformers version needs to be 4.56.0.dev; DINOv3 support is not in the 4.55.2 release.
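For reference, a minimal sketch of verifying the source install and retrying the original snippet (assuming the runtime is restarted after the upgrade so the new version is actually picked up):

# Install the dev build first (shell or notebook cell):
#   pip install git+https://github.com/huggingface/transformers.git
# Then restart the runtime and confirm the version before retrying.
import transformers
print(transformers.__version__)  # should print a 4.56.0.dev build, not 4.55.2

from transformers import pipeline
from transformers.image_utils import load_image

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
image = load_image(url)

# Same call as before; with the dev build the `dinov3_vit` model type is recognized.
feature_extractor = pipeline(
    model="facebook/dinov3-vit7b16-pretrain-lvd1689m",
    task="image-feature-extraction",
)
features = feature_extractor(image)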
I have a somewhat different error: Access to model facebook/dinov3-vith16plus-pretrain-lvd1689m is restricted and you are not in the authorized list.
This happens even though I was granted access to the gated model, and I have previously been able to access other gated models such as the Llama models and MedGemma.
@DrUkachi Please refer to the pinned discussion for this common issue.
@alitarakzai That's a different problem, then. Please refer to the models download FAQ, and if your question isn't answered there, create another issue. Thanks!
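If it helps as a quick check before opening a new issue: a common cause of that "restricted" error is an unauthenticated environment, or a token belonging to a different account than the one that was granted access. A minimal sketch (assumption: the access request on the model page was already approved):

# Check that the environment is authenticated with the account that was granted access.
from huggingface_hub import login, whoami

login()  # or set the HF_TOKEN environment variable / pass token="hf_..."
print(whoami()["name"])  # should be the account that requested access

# Loading the gated checkpoint also requires the source install of Transformers
# mentioned above, since DINOv3 support is not in the 4.55.2 release.
from transformers import AutoModel

model = AutoModel.from_pretrained("facebook/dinov3-vith16plus-pretrain-lvd1689m")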