Issue: ValueError when loading DotsOCR model with vLLM - Cannot find model module

#5
by ChangHyunBae - opened

Problem Description

When trying to deploy the DotsOCR model with vLLM in Docker, I encounter the following error:

ValueError: Cannot find model module. 'DotsOCRForCausalLM' is not a registered model in the Transformers library (only relevant if the model is meant to be in Transformers) and 'AutoModel' is not present in the model config's 'auto_map' (relevant if the model is custom).
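
This error comes from the architecture lookup: vLLM (via Transformers) reads config.json in the model directory and resolves the class listed under "architectures" either against the built-in Transformers model registry or against an "auto_map" entry for custom code. A minimal sketch for inspecting what the loader sees (the path mirrors hf_model_path from the Dockerfile below):

import json
import os

model_path = "./weights/DotsOCR"  # same path as hf_model_path in the Dockerfile

# Read the Hugging Face config that the architecture lookup consults.
with open(os.path.join(model_path, "config.json")) as f:
    cfg = json.load(f)

print("architectures:", cfg.get("architectures"))  # per the error, lists DotsOCRForCausalLM
print("auto_map:", cfg.get("auto_map"))            # custom-code mapping, if any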

Setup Details

  • vLLM version: 0.9.1 (using rednotehilab/dots.ocr:vllm-openai-v0.9.1)
  • Docker: Yes
  • Model: DotsOCR downloaded via tools/download_model.py

Configuration

Dockerfile:

FROM rednotehilab/dots.ocr:vllm-openai-v0.9.1

# Clone repo
RUN git clone https://github.com/rednote-hilab/dots.ocr.git /app
WORKDIR /app

# Download model
RUN python3 tools/download_model.py

# Environment setup (path is relative to WORKDIR /app)
ENV hf_model_path=./weights/DotsOCR
ENV PYTHONPATH="/app:/app/dots_ocr:/app/weights:/app/weights/DotsOCR:$PYTHONPATH"

# Register model with vLLM by injecting the import into the vllm
# console script (see the registry sketch after this Dockerfile)
RUN sed -i '/^from vllm\.entrypoints\.cli\.main import main$/a\
from DotsOCR import modeling_dots_ocr_vllm' $(which vllm)

ENV TZ=Asia/Seoul
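
The sed line above works by injecting the import into the vllm console script, so the custom model registers itself with vLLM before the server starts. An alternative that avoids patching the script is a small launcher that performs the registration explicitly. A minimal sketch, assuming the repo's DotsOCR package is on PYTHONPATH and that modeling_dots_ocr_vllm either registers the model on import (which is what the sed approach relies on) or exposes a DotsOCRForCausalLM class (a name taken from the error message, not verified here):

# run_server.py - hypothetical launcher, equivalent to the sed injection above
from vllm import ModelRegistry
from DotsOCR import modeling_dots_ocr_vllm  # importing may already register the model

# If the import alone did not register the architecture, do it explicitly.
if "DotsOCRForCausalLM" not in ModelRegistry.get_supported_archs():
    ModelRegistry.register_model(
        "DotsOCRForCausalLM", modeling_dots_ocr_vllm.DotsOCRForCausalLM
    )

from vllm.entrypoints.cli.main import main  # the entrypoint the sed line targets

if __name__ == "__main__":
    main()  # parses sys.argv, so run: python3 run_server.py serve <model> <flags>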

vLLM launch command:

vllm serve ${hf_model_path} --tensor-parallel-size 1 --trust-remote-code --gpu-memory-utilization 0.7 --served-model-name dotsocr --chat-template-content-format string
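
Once the server is up, a quick smoke test against the OpenAI-compatible endpoint (default port 8000; the model name must match --served-model-name) might look like this sketch, assuming the openai client package is installed:

# smoke_test.py - minimal sketch of an OpenAI-compatible request
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

resp = client.chat.completions.create(
    model="dotsocr",  # matches --served-model-name
    messages=[{"role": "user", "content": "ping"}],
)
print(resp.choices[0].message.content)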

What I've tried:

  1. ✅ Verified model files exist in /app/weights/DotsOCR/
  2. ✅ Verified PYTHONPATH includes model directory
  3. ✅ Successfully ran "from DotsOCR import modeling_dots_ocr_vllm" in a manual Python session (see the registry check below)
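
One more thing worth checking is whether the registration actually happens in the interpreter that runs the server: the manual import can succeed in a shell while the patched vllm script never executes it (for example, if which vllm resolved to a different file at build time than at run time). A minimal diagnostic sketch:

# check_registry.py - hypothetical diagnostic
from vllm import ModelRegistry

print("before:", "DotsOCRForCausalLM" in ModelRegistry.get_supported_archs())

import DotsOCR.modeling_dots_ocr_vllm  # noqa: F401 - should trigger registration

print("after:", "DotsOCRForCausalLM" in ModelRegistry.get_supported_archs())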

vLLM or SGLang support, please!

redmoe-ai-v1 changed discussion status to closed
