"""
Document layout analysis and OCR using dots.ocr with vLLM.

This script processes document images through the dots.ocr model to extract
layout information, text content, or both. Supports multiple output formats
including JSON, structured columns, and markdown.

Features:
- Layout detection with bounding boxes and categories
- Text extraction with reading order preservation
- Multiple prompt modes for different tasks
- Flexible output formats
- Multilingual document support
"""

import argparse
import base64
import io
import json
import logging
import os
import sys
from typing import Any, Dict, List, Optional, Union

import torch
from datasets import load_dataset
from huggingface_hub import login
from PIL import Image
from toolz import partition_all
from tqdm.auto import tqdm
from vllm import LLM, SamplingParams

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

PROMPT_MODES = {
    "layout-all": (
        "Please output the layout information from the PDF image, including each "
        "layout element's bbox, its category, and the corresponding text content "
        "within the bbox.\n\n"
        "1. Bbox format: [x1, y1, x2, y2]\n\n"
        "2. Layout Categories: The possible categories are ['Caption', 'Footnote', "
        "'Formula', 'List-item', 'Page-footer', 'Page-header', 'Picture', "
        "'Section-header', 'Table', 'Text', 'Title'].\n\n"
        "3. Text Extraction & Formatting Rules:\n"
        "    - Picture: For the 'Picture' category, the text field should be omitted.\n"
        "    - Formula: Format its text as LaTeX.\n"
        "    - Table: Format its text as HTML.\n"
        "    - All Others (Text, Title, etc.): Format their text as Markdown.\n\n"
        "4. Constraints:\n"
        "    - The output text must be the original text from the image, with no translation.\n"
        "    - All layout elements must be sorted according to human reading order.\n\n"
        "5. Final Output: The entire output must be a single JSON object."
    ),
    "layout-only": (
        "Please output the layout information from this PDF image, including each "
        "layout's bbox and its category. The bbox should be in the format "
        "[x1, y1, x2, y2]. The layout categories for the PDF document include "
        "['Caption', 'Footnote', 'Formula', 'List-item', 'Page-footer', "
        "'Page-header', 'Picture', 'Section-header', 'Table', 'Text', 'Title']. "
        "Do not output the corresponding text. The layout result should be in "
        "JSON format."
    ),
    "ocr": "Extract the text content from this image.",
    "grounding-ocr": (
        "Extract text from the given bounding box on the image "
        "(format: [x1, y1, x2, y2]).\nBounding Box:\n"
    ),
}


def check_cuda_availability():
    """Check if CUDA is available and exit if not."""
    if not torch.cuda.is_available():
        logger.error("CUDA is not available. This script requires a GPU.")
        logger.error("Please run on a machine with a CUDA-capable GPU.")
        sys.exit(1)
    logger.info(f"CUDA is available. GPU: {torch.cuda.get_device_name(0)}")
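# The chat payload built below embeds the page image as a base64 data URI in an
# "image_url" content part. A minimal standalone sketch of that encoding step,
# assuming raw PNG bytes as input (the `png_bytes` value here is an illustrative
# stand-in, not real image data):

```python
import base64


def image_bytes_to_data_uri(png_bytes: bytes) -> str:
    # Base64-encode raw PNG bytes into a data URI suitable for a chat
    # message's image_url field.
    return f"data:image/png;base64,{base64.b64encode(png_bytes).decode()}"


# Stand-in bytes (the PNG magic number); in this script the bytes come from
# PIL's Image.save(..., format="PNG") into an io.BytesIO buffer.
uri = image_bytes_to_data_uri(b"\x89PNG\r\n\x1a\n")
```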
def make_dots_message(
    image: Union[Image.Image, Dict[str, Any], str],
    mode: str = "layout-all",
    bbox: Optional[List[int]] = None,
) -> List[Dict[str, Any]]:
    """Create chat message for dots.ocr processing."""
    # Accept a PIL image, a datasets-style {"bytes": ...} dict, or a file path.
    if isinstance(image, Image.Image):
        pil_img = image
    elif isinstance(image, dict) and "bytes" in image:
        pil_img = Image.open(io.BytesIO(image["bytes"]))
    elif isinstance(image, str):
        pil_img = Image.open(image)
    else:
        raise ValueError(f"Unsupported image type: {type(image)}")

    buf = io.BytesIO()
    pil_img.save(buf, format="PNG")
    data_uri = f"data:image/png;base64,{base64.b64encode(buf.getvalue()).decode()}"

    prompt = PROMPT_MODES.get(mode, PROMPT_MODES["layout-all"])
    if mode == "grounding-ocr" and bbox:
        prompt = prompt + str(bbox)

    return [
        {
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": data_uri}},
                {"type": "text", "text": prompt},
            ],
        }
    ]


def parse_dots_output(
    output: str,
    output_format: str = "json",
    filter_category: Optional[str] = None,
    mode: str = "layout-all",
):
    """Parse dots.ocr output and convert to requested format."""
    # Plain OCR output is free text, not JSON.
    if mode == "ocr":
        return output.strip()

    try:
        data = json.loads(output.strip())

        # Optionally keep only elements of one layout category.
        if filter_category and "categories" in data:
            indices = [
                i for i, cat in enumerate(data["categories"]) if cat == filter_category
            ]
            filtered_data = {
                "categories": [data["categories"][i] for i in indices],
                "bboxes": [data["bboxes"][i] for i in indices],
            }
            if "texts" in data:
                filtered_data["texts"] = [data["texts"][i] for i in indices]
            if "reading_order" in data:
                # Remap reading-order groups onto the filtered index space.
                filtered_reading_order = []
                for group in data.get("reading_order", []):
                    filtered_group = [idx for idx in group if idx in indices]
                    if filtered_group:
                        remapped_group = [indices.index(idx) for idx in filtered_group]
                        filtered_reading_order.append(remapped_group)
                if filtered_reading_order:
                    filtered_data["reading_order"] = filtered_reading_order
            data = filtered_data

        if output_format == "json":
            return json.dumps(data, ensure_ascii=False)

        if output_format == "structured":
            result = {
                "bboxes": data.get("bboxes", []),
                "categories": data.get("categories", []),
            }
            if mode != "layout-only":
                result["texts"] = data.get("texts", [])
            else:
                result["texts"] = []
            return result

        if output_format == "markdown":
            if mode != "layout-all" or "texts" not in data:
                logger.warning("Markdown format works best with layout-all mode")
                return json.dumps(data, ensure_ascii=False)

            md_lines = []
            texts = data.get("texts", [])
            categories = data.get("categories", [])
            reading_order = data.get("reading_order", [])
            if reading_order:
                for group in reading_order:
                    for idx in group:
                        if idx < len(texts) and idx < len(categories):
                            md_lines.append(
                                format_markdown_text(categories[idx], texts[idx])
                            )
            else:
                for category, text in zip(categories, texts):
                    md_lines.append(format_markdown_text(category, text))
            return "\n\n".join(md_lines)

    except json.JSONDecodeError as e:
        logger.warning(f"Failed to parse JSON output: {e}")
        return output.strip()
    except Exception as e:
        logger.error(f"Error parsing output: {e}")
        return output.strip()


def format_markdown_text(category: str, text: str) -> str:
    """Format a single layout element as Markdown based on its category."""
    if category == "Title":
        return f"# {text}"
    if category == "Section-header":
        return f"## {text}"
    if category == "Formula":
        return f"$${text}$$"
    return text


def main(
    input_dataset: str,
    output_dataset: str,
    image_column: str = "image",
    mode: str = "layout-all",
    output_format: str = "json",
    filter_category: Optional[str] = None,
    batch_size: int = 32,
    model: str = "rednote-hilab/dots.ocr",
    max_model_len: int = 24000,
    max_tokens: int = 16384,
    gpu_memory_utilization: float = 0.8,
    hf_token: Optional[str] = None,
    split: str = "train",
    max_samples: Optional[int] = None,
    private: bool = False,
    output_column: str = "dots_ocr_output",
    bbox_column: str = "layout_bboxes",
    category_column: str = "layout_categories",
    text_column: str = "layout_texts",
    markdown_column: str = "markdown",
):
    """Process a dataset of document images with dots.ocr and push the results."""
    check_cuda_availability()

    HF_TOKEN = hf_token or os.environ.get("HF_TOKEN")
    if HF_TOKEN:
        login(token=HF_TOKEN)

    logger.info(f"Loading dataset: {input_dataset}")
    dataset = load_dataset(input_dataset, split=split)

    if image_column not in dataset.column_names:
        raise ValueError(f"Column '{image_column}' not found in dataset")

    if max_samples:
        dataset = dataset.select(range(min(max_samples, len(dataset))))

    logger.info(f"Loading model: {model}")
    llm = LLM(
        model=model,
        max_model_len=max_model_len,
        gpu_memory_utilization=gpu_memory_utilization,
    )
    sampling_params = SamplingParams(temperature=0.0, max_tokens=max_tokens)

    all_outputs = []
    for batch_indices in tqdm(
        list(partition_all(batch_size, range(len(dataset)))),
        desc="Processing batches",
    ):
        batch_indices = list(batch_indices)
        batch_images = [dataset[i][image_column] for i in batch_indices]
        batch_messages = [make_dots_message(img, mode=mode) for img in batch_images]

        outputs = llm.chat(batch_messages, sampling_params)
        for output in outputs:
            raw_text = output.outputs[0].text.strip()
            parsed = parse_dots_output(raw_text, output_format, filter_category, mode)
            all_outputs.append(parsed)

    if output_format == "structured":
        bboxes = [r["bboxes"] if isinstance(r, dict) else [] for r in all_outputs]
        categories = [r["categories"] if isinstance(r, dict) else [] for r in all_outputs]
        texts = [r["texts"] if isinstance(r, dict) else [] for r in all_outputs]
        dataset = dataset.add_column(bbox_column, bboxes)
        dataset = dataset.add_column(category_column, categories)
        dataset = dataset.add_column(text_column, texts)
    elif output_format == "markdown":
        dataset = dataset.add_column(markdown_column, all_outputs)
    else:
        dataset = dataset.add_column(output_column, all_outputs)

    logger.info(f"Pushing to hub: {output_dataset}")
    dataset.push_to_hub(output_dataset, private=private, token=HF_TOKEN)
    logger.info("Done!")


if __name__ == "__main__":
    if len(sys.argv) == 1:
        print("=" * 80)
        print("dots.ocr Document Layout Analysis and OCR")
        print("=" * 80)
        print("\nThis script processes document images using the dots.ocr model to")
        print("extract layout information, text content, or both.")
        print("\nFeatures:")
        print("- Layout detection with bounding boxes and categories")
        print("- Text extraction with reading order preservation")
        print("- Multiple output formats (JSON, structured, markdown)")
        print("- Multilingual document support")
        print("\nExample usage:")
        print("\n1. Full layout analysis + OCR (default):")
        print("   uv run dots-ocr.py document-images analyzed-docs")
        print("\n2. Layout detection only:")
        print("   uv run dots-ocr.py scanned-pdfs layout-analysis --mode layout-only")
        print("\n3. Simple OCR (text only):")
        print("   uv run dots-ocr.py documents extracted-text --mode ocr")
        print("\n4. Convert to markdown:")
        print("   uv run dots-ocr.py papers papers-markdown --output-format markdown")
        print("\n5. Extract only tables:")
        print("   uv run dots-ocr.py reports table-data --filter-category Table")
        print("\n6. Structured output with custom columns:")
        print("   uv run dots-ocr.py docs analyzed \\")
        print("       --output-format structured \\")
        print("       --bbox-column boxes \\")
        print("       --category-column types \\")
        print("       --text-column content")
        print("\n7. Process a subset for testing:")
        print("   uv run dots-ocr.py large-dataset test-output --max-samples 10")
        print("\n8. Running on HF Jobs:")
        print("   hf jobs run --gpu l4x1 \\")
        print('       -e HF_TOKEN=$(python3 -c "from huggingface_hub import get_token; print(get_token())") \\')
        print("       uv run https://huggingface.co/datasets/uv-scripts/ocr/raw/main/dots-ocr.py \\")
        print("           your-document-dataset \\")
        print("           your-analyzed-output")
        print("\n" + "=" * 80)
        print("\nFor full help, run: uv run dots-ocr.py --help")
        sys.exit(0)

    parser = argparse.ArgumentParser(
        description="Document layout analysis and OCR using dots.ocr",
        formatter_class=argparse.RawDescriptionHelpFormatter,
        epilog="""
Modes:
  layout-all    - Extract layout + text content (default)
  layout-only   - Extract only layout information (bbox + category)
  ocr           - Extract only text content
  grounding-ocr - Extract text from specific bbox (requires --bbox)

Output Formats:
  json       - Raw JSON output from model (default)
  structured - Separate columns for bboxes, categories, texts
  markdown   - Convert to markdown format

Examples:
  # Basic layout + OCR
  uv run dots-ocr.py my-docs analyzed-docs

  # Layout detection only
  uv run dots-ocr.py papers layouts --mode layout-only

  # Convert to markdown
  uv run dots-ocr.py scans readable --output-format markdown

  # Extract only formulas
  uv run dots-ocr.py math-docs formulas --filter-category Formula
        """,
    )

    parser.add_argument("input_dataset", help="Input dataset ID from Hugging Face Hub")
    parser.add_argument("output_dataset", help="Output dataset ID for Hugging Face Hub")
    parser.add_argument(
        "--image-column",
        default="image",
        help="Column containing images (default: image)",
    )
    parser.add_argument(
        "--mode",
        choices=list(PROMPT_MODES.keys()),
        default="layout-all",
        help="Processing mode (default: layout-all)",
    )
    parser.add_argument(
        "--output-format",
        choices=["json", "structured", "markdown"],
        default="json",
        help="Output format (default: json)",
    )
    parser.add_argument(
        "--filter-category",
        choices=[
            "Caption",
            "Footnote",
            "Formula",
            "List-item",
            "Page-footer",
            "Page-header",
            "Picture",
            "Section-header",
            "Table",
            "Text",
            "Title",
        ],
        help="Filter results by layout category",
    )
    parser.add_argument(
        "--batch-size",
        type=int,
        default=32,
        help="Batch size for processing (default: 32)",
    )
    parser.add_argument(
        "--model",
        default="rednote-hilab/dots.ocr",
        help="Model to use (default: rednote-hilab/dots.ocr)",
    )
    parser.add_argument(
        "--max-model-len",
        type=int,
        default=24000,
        help="Maximum model context length (default: 24000)",
    )
    parser.add_argument(
        "--max-tokens",
        type=int,
        default=16384,
        help="Maximum tokens to generate (default: 16384)",
    )
    parser.add_argument(
        "--gpu-memory-utilization",
        type=float,
        default=0.8,
        help="GPU memory utilization (default: 0.8)",
    )
    parser.add_argument("--hf-token", help="Hugging Face API token")
    parser.add_argument(
        "--split", default="train", help="Dataset split to use (default: train)"
    )
    parser.add_argument(
        "--max-samples",
        type=int,
        help="Maximum number of samples to process (for testing)",
    )
    parser.add_argument(
        "--private", action="store_true", help="Make output dataset private"
    )
    parser.add_argument(
        "--output-column",
        default="dots_ocr_output",
        help="Column name for JSON output (default: dots_ocr_output)",
    )
    parser.add_argument(
        "--bbox-column",
        default="layout_bboxes",
        help="Column name for bboxes in structured mode (default: layout_bboxes)",
    )
    parser.add_argument(
        "--category-column",
        default="layout_categories",
        help="Column name for categories in structured mode (default: layout_categories)",
    )
    parser.add_argument(
        "--text-column",
        default="layout_texts",
        help="Column name for texts in structured mode (default: layout_texts)",
    )
    parser.add_argument(
        "--markdown-column",
        default="markdown",
        help="Column name for markdown output (default: markdown)",
    )

    args = parser.parse_args()

    main(
        input_dataset=args.input_dataset,
        output_dataset=args.output_dataset,
        image_column=args.image_column,
        mode=args.mode,
        output_format=args.output_format,
        filter_category=args.filter_category,
        batch_size=args.batch_size,
        model=args.model,
        max_model_len=args.max_model_len,
        max_tokens=args.max_tokens,
        gpu_memory_utilization=args.gpu_memory_utilization,
        hf_token=args.hf_token,
        split=args.split,
        max_samples=args.max_samples,
        private=args.private,
        output_column=args.output_column,
        bbox_column=args.bbox_column,
        category_column=args.category_column,
        text_column=args.text_column,
        markdown_column=args.markdown_column,
    )
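# A standalone sketch of the category filtering that parse_dots_output applies
# to a layout-all response before formatting. The sample payload below is
# illustrative data in the documented response shape, not real model output:

```python
import json

# A layout-all style response: parallel lists of bboxes, categories, and texts.
sample = json.loads(
    '{"bboxes": [[10, 10, 200, 40], [10, 50, 200, 300], [10, 310, 200, 340]],'
    ' "categories": ["Title", "Table", "Text"],'
    ' "texts": ["# Report", "<table></table>", "Body text"]}'
)

# Keep only the elements whose category matches the filter, as
# --filter-category Table would.
keep = [i for i, cat in enumerate(sample["categories"]) if cat == "Table"]
filtered = {
    "bboxes": [sample["bboxes"][i] for i in keep],
    "categories": [sample["categories"][i] for i in keep],
    "texts": [sample["texts"][i] for i in keep],
}
```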