arxiv:2509.05208

Symbolic Graphics Programming with Large Language Models

Published on Sep 5
Submitted by YangyiH on Sep 8
#3 Paper of the day
Abstract

LLMs generate SVGs from natural-language descriptions using a reinforcement learning approach with verifiable rewards, improving performance and scene coherence.

AI-generated summary

Large language models (LLMs) excel at program synthesis, yet their ability to produce symbolic graphics programs (SGPs) that render into precise visual content remains underexplored. We study symbolic graphics programming, where the goal is to generate an SGP from a natural-language description. The task also serves as a lens into how LLMs understand the visual world, since the programs they generate can be rendered into images. Among the various kinds of SGPs, we focus on scalable vector graphics (SVG). We begin by examining how well LLMs can generate SGPs. To this end, we introduce SGP-GenBench, a comprehensive benchmark covering object fidelity, scene fidelity, and compositionality (attribute binding, spatial relations, numeracy). On SGP-GenBench, we find that frontier proprietary models substantially outperform open-source models, and that performance correlates well with general coding capability. Motivated by this gap, we aim to improve LLMs' ability to generate SGPs. We propose a reinforcement-learning (RL) approach with verifiable rewards, in which a format-validity gate ensures the output is renderable SVG and a cross-modal reward aligns the text with the rendered image via strong vision encoders (e.g., SigLIP for text-image similarity and DINO for image-image similarity). Applied to Qwen-2.5-7B, our method substantially improves SVG generation quality and semantics, achieving performance on par with frontier systems. We further analyze the training dynamics and show that RL induces (i) finer decomposition of objects into controllable primitives and (ii) contextual details that improve scene coherence. Our results demonstrate that symbolic graphics programming offers a precise and interpretable lens on cross-modal grounding.
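
To make the reward design above concrete, here is a minimal sketch of the format-validity gate plus a SigLIP text-image reward. This is not the authors' code: the rendering library (cairosvg), the checkpoint (google/siglip-base-patch16-224), and the zero-reward-on-render-failure policy are our assumptions.

```python
# Hedged sketch of a verifiable reward for text-to-SVG RL:
# a format-validity gate (can the SVG render?) plus a SigLIP
# text-image similarity score. Checkpoints and weighting are assumptions.
import io

import cairosvg
import torch
from PIL import Image
from transformers import AutoProcessor, SiglipModel

processor = AutoProcessor.from_pretrained("google/siglip-base-patch16-224")
siglip = SiglipModel.from_pretrained("google/siglip-base-patch16-224").eval()

def render_svg(svg_code: str):
    """Format-validity gate: return the rendered image, or None if the
    SVG fails to parse or render."""
    try:
        png_bytes = cairosvg.svg2png(bytestring=svg_code.encode("utf-8"))
        return Image.open(io.BytesIO(png_bytes)).convert("RGB")
    except Exception:
        return None

@torch.no_grad()
def reward(svg_code: str, description: str) -> float:
    image = render_svg(svg_code)
    if image is None:  # gate failed: unrenderable SVG gets zero reward
        return 0.0
    inputs = processor(text=[description], images=image,
                       padding="max_length", return_tensors="pt")
    img_emb = siglip.get_image_features(pixel_values=inputs["pixel_values"])
    txt_emb = siglip.get_text_features(input_ids=inputs["input_ids"])
    img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)
    txt_emb = txt_emb / txt_emb.norm(dim=-1, keepdim=True)
    return (img_emb @ txt_emb.T).item()  # cosine similarity as the reward
```

Because the gate is a hard zero, the policy is pushed toward always emitting parseable SVG before the similarity term can shape semantics.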

Community

Paper author · Paper submitter

LLMs are strong coders, yet their ability to write symbolic graphics programs (SGPs) that render into images, SVG in particular, is underexplored. This work studies text-to-SGP generation as a probe of visual generation, introducing SGP-GenBench to evaluate object-, scene-, and composition-level performance across open and proprietary models, and revealing notable shortcomings. To close the gap, we use reinforcement learning with rewards derived from visual-text similarity scores (a rough sketch of the image-image term follows below), which steadily improves SVG quality and semantic alignment. Experiments show substantial gains, bringing performance close to that of state-of-the-art closed-source models.
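
For the image-image term mentioned in the abstract, a similarly hedged sketch using DINO features is below. The DINOv2 checkpoint and CLS-token pooling are our choices for illustration and may differ from the paper's exact setup.

```python
# Hedged sketch of the abstract's image-image reward term, scored with
# DINO features. Checkpoint (facebook/dinov2-base) and CLS pooling are
# assumptions, not the authors' exact configuration.
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModel

dino_processor = AutoImageProcessor.from_pretrained("facebook/dinov2-base")
dino = AutoModel.from_pretrained("facebook/dinov2-base").eval()

@torch.no_grad()
def dino_similarity(rendered: Image.Image, reference: Image.Image) -> float:
    """Cosine similarity between DINO CLS embeddings of the rendered SVG
    and a reference image."""
    inputs = dino_processor(images=[rendered, reference], return_tensors="pt")
    cls = dino(**inputs).last_hidden_state[:, 0]  # (2, hidden_dim) CLS tokens
    cls = cls / cls.norm(dim=-1, keepdim=True)
    return (cls[0] @ cls[1]).item()
```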


Hi, very cool paper!

Would be cool to create a demo (as a Hugging Face Space) so that people can try out SVG generation using your fine-tuned model.

For example https://github.com/simonw/pelican-bicycle has become pretty popular. Simon tracks the progress of LLMs generating "a pelican riding a bicycle" over the years, there's a very cool video on it here: https://youtu.be/YpY83-kA7Bo?si=1IXF6DezDcTp3zfz

We explored RL for SVG Generation on our recent paper: https://arxiv.org/pdf/2505.20793. You should check it out!

Paper author

cool work!
