---
license: apache-2.0
base_model: qwen3-0.6B
tags:
  - code-generation
  - svg
  - fine-tuned
  - fp16
  - vllm
  - merged
language:
  - en
pipeline_tag: text-generation
library_name: transformers
model_type: qwen
inference: true
torch_dtype: float16
widget:
  - example_title: Simple Circle
    text: Create a red circle
  - example_title: Rectangle with Border
    text: Draw a blue rectangle with black border
  - example_title: Complex Shape
    text: Generate a star with 5 points in yellow
---

# SVG Code Generator

This is a fine-tuned model for generating SVG code from natural-language descriptions. The fine-tuned weights have been merged into the base model, and the merged checkpoint is stored in fp16.

## Model Details

- Model Name: model_v15
- Base Model: qwen3-0.6B
- Training Method: Fine-tuning with merged weights
- Task: Text-to-SVG code generation
- Model Type: Merged Qwen model
- Precision: fp16
- Library: Transformers, vLLM compatible
- Format: Merged model (not adapter-based)

## Usage

### With Transformers

Load the model directly using the transformers library:

```python
# Load the merged model and tokenizer
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("vinoku89/svg-code-generator")
model = AutoModelForCausalLM.from_pretrained("vinoku89/svg-code-generator")

# Prepare a natural-language prompt
prompt = "Create a blue circle with radius 50"
inputs = tokenizer(prompt, return_tensors="pt")

# Generate with sampling parameters
outputs = model.generate(
    **inputs,
    max_length=200,
    temperature=0.7,
    do_sample=True,
    pad_token_id=tokenizer.eos_token_id
)

# Decode and strip the prompt to keep only the generated SVG code
generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
svg_code = generated_text[len(prompt):].strip()

print("Generated SVG:")
print(svg_code)
```
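The decoded string is plain SVG markup, so it can be written straight to a file and opened in a browser or vector editor. This continues from the snippet above; the filename is just an example.

```python
# Write the generated markup to disk (example filename)
with open("generated.svg", "w", encoding="utf-8") as f:
    f.write(svg_code)
```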

### With vLLM

This model supports vLLM for high-performance inference in fp16 format.
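Below is a minimal offline-inference sketch with vLLM; the sampling values are illustrative rather than recommended settings for this model.

```python
from vllm import LLM, SamplingParams

# Load the merged fp16 checkpoint (dtype matches the published weights)
llm = LLM(model="vinoku89/svg-code-generator", dtype="float16")

# Illustrative sampling settings, not tuned values
sampling_params = SamplingParams(temperature=0.7, max_tokens=200)

prompts = ["Create a red circle", "Draw a blue rectangle with black border"]
outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    print(output.prompt)
    print(output.outputs[0].text.strip())
```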

## Training Data

The model was fine-tuned on pairs of natural-language descriptions and their corresponding SVG code.

## Intended Use

This model is designed to generate SVG code from text descriptions for educational and creative purposes.

## Limitations

- Generated SVG may be invalid and should be validated before use (see the sketch after this list)
- Performance depends on prompt clarity
- Limited to SVG syntax and features seen during training
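As a first sanity check, the output can be parsed with the standard library's XML parser. This only verifies that the markup is well formed, not that it renders as intended.

```python
import xml.etree.ElementTree as ET

def is_well_formed_svg(svg_code: str) -> bool:
    """Return True if the markup parses as XML; rendering is not checked."""
    try:
        ET.fromstring(svg_code)
        return True
    except ET.ParseError:
        return False
```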

## Model Performance

The model has been fine-tuned specifically for SVG generation tasks; because the fine-tuned weights are merged into the base model, no separate adapter has to be loaded at inference time.

## Technical Details

- Precision: fp16 for memory efficiency
- Compatibility: vLLM supported for high-throughput inference
- Architecture: Merged fine-tuned weights (no adapters required)
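Since the checkpoint is stored in fp16, passing an explicit torch_dtype keeps memory usage low when loading with Transformers. This is a sketch; device_map="auto" assumes the accelerate package and a suitable GPU are available.

```python
import torch
from transformers import AutoModelForCausalLM

# Load the merged checkpoint in half precision
# (device_map="auto" requires the accelerate package)
model = AutoModelForCausalLM.from_pretrained(
    "vinoku89/svg-code-generator",
    torch_dtype=torch.float16,
    device_map="auto",
)
```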