Model Card for Lara — Hybrid Code Model (DeepSeek + StableCode)

Lara is a hybrid fine‑tuned code generation & completion model built from
DeepSeek‑Coder 6.7B and StableCode Alpha 3B‑4K.
It is designed for general‑purpose programming, from quick completions to multi‑file scaffolding, and can optionally produce Chandler Bing‑style sarcastic commentary for developer amusement.

MIT licensed — free to use, modify, and redistribute.



Uses

Direct Use

  • Code completion in IDEs
  • Script & function generation
  • Annotated code examples for learning
  • Humorous coding commentary (optional, via prompt)

Downstream Use

  • Fine‑tune for a single language (e.g., a Java‑only bot; see the sketch after this list)
  • Integrate into AI coding assistants
  • Educational & training platforms
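
One lightweight way to specialize Lara for a single language is parameter‑efficient fine‑tuning with LoRA. The sketch below uses the Hugging Face peft library and assumes the merged model exposes Llama‑style q_proj/v_proj attention projections (typical of DeepSeek‑Coder derivatives); the training corpus and adapter name are placeholders you would supply.

from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "dgtalbug/lara"
tokenizer = AutoTokenizer.from_pretrained(model_id)  # needed later to tokenize the corpus
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

# LoRA freezes the base weights and trains small adapter matrices,
# so specializing for one language stays cheap.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # assumes Llama-style attention layers
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

# Train with your usual transformers Trainer or a custom loop on a
# single-language corpus, then save just the adapter:
# model.save_pretrained("lara-java-adapter")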

Out‑of‑Scope Use

  • Malicious code generation
  • Non‑code general chat
  • Security‑critical code without review

Bias, Risks, and Limitations

  • May hallucinate APIs or syntax
  • Humor mode may inject irrelevant lines
  • Biases from public code sources may appear in output

Recommendations

  • Always review generated code before deployment
  • Use sarcasm mode in casual or learning contexts, not production
  • Test code in a sandboxed environment (see the sketch below)
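
A minimal way to approximate that in Python is to run generated code in a separate interpreter process with a timeout. This isolates crashes and hangs from your session but is not a security boundary; use a container or a real sandbox for untrusted code.

import os
import subprocess
import sys
import tempfile

def run_generated_code(code: str, timeout: int = 5):
    """Run generated code in a separate interpreter with a timeout.

    Raises subprocess.TimeoutExpired if the code hangs past the timeout.
    """
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        return subprocess.run(
            [sys.executable, path],
            capture_output=True,
            text=True,
            timeout=timeout,
        )
    finally:
        os.unlink(path)

result = run_generated_code("print('hello from isolated code')")
print(result.stdout)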

How to Get Started with the Model

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "dgtalbug/lara"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# torch_dtype="auto" loads the checkpoint in its native precision.
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

# Plain-text prompt; the model returns the completion as code.
prompt = "Write a Python function to reverse a string"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=150)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
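
The sarcastic commentary mode has no dedicated flag on this card; the assumption here is that it is requested in the prompt itself. Reusing the tokenizer and model from above (the prompt wording is illustrative, not a documented trigger):

# Humor mode is prompt-driven in this sketch, not a documented API switch.
prompt = (
    "Write a Python function to reverse a string, "
    "with Chandler Bing-style sarcastic comments."
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))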