# Model Card for Lara — Hybrid Code Model (DeepSeek + StableCode)
Lara is a hybrid fine‑tuned code generation and completion model built from
DeepSeek‑Coder 6.7B and StableCode Alpha 3B‑4K.
It is designed for general‑purpose programming, from quick completions to multi‑file scaffolding,
and can optionally produce Chandler Bing‑style sarcastic commentary for developer amusement.
Lara is MIT‑licensed: free to use, modify, and redistribute.
## Model Details
- Developed by: @dgtalbug
- Funded by: Self‑funded
- Shared by: @dgtalbug
- Model type: Causal Language Model for code generation & completion
- Language(s): English (primary), multilingual code comments possible
- License: MIT
- Finetuned from: deepseek-ai/deepseek-coder-6.7b-instruct and StableCode Alpha 3B‑4K
## Model Sources
- Repository: https://huggingface.co/dgtalbug/lara
- Paper: N/A (based on open‑source models)
- Demo: Coming soon
## Uses

### Direct Use
- Code completion in IDEs
- Script & function generation
- Annotated code examples for learning
- Humorous coding commentary (optional, via prompt)
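Since the humor mode is toggled via the prompt rather than a dedicated control token, a thin prompt-builder helper is one way to wire it into an application. This is a minimal sketch; the exact instruction wording below is an assumption, not a documented convention of the model:

```python
def build_prompt(task: str, humor: bool = False) -> str:
    """Compose a plain-text prompt for Lara, optionally enabling sarcasm mode.

    The humor prefix is a hypothetical phrasing -- adjust it to whatever
    instruction works best with your prompts.
    """
    if humor:
        prefix = ("Respond with working code plus brief, "
                  "Chandler Bing-style sarcastic comments.\n")
    else:
        prefix = ""
    return f"{prefix}{task}"

# Plain request vs. humor-mode request
print(build_prompt("Write a Python function to reverse a string"))
print(build_prompt("Write a Python function to reverse a string", humor=True))
```

Keeping the toggle in one helper makes it easy to disable sarcasm globally, per the recommendation below to avoid it in production.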
### Downstream Use
- Fine‑tune for a single language (e.g., Java‑only bot)
- Integrate into AI coding assistants
- Educational & training platforms
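For the single-language fine-tuning use case, the first step is usually filtering a mixed-language corpus down to the target language. A minimal sketch, assuming your dataset records carry `language` and `content` fields (the schema is an assumption, not part of this card):

```python
# Hypothetical mixed-language corpus; field names are an assumed schema.
samples = [
    {"language": "python", "content": "print('hi')"},
    {"language": "java",   "content": "class Greeter {}"},
    {"language": "java",   "content": "interface Repo {}"},
]

# Keep only Java sources for a Java-only fine-tuning run
java_only = [s["content"] for s in samples if s["language"] == "java"]
print(len(java_only))  # 2
```

The same pattern applies when streaming larger datasets: filter before tokenization so the fine-tuning job only ever sees the target language.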
### Out‑of‑Scope Use
- Malicious code generation
- Non‑code general chat
- Security‑critical code without review
## Bias, Risks, and Limitations
- May hallucinate APIs or syntax
- Humor mode may inject irrelevant lines
- Biases from public code sources may appear in output
## Recommendations
- Always review generated code before deployment
- Use sarcasm mode in casual or learning contexts, not production
- Test code in sandbox environments
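One lightweight way to follow the sandbox recommendation is to run generated code in a throwaway subprocess with a timeout. This is a sketch, not a security boundary: it limits runtime but not filesystem or network access, so use containers or OS-level isolation for anything untrusted:

```python
import os
import subprocess
import sys
import tempfile

def run_sandboxed(code: str, timeout: float = 5.0) -> str:
    """Execute Python code in a separate interpreter process and return stdout."""
    # Write the generated code to a temporary file
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        # -I runs Python in isolated mode (ignores env vars and user site dirs)
        result = subprocess.run(
            [sys.executable, "-I", path],
            capture_output=True, text=True, timeout=timeout,
        )
        return result.stdout
    finally:
        os.remove(path)

print(run_sandboxed("print('hello from sandbox')"))
```

The timeout guards against generated code that loops forever, a failure mode worth planning for with any code model.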
## How to Get Started with the Model
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model and tokenizer from the Hugging Face Hub
model_id = "dgtalbug/lara"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

# Generate a completion for a simple prompt
prompt = "Write a Python function to reverse a string"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=150)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Model Tree for dgtalbug/lara
- Base model: deepseek-ai/deepseek-coder-6.7b-instruct