dgtalbug committed · Commit 214a316 · verified · 1 Parent(s): 37de25e

Update README.md

Files changed (1): README.md (+78 −1)
README.md CHANGED

base_model:
  - stabilityai/stablecode-completion-alpha-3b-4k
tags:
  - code
---
# Model Card for Lara — Hybrid Code Model (DeepSeek + StableCode)

Lara is a hybrid fine‑tuned **code generation & completion model** built from
**DeepSeek‑Coder 6.7B** and **StableCode Alpha 3B‑4K**.
It is designed for **general‑purpose programming**, from quick completions to multi‑file scaffolding,
and can optionally add **Chandler Bing‑style sarcastic commentary** for developer amusement.

MIT licensed — free to use, modify, and redistribute.

---

## Model Details

- **Developed by:** [@dgtalbug](https://huggingface.co/dgtalbug)
- **Funded by:** Self‑funded
- **Shared by:** [@dgtalbug](https://huggingface.co/dgtalbug)
- **Model type:** Causal language model for code generation & completion
- **Language(s):** English (primary); multilingual code comments possible
- **License:** MIT
- **Fine‑tuned from:**
  - [`deepseek-ai/deepseek-coder-6.7b-instruct`](https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-instruct)
  - [`stabilityai/stablecode-completion-alpha-3b-4k`](https://huggingface.co/stabilityai/stablecode-completion-alpha-3b-4k)

---

## Model Sources

- **Repository:** [https://huggingface.co/dgtalbug/lara](https://huggingface.co/dgtalbug/lara)
- **Paper:** N/A (based on open‑source models)
- **Demo:** Coming soon

---

## Uses

### Direct Use
- Code completion in IDEs
- Script & function generation
- Annotated code examples for learning
- Humorous coding commentary (optional, via prompt)

### Downstream Use
- Fine‑tune for a single language (e.g., a Java‑only bot)
- Integrate into AI coding assistants
- Educational & training platforms

### Out‑of‑Scope Use
- Malicious code generation
- Non‑code general chat
- Security‑critical code without human review

---
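
The optional humor mode listed under Direct Use is controlled entirely through prompting. As a minimal sketch, a small helper can toggle the sarcastic commentary; the instruction wording here is an illustration of prompt-level control, not a fixed template shipped with the model:

```python
def build_prompt(task: str, sarcastic: bool = False) -> str:
    """Assemble an instruction prompt, optionally requesting sarcastic comments.

    The instruction wording below is a hypothetical example of prompt-level
    control, not an official template for this model.
    """
    lines = ["You are a helpful coding assistant."]
    if sarcastic:
        # Humor mode: ask for brief, sarcastic inline comments
        lines.append("Add brief, sarcastic Chandler Bing-style comments to the code.")
    lines.append(f"Task: {task}")
    return "\n".join(lines)
```

The resulting string can be passed as the `prompt` in the quick-start snippet further down.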

## Bias, Risks, and Limitations

- May hallucinate APIs or syntax
- Humor mode may inject irrelevant lines
- Biases from public code sources may appear in output

### Recommendations

- Always review generated code before deployment
- Use sarcasm mode in casual or learning contexts, not production
- Test code in sandbox environments

---
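
The sandbox recommendation above can start as simply as running generated code in a separate interpreter process with a time limit. A minimal sketch follows; note this gives process isolation plus a timeout only, and is not a real security boundary:

```python
import os
import subprocess
import sys
import tempfile

def run_in_sandbox(code: str, timeout: float = 5.0) -> subprocess.CompletedProcess:
    """Run generated code in a separate Python process with a time limit.

    This is NOT a security boundary; use containers or VMs for genuinely
    untrusted code.
    """
    # Write the generated code to a temporary script file
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        # Run it in a fresh interpreter, capturing stdout/stderr
        return subprocess.run(
            [sys.executable, path],
            capture_output=True, text=True, timeout=timeout,
        )
    finally:
        os.unlink(path)
```

If the script exceeds the time limit, `subprocess.run` raises `subprocess.TimeoutExpired`, which the caller should handle.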

## How to Get Started with the Model

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "dgtalbug/lara"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

prompt = "Write a Python function to reverse a string"
# Move inputs to the model's device before generating
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=150)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
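
Instruct-style models often wrap their answer in a Markdown code fence. A small post-processing helper, shown here as an illustrative sketch rather than part of the model itself, can pull out the first fenced block and fall back to the raw text:

```python
import re

def extract_code(text: str) -> str:
    """Return the first Markdown-fenced code block in text, or the text itself."""
    # Build the triple-backtick fence programmatically so this snippet can
    # itself live inside a fenced Markdown block.
    fence = "`" * 3
    match = re.search(fence + r"[a-zA-Z]*\n(.*?)" + fence, text, re.DOTALL)
    return match.group(1).strip() if match else text.strip()
```

Apply it to the decoded generation before saving or running the result.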