# tea-gpt2-lora-sft

## Model Description
This model is a LoRA fine-tune of GPT-2, built with the PEFT library. It was trained on a domain-specific dataset related to [insert topic here, e.g., "tea conversations", "Sri Lankan literature", etc.].
- Base model: gpt2
- Fine-tuning method: Low-Rank Adaptation (LoRA)
- Framework: Hugging Face Transformers + PEFT
- Quantization: [if applicable, e.g., int8, bf16]
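LoRA keeps the pretrained weight matrix frozen and learns only a low-rank correction, W' = W + (alpha / r) · B·A, which is why the adapter is tiny compared to the base model. A minimal NumPy sketch of that update (shapes, names, and values here are illustrative, not taken from PEFT internals):

```python
import numpy as np

rng = np.random.default_rng(0)

d, r, alpha = 8, 2, 16       # hidden size, LoRA rank, scaling factor (illustrative)
W = rng.normal(size=(d, d))  # frozen pretrained weight, never updated

# LoRA trains two small matrices; B starts at zero, so the adapted
# layer initially behaves exactly like the frozen base layer.
A = rng.normal(size=(r, d))
B = np.zeros((d, r))

def adapted_forward(x):
    # Base output plus the scaled low-rank correction (alpha / r) * B @ A @ x.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=d)
# With B == 0 the adapter is a no-op: output matches the base layer.
assert np.allclose(adapted_forward(x), W @ x)

# Only A and B are trained: 2*d*r parameters instead of d*d.
print(f"trainable params: {A.size + B.size} vs full: {W.size}")
```

During fine-tuning only `A` and `B` receive gradients, which is what lets a GPT-2-sized model be adapted with a small fraction of its full parameter count.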
## Intended Use
You can use this model for:
- Text generation for the tea cultivation industry
- Creative writing or domain-specific chatbots
## How to Use

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

# Load the frozen GPT-2 base model, then attach the LoRA adapter weights.
base_model = AutoModelForCausalLM.from_pretrained("gpt2")
lora_model = PeftModel.from_pretrained(base_model, "nimeth02/tea-gpt2-lora-sft")
tokenizer = AutoTokenizer.from_pretrained("gpt2")

# Generate a continuation for a tea-themed prompt.
input_text = "Once upon a time in a tea garden"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids
output = lora_model.generate(input_ids, max_new_tokens=50)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```