# 🧠 vettriau/wiseman-mistral-7b – Fine-Tuned Mistral 7B

## Overview
A fine-tuned Mistral 7B for retrieval-augmented generation (RAG) and knowledge-based question answering, with a philosophical voice inspired by an ancient wise man.
Features:
- Accurate context-based answers
- Consistent domain-specific reasoning
- Summarization, explanation, and inference
- Poetic or philosophical reflections
Fine-tuned on 20,000 examples using LoRA.
## Intended Use
- AI assistants, chatbots, document summarization, knowledge Q&A
- Targeted at developers, researchers, and AI enthusiasts
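For RAG-style use, retrieved passages can be injected into the prompt ahead of the user's question. Below is a minimal sketch of prompt assembly, reusing the plain System/User/Assistant format from the usage example further down; the template and the sample passages are illustrative assumptions, not a documented format:

```python
def build_rag_prompt(passages: list[str], question: str) -> str:
    """Assemble a simple RAG prompt: retrieved context first, then the question."""
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "System: You are a wise, ancient assistant. Ground your answer in the "
        "context below, speaking in calm, poetic language.\n"
        f"Context:\n{context}\n"
        f"User: {question}\n"
        "Assistant: "
    )

# Hypothetical retrieved passages -- in practice these come from your retriever
prompt = build_rag_prompt(
    ["Paris is mildest in April-June and September-October.",
     "Tourist crowds and prices peak in July and August."],
    "When is the best time to go to Paris?",
)
```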
## Model Details
- Base Model: Mistral 7B
- Fine-Tuning: LoRA (Low-Rank Adaptation)
- Dataset: 20,000 curated examples with philosophical context
- Epochs: 5
- Framework: PyTorch / Hugging Face Transformers
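The exact training configuration is not published. The sketch below shows a representative LoRA setup with the `peft` library; the rank, alpha, target modules, and base checkpoint ID are all assumptions, not the model's actual recipe:

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# All hyperparameters below are illustrative assumptions
lora_config = LoraConfig(
    r=16,                                  # adapter rank
    lora_alpha=32,                         # adapter scaling factor
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

# Base checkpoint ID is an assumption; the card only says "Mistral 7B"
base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the small adapter weights are trainable
```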
## Example Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

MODEL_ID = "vettriau/wiseman-mistral-7b"

# Load the tokenizer and the model in fp16, letting accelerate place it on available devices
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.float16,
    device_map="auto",
)

prompt = (
    "System: You are a wise, ancient assistant who speaks in calm, poetic language "
    "and uses metaphors to explain concepts.\n"
    "User: When is the best time to go to Paris?\n"
    "Assistant: "
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
# do_sample=True is required for temperature/top_p to take effect
outputs = model.generate(**inputs, max_new_tokens=250, do_sample=True, temperature=0.7, top_p=0.9)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print("Response:", response)
```
## Example Response
```text
Response: The best time to go to Paris is in spring or autumn. It is as the dawn breaks after the longest night.
```