---
license: apache-2.0
base_model: teknium/OpenHermes-2.5-Mistral-7B
tags:
- fine-tuning
- dpo
- arena-dataset
- peft
- lora
- rlhf
datasets:
- lmarena-ai/arena-human-preference-55k
---

# Mistral-7B DPO Model

This model is a version of [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) fine-tuned with Direct Preference Optimization (DPO) using LoRA adapters on the Arena Human Preference dataset.

## Training Details

- **Base Model**: teknium/OpenHermes-2.5-Mistral-7B
- **Dataset**: lmarena-ai/arena-human-preference-55k (1,000 samples)
- **Method**: Direct Preference Optimization with LoRA (r=16, alpha=32)
- **Training Steps**: 100
- **Learning Rate**: 5e-5
- **Beta**: 0.1

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model and tokenizer
base_model = AutoModelForCausalLM.from_pretrained("teknium/OpenHermes-2.5-Mistral-7B")
tokenizer = AutoTokenizer.from_pretrained("teknium/OpenHermes-2.5-Mistral-7B")

# Attach the DPO LoRA adapter on top of the base model
model = PeftModel.from_pretrained(base_model, "gCao/mistral-7b-dpo-arena")

# Generate a response
prompt = "### Instruction:\nExplain machine learning\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
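
## Training Setup (Sketch)

The hyperparameters listed under Training Details roughly correspond to a TRL `DPOTrainer` run with a PEFT LoRA config. The snippet below is a minimal, illustrative sketch rather than the exact training script: the preference-pair mapping, the raw dataset field formats, the batch size, and some argument names (which differ between TRL releases) are assumptions.

```python
# Sketch of a DPO + LoRA training run matching the reported hyperparameters.
# Dataset field names and exact TRL/DPOConfig arguments are assumptions.
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_name = "teknium/OpenHermes-2.5-Mistral-7B"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# LoRA adapter configuration (r=16, alpha=32, as listed above);
# target modules are left at the PEFT defaults since they are not documented here.
peft_config = LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM")

# DPO hyperparameters from the Training Details section.
training_args = DPOConfig(
    output_dir="mistral-7b-dpo-arena",
    beta=0.1,
    learning_rate=5e-5,
    max_steps=100,
    per_device_train_batch_size=1,  # assumption: batch size is not reported above
)

# Map the Arena preference data to DPO's prompt/chosen/rejected columns.
# The field names below (prompt, response_a/b, winner_*) are assumptions about
# the raw dataset schema; ties are dropped since they carry no preference signal.
raw = load_dataset("lmarena-ai/arena-human-preference-55k", split="train[:1000]")
raw = raw.filter(lambda ex: not ex["winner_tie"])

def to_pairs(example):
    if example["winner_model_a"]:
        chosen, rejected = example["response_a"], example["response_b"]
    else:
        chosen, rejected = example["response_b"], example["response_a"]
    return {"prompt": example["prompt"], "chosen": chosen, "rejected": rejected}

train_dataset = raw.map(to_pairs, remove_columns=raw.column_names)

trainer = DPOTrainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    processing_class=tokenizer,
    peft_config=peft_config,
)
trainer.train()
```

With a `peft_config` supplied and no explicit `ref_model`, `DPOTrainer` typically uses the frozen base weights (adapter disabled) as the implicit reference policy, so no second copy of the model needs to be loaded.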