tpo-alignment/Mistral-Instruct-7B-TPO-y2-v0.1
Safetensors · Dataset: princeton-nlp/mistral-instruct-ultrafeedback · mistral · alignment-handbook · Generated from Trainer · arXiv: 2405.16681 · License: MIT
Mistral-Instruct-7B-TPO-y2-v0.1
Commit History
Update config.json · cf7635f · verified · sahsaeedi committed on Jan 23
Upload tokenizer · 7103645 · verified · sahsaeedi committed on Jan 23
Upload MistralForCausalLM · 6decec5 · verified · sahsaeedi committed on Jan 23
initial commit · af8d508 · verified · sahsaeedi committed on Jan 23
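
The "Upload MistralForCausalLM" and "Upload tokenizer" commits indicate the repository holds a standard transformers checkpoint. A minimal loading sketch, assuming the usual Auto classes and the tokenizer's chat template apply (the prompt text below is illustrative, not taken from the model card):

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tpo-alignment/Mistral-Instruct-7B-TPO-y2-v0.1"

# Load the tokenizer and model weights uploaded in the commits above.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

# Format a single-turn prompt with the tokenizer's chat template and generate a reply.
messages = [{"role": "user", "content": "Explain preference optimization in one paragraph."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))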