zephyr-7b-dpo-full-alpha_0.5_batch128

This model is a version of alignment-handbook/zephyr-7b-sft-full fine-tuned with Direct Preference Optimization (DPO) on the HuggingFaceH4/ultrafeedback_binarized dataset. It achieves the following results on the evaluation set (the Rewards/* metrics are explained after the list):

  • Loss: 0.1119
  • Rewards/chosen: -0.5472
  • Rewards/rejected: -1.1549
  • Rewards/accuracies: 0.75
  • Rewards/margins: 0.6076
  • Logps/rejected: -375.6880
  • Logps/chosen: -336.6998
  • Logits/rejected: -0.4187
  • Logits/chosen: -0.8032
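
As context for the Rewards/* metrics above, here is a sketch of the standard DPO objective (the exact variant used for this run is not documented on the card): in trl's convention, each response's reward is the β-scaled log-probability ratio between the trained policy and the frozen SFT reference, and Rewards/margins is the chosen-minus-rejected gap the loss pushes apart.

```latex
% Implicit DPO reward of response y for prompt x
% (\pi_\theta = trained policy, \pi_{\mathrm{ref}} = frozen SFT reference)
r_\theta(x, y) = \beta \log \frac{\pi_\theta(y \mid x)}{\pi_{\mathrm{ref}}(y \mid x)}

% DPO loss over preference pairs (y_w = chosen, y_l = rejected);
% "Rewards/margins" above corresponds to r_\theta(x, y_w) - r_\theta(x, y_l)
\mathcal{L}_{\mathrm{DPO}}
  = -\,\mathbb{E}_{(x,\, y_w,\, y_l)}\!\left[
      \log \sigma\big(r_\theta(x, y_w) - r_\theta(x, y_l)\big)
    \right]
```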

Model description

More information needed

Intended uses & limitations

More information needed
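
No usage guidance is documented, so the following is only a minimal inference sketch. It assumes the chat template inherited from the Zephyr SFT base is intact; the repo id is taken from this card.

```python
import torch
from transformers import pipeline

# Load the DPO-tuned checkpoint; device_map="auto" shards it across available GPUs.
pipe = pipeline(
    "text-generation",
    model="YeongminKim/zephyr-7b-dpo-full-alpha_0.5_batch128",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# transformers' text-generation pipeline accepts chat messages directly
# and applies the model's chat template under the hood.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize Direct Preference Optimization in two sentences."},
]
out = pipe(messages, max_new_tokens=128, do_sample=False)
print(out[0]["generated_text"][-1]["content"])  # assistant reply
```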

Training and evaluation data

More information needed
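
Beyond the dataset name, no details are given. As a minimal sketch for inspecting the preference pairs (split and field names follow the HuggingFaceH4/ultrafeedback_binarized dataset card; verify them before relying on this):

```python
from datasets import load_dataset

# The binarized UltraFeedback preference split used for DPO-style training.
ds = load_dataset("HuggingFaceH4/ultrafeedback_binarized", split="train_prefs")

example = ds[0]
print(example.keys())  # expect prompt, chosen, rejected, score_* fields
# "chosen"/"rejected" are full conversations; the last turn is the assistant reply.
print(example["chosen"][-1]["content"][:200])
```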

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a trl sketch reproducing them follows the list):

  • learning_rate: 5e-07
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 4
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 128
  • total_eval_batch_size: 32
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 1
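
The card does not name the training stack. The alignment-handbook recipes this checkpoint descends from typically use trl's DPOTrainer, so below is a minimal, assumed sketch wiring the hyperparameters above into that API. Chat-template preprocessing of the dataset is omitted, and beta (the DPO temperature) is not recorded on the card.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base_id = "alignment-handbook/zephyr-7b-sft-full"
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="bfloat16")
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Raw preference pairs; in practice the chosen/rejected conversations are
# flattened to strings with the chat template before being handed to trl.
train_ds = load_dataset("HuggingFaceH4/ultrafeedback_binarized", split="train_prefs")

args = DPOConfig(
    output_dir="zephyr-7b-dpo-full",
    learning_rate=5e-7,
    per_device_train_batch_size=8,   # x 4 GPUs x 4 accumulation steps = 128
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,
    num_train_epochs=1,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    seed=42,
    bf16=True,
    # beta defaults to 0.1 in trl; the value used for this run is unknown.
)

trainer = DPOTrainer(
    model=model,
    ref_model=None,  # trl clones the policy as the frozen reference when None
    args=args,
    train_dataset=train_ds,
    tokenizer=tokenizer,
)
trainer.train()
```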

Training results

| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| 0.8666 | 0.2093 | 100 | 0.1307 | -0.4641 | -1.0037 | 0.7282 | 0.5395 | -360.5680 | -328.3875 | -0.7152 | -1.0291 |
| 2.5975 | 0.4186 | 200 | 0.1221 | -0.5115 | -1.1127 | 0.7440 | 0.6012 | -371.4751 | -333.1310 | -0.1960 | -0.5634 |
| 0.0974 | 0.6279 | 300 | 0.1175 | -0.5182 | -1.0980 | 0.7540 | 0.5798 | -370.0028 | -333.7931 | -0.6215 | -0.9932 |
| 0.0828 | 0.8373 | 400 | 0.1121 | -0.5476 | -1.1540 | 0.7480 | 0.6065 | -375.6061 | -336.7349 | -0.4274 | -0.8105 |

Framework versions

  • Transformers 4.44.2
  • PyTorch 2.2.1+cu118
  • Datasets 2.14.7
  • Tokenizers 0.19.1