---
license: other
base_model: apple/OpenELM-270M
tags:
- trl
- orpo
- generated_from_trainer
model-index:
- name: ft-openelm-270m-ultrafeedback
  results: []
---

# ft-openelm-270m-ultrafeedback

This model is a fine-tuned version of [apple/OpenELM-270M](https://huggingface.co/apple/OpenELM-270M) on the HuggingFaceH4/ultrafeedback_binarized dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6455
- Rewards/chosen: -0.1995
- Rewards/rejected: -0.2029
- Rewards/accuracies: 0.5050
- Rewards/margins: 0.0035
- Logps/rejected: -2.0293
- Logps/chosen: -1.9941
- Logits/rejected: -5.7383
- Logits/chosen: -6.1055
- Nll Loss: 1.5752
- Log Odds Ratio: -0.7037
- Log Odds Chosen: 0.0445

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 8e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | Nll Loss | Log Odds Ratio | Log Odds Chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|:--------:|:--------------:|:---------------:|
| 1.7595        | 0.53  | 100  | 1.6455          | -0.1995        | -0.2029          | 0.5050             | 0.0035          | -2.0293        | -1.9941      | -5.7383         | -6.1055       | 1.5752   | -0.7037        | 0.0445          |

### Framework versions

- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
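
### Reproducing the run (sketch)

Per the `orpo` tag and the log-odds metrics above, this checkpoint was trained with ORPO (odds ratio preference optimization), which TRL implements as `ORPOTrainer`. The sketch below shows how a run with the listed hyperparameters could look. It is not the author's actual training script: the tokenizer choice, the dataset-to-text preprocessing, and the `max_length`/`max_prompt_length` values are assumptions.

```python
# Minimal ORPO sketch matching the hyperparameters above (TRL ~0.8).
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

model = AutoModelForCausalLM.from_pretrained(
    "apple/OpenELM-270M",
    trust_remote_code=True,  # OpenELM ships custom modeling code
)
# OpenELM has no bundled tokenizer; Apple's cards pair it with Llama 2's
# (gated: requires accepting Meta's license on the Hub).
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
tokenizer.pad_token = tokenizer.eos_token

# ultrafeedback_binarized stores chosen/rejected as [user, assistant] message
# lists; ORPOTrainer expects plain-text prompt/chosen/rejected columns.
def to_text(example):
    return {
        "prompt": example["prompt"],
        "chosen": example["chosen"][-1]["content"],
        "rejected": example["rejected"][-1]["content"],
    }

ds = load_dataset("HuggingFaceH4/ultrafeedback_binarized")
train = ds["train_prefs"].map(to_text, remove_columns=ds["train_prefs"].column_names)
evaluation = ds["test_prefs"].map(to_text, remove_columns=ds["test_prefs"].column_names)

# The default AdamW betas/epsilon already match the optimizer listed above.
args = ORPOConfig(
    output_dir="ft-openelm-270m-ultrafeedback",
    learning_rate=8e-6,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,   # effective train batch size 16
    num_train_epochs=1,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    seed=42,
    evaluation_strategy="steps",
    eval_steps=100,                  # matches the step-100 eval in the table
    max_length=1024,                 # assumed; not reported on the card
    max_prompt_length=512,           # assumed; not reported on the card
)

trainer = ORPOTrainer(
    model=model,
    args=args,
    train_dataset=train,
    eval_dataset=evaluation,
    tokenizer=tokenizer,
)
trainer.train()
```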
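
## Usage (sketch)

A minimal generation example for this 270M causal LM. The repo id is a placeholder for wherever this checkpoint is hosted, and the tokenizer choice follows Apple's OpenELM cards; both are assumptions, not details from this card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/ft-openelm-270m-ultrafeedback"  # hypothetical repo id
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
# Tokenizer choice follows Apple's OpenELM cards (an assumption for this repo).
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

inputs = tokenizer("The capital of France is", return_tensors="pt")
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```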