una-llama-7b

UNA: Uniform Neural Alignment. It improves the performance of the pre-trained base LLaMA (1) 7B by 6.75%.

This model is a fine-tuned version of huggyllama/llama-7b. It achieves the following results:

  • Loss: 0.5529
  • Rewards/chosen: 0.3633
  • Rewards/rejected: -0.1873
  • Rewards/accuracies: 0.7230
  • Rewards/margins: 0.5506
  • Logps/rejected: -217.7784
  • Logps/chosen: -235.0354
  • Logits/rejected: -0.7752
  • Logits/chosen: -0.5259
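
For context, the reported margin is simply the gap between the chosen and rejected rewards, as is typical for DPO-style preference metrics. A minimal sketch checking that arithmetic against the numbers above (variable names are illustrative, not from the training code):

```python
# Hypothetical recomputation of the reported reward margin:
# Rewards/margins = Rewards/chosen - Rewards/rejected.
rewards_chosen = 0.3633
rewards_rejected = -0.1873
rewards_margins = rewards_chosen - rewards_rejected
print(f"{rewards_margins:.4f}")  # 0.5506
```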

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Framework versions

  • Transformers 4.35.0-UNA
  • Pytorch 2.1.0
  • Datasets 2.14.6
  • Tokenizers 0.14.1
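
A minimal usage sketch, assuming the checkpoint is pulled from the fblgit/una-llama-7b repository with the standard Transformers API and loaded in FP16 to match the tensor type listed below; the prompt and generation settings are illustrative, not tuned values:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the fine-tuned checkpoint in FP16 (matching the listed tensor type).
model_id = "fblgit/una-llama-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Illustrative prompt; generation parameters are assumptions, not tuned values.
inputs = tokenizer("What is uniform neural alignment?", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```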
Safetensors

  • Model size: 6.74B params
  • Tensor type: FP16
