---
license: other
base_model: huggyllama/llama-7b
tags:
  - alignment-handbook
  - generated_from_trainer
model-index:
  - name: una-llama-7b
    results: []
---

# una-llama-7b

UNA: Uniform Neural Alignment. It improves the performance of the pre-trained base LLaMA (1) 7B by 6.75%.

This model is a fine-tuned version of [huggyllama/llama-7b](https://huggingface.co/huggyllama/llama-7b), with the following evaluation results:

- Loss: 0.5529
- Rewards/chosen: 0.3633
- Rewards/rejected: -0.1873
- Rewards/accuracies: 0.7230
- Rewards/margins: 0.5506
- Logps/rejected: -217.7784
- Logps/chosen: -235.0354
- Logits/rejected: -0.7752
- Logits/chosen: -0.5259
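The `Rewards/*` metrics above follow the DPO-style convention used by preference-alignment trainers (consistent with the `alignment-handbook` tag), where the reward margin is the chosen reward minus the rejected reward. A minimal sanity check of the reported figures, assuming that convention:

```python
# Reported evaluation metrics (from the table above).
rewards_chosen = 0.3633
rewards_rejected = -0.1873

# In DPO-style training, Rewards/margins = Rewards/chosen - Rewards/rejected.
margin = rewards_chosen - rewards_rejected
print(round(margin, 4))  # 0.5506, matching the reported Rewards/margins
```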

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Framework versions

- Transformers 4.35.0-UNA
- Pytorch 2.1.0
- Datasets 2.14.6
- Tokenizers 0.14.1