
Quantization made by Richard Erkhov.

[Github] [Discord] [Request more models]

suzume-llama-3-8B-multilingual-orpo-borda-top25 - GGUF

| Name | Quant method | Size |
|------|--------------|------|
| suzume-llama-3-8B-multilingual-orpo-borda-top25.Q2_K.gguf | Q2_K | 2.96GB |
| suzume-llama-3-8B-multilingual-orpo-borda-top25.IQ3_XS.gguf | IQ3_XS | 3.28GB |
| suzume-llama-3-8B-multilingual-orpo-borda-top25.IQ3_S.gguf | IQ3_S | 3.43GB |
| suzume-llama-3-8B-multilingual-orpo-borda-top25.Q3_K_S.gguf | Q3_K_S | 3.41GB |
| suzume-llama-3-8B-multilingual-orpo-borda-top25.IQ3_M.gguf | IQ3_M | 3.52GB |
| suzume-llama-3-8B-multilingual-orpo-borda-top25.Q3_K.gguf | Q3_K | 3.74GB |
| suzume-llama-3-8B-multilingual-orpo-borda-top25.Q3_K_M.gguf | Q3_K_M | 3.74GB |
| suzume-llama-3-8B-multilingual-orpo-borda-top25.Q3_K_L.gguf | Q3_K_L | 4.03GB |
| suzume-llama-3-8B-multilingual-orpo-borda-top25.IQ4_XS.gguf | IQ4_XS | 4.18GB |
| suzume-llama-3-8B-multilingual-orpo-borda-top25.Q4_0.gguf | Q4_0 | 4.34GB |
| suzume-llama-3-8B-multilingual-orpo-borda-top25.IQ4_NL.gguf | IQ4_NL | 4.38GB |
| suzume-llama-3-8B-multilingual-orpo-borda-top25.Q4_K_S.gguf | Q4_K_S | 4.37GB |
| suzume-llama-3-8B-multilingual-orpo-borda-top25.Q4_K.gguf | Q4_K | 4.58GB |
| suzume-llama-3-8B-multilingual-orpo-borda-top25.Q4_K_M.gguf | Q4_K_M | 4.58GB |
| suzume-llama-3-8B-multilingual-orpo-borda-top25.Q4_1.gguf | Q4_1 | 4.78GB |
| suzume-llama-3-8B-multilingual-orpo-borda-top25.Q5_0.gguf | Q5_0 | 5.21GB |
| suzume-llama-3-8B-multilingual-orpo-borda-top25.Q5_K_S.gguf | Q5_K_S | 5.21GB |
| suzume-llama-3-8B-multilingual-orpo-borda-top25.Q5_K.gguf | Q5_K | 5.34GB |
| suzume-llama-3-8B-multilingual-orpo-borda-top25.Q5_K_M.gguf | Q5_K_M | 5.34GB |
| suzume-llama-3-8B-multilingual-orpo-borda-top25.Q5_1.gguf | Q5_1 | 5.65GB |
| suzume-llama-3-8B-multilingual-orpo-borda-top25.Q6_K.gguf | Q6_K | 6.14GB |
| suzume-llama-3-8B-multilingual-orpo-borda-top25.Q8_0.gguf | Q8_0 | 7.95GB |
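
A minimal usage sketch, assuming `huggingface_hub` and `llama-cpp-python` are installed; the repo and file names come from this page, while the choice of Q4_K_M and the generation settings are only illustrative:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one quantized file from this repo (Q4_K_M is a common quality/size trade-off).
gguf_path = hf_hub_download(
    repo_id="RichardErkhov/lightblue_-_suzume-llama-3-8B-multilingual-orpo-borda-top25-gguf",
    filename="suzume-llama-3-8B-multilingual-orpo-borda-top25.Q4_K_M.gguf",
)

# Load it with llama.cpp; n_ctx matches the 8192 sequence length used during training.
llm = Llama(model_path=gguf_path, n_ctx=8192)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "日本の首都はどこですか？"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```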

Original model description:

license: cc-by-nc-4.0
tags:
- generated_from_trainer
base_model: lightblue/suzume-llama-3-8B-multilingual
model-index:
- name: workspace/llm_training/axolotl/llama3-multilingual-orpo/output_mitsu_top25_borda
  results: []

Suzume ORPO

Suzume with Mitsu - a Japanese tree sparrow with honey on it

[Paper] [Dataset]

This is Suzume ORPO, an ORPO-trained fine-tune of the lightblue/suzume-llama-3-8B-multilingual model using our lightblue/mitsu dataset.

We have trained several versions of this model using ORPO, so we recommend using the best-performing model from our tests, lightblue/suzume-llama-3-8B-multilingual-orpo-borda-half.

Note that this model has a non-commercial license, as we used the Command R and Command R+ models to generate the training data for this model (lightblue/mitsu).

We are currently working on developing a commercially usable model, so stay tuned for that!
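
For background (this is the standard ORPO formulation, not something stated on this card): ORPO adds an odds-ratio preference term to the usual supervised fine-tuning loss over the preferred response y_w versus the rejected response y_l, weighted by a coefficient λ that corresponds to the `orpo_alpha: 0.1` value in the training configuration below:

```latex
\mathcal{L}_{\mathrm{ORPO}}
  = \mathbb{E}_{(x,\, y_w,\, y_l)}\!\left[ \mathcal{L}_{\mathrm{SFT}} + \lambda \, \mathcal{L}_{\mathrm{OR}} \right],
\qquad
\mathcal{L}_{\mathrm{OR}}
  = -\log \sigma\!\left( \log \frac{\mathrm{odds}_\theta(y_w \mid x)}{\mathrm{odds}_\theta(y_l \mid x)} \right),
\qquad
\mathrm{odds}_\theta(y \mid x) = \frac{P_\theta(y \mid x)}{1 - P_\theta(y \mid x)}
```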

Model list

We have ORPO-trained the following models using different proportions of the lightblue/mitsu dataset:

- lightblue/suzume-llama-3-8B-multilingual-orpo-borda-full
- lightblue/suzume-llama-3-8B-multilingual-orpo-borda-top75
- lightblue/suzume-llama-3-8B-multilingual-orpo-borda-half
- lightblue/suzume-llama-3-8B-multilingual-orpo-borda-top25

Model results

We compare the MT-Bench scores across six languages for our four ORPO-trained models, as well as some baselines:

| MT-Bench language | meta-llama/Meta-Llama-3-8B-Instruct | Nexusflow/Starling-LM-7B-beta | gpt-3.5-turbo | lightblue/suzume-llama-3-8B-multilingual | lightblue/suzume-llama-3-8B-multilingual-orpo-borda-full | lightblue/suzume-llama-3-8B-multilingual-orpo-borda-top75 | lightblue/suzume-llama-3-8B-multilingual-orpo-borda-half | lightblue/suzume-llama-3-8B-multilingual-orpo-borda-top25 |
|---|---|---|---|---|---|---|---|---|
| Chinese 🇨🇳 | NaN | 6.97 | 7.55 | 7.11 | 7.65 | 7.77 | 7.74 | 7.44 |
| English 🇺🇸 | 7.98 | 7.92 | 8.26 | 7.73 | 7.98 | 7.94 | 7.98 | 8.22 |
| French 🇫🇷 | NaN | 7.29 | 7.74 | 7.66 | 7.84 | 7.46 | 7.78 | 7.81 |
| German 🇩🇪 | NaN | 6.99 | 7.68 | 7.26 | 7.28 | 7.64 | 7.7 | 7.71 |
| Japanese 🇯🇵 | NaN | 6.22 | 7.84 | 6.56 | 7.2 | 7.12 | 7.34 | 7.04 |
| Russian 🇷🇺 | NaN | 8.28 | 7.94 | 8.19 | 8.3 | 8.74 | 8.94 | 8.81 |

We can see a noticeable improvement in most languages compared to the base model. We also find that our ORPO-trained models achieve the highest score of all the models we evaluated in a number of languages.

Training data

We trained this model using the lightblue/mitsu_top25_borda dataset (see the training configuration below).
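
A quick way to inspect that preference data is with the `datasets` library; a minimal sketch, assuming the dataset exposes a standard "train" split (an assumption, not stated on this card):

```python
from datasets import load_dataset

# Load the preference dataset used for ORPO training.
ds = load_dataset("lightblue/mitsu_top25_borda", split="train")

print(ds)     # features and number of rows
print(ds[0])  # one raw preference example
```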

Training configuration

Built with Axolotl

See axolotl config

axolotl version: 0.4.0

base_model: lightblue/suzume-llama-3-8B-multilingual
model_type: LlamaForCausalLM
tokenizer_type: AutoTokenizer  # PreTrainedTokenizerFast

load_in_8bit: false
load_in_4bit: false
strict: false

rl: orpo
orpo_alpha: 0.1
remove_unused_columns: false

chat_template: chatml
datasets:
  - path: lightblue/mitsu_top25_borda
    type: orpo.chat_template
    conversation: llama-3
dataset_prepared_path: /workspace/llm_training/axolotl/llama3-multilingual-orpo/prepared_mitsu_top25_borda
val_set_size: 0.02
output_dir: /workspace/llm_training/axolotl/llama3-multilingual-orpo/output_mitsu_top25_borda

sequence_len: 8192
sample_packing: false
pad_to_sequence_len: true

use_wandb: true
wandb_project: axolotl
wandb_entity: peterd
wandb_name: mitsu_top25_borda

gradient_accumulation_steps: 8
micro_batch_size: 1
num_epochs: 1
optimizer: paged_adamw_8bit
lr_scheduler: cosine
learning_rate: 8e-6

train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false

gradient_checkpointing: true
gradient_checkpointing_kwargs:
  use_reentrant: false
early_stopping_patience:
resume_from_checkpoint:
logging_steps: 1
xformers_attention:
flash_attention: true

warmup_steps: 10
evals_per_epoch: 20
eval_table_size:
saves_per_epoch: 1
debug:
deepspeed: /workspace/axolotl/deepspeed_configs/zero3_bf16.json
weight_decay: 0.0
special_tokens:
  pad_token: <|end_of_text|>
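
The dataset section of the config above formats examples with the llama-3 conversation template (`conversation: llama-3`). As a usage sketch for the full-precision (non-GGUF) model via transformers, assuming the tokenizer ships a chat template and you have enough GPU memory for bf16 weights:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lightblue/suzume-llama-3-8B-multilingual-orpo-borda-top25"  # original safetensors repo
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Bonjour, peux-tu te présenter en une phrase ?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```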

workspace/llm_training/axolotl/llama3-multilingual-orpo/output_mitsu_top25_borda

This model is a fine-tuned version of lightblue/suzume-llama-3-8B-multilingual on the lightblue/mitsu_top25_borda dataset. It achieves the following results on the evaluation set:

  • Loss: 0.0818

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 8e-06
  • train_batch_size: 1
  • eval_batch_size: 1
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 4
  • gradient_accumulation_steps: 8
  • total_train_batch_size: 32
  • total_eval_batch_size: 4
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 10
  • num_epochs: 1
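
For reference, the total train batch size reported above is simply the product of the per-device micro batch size, the gradient accumulation steps, and the number of devices:

```latex
\text{total\_train\_batch\_size}
  = \text{micro\_batch\_size} \times \text{gradient\_accumulation\_steps} \times \text{num\_devices}
  = 1 \times 8 \times 4 = 32
```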

Training results

| Training Loss | Epoch | Step | Validation Loss |
|---|---|---|---|
| 7.6328 | 0.05 | 1 | 7.7812 |
| 7.7158 | 0.1 | 2 | 7.2589 |
| 7.2588 | 0.15 | 3 | 4.0580 |
| 4.0068 | 0.19 | 4 | 2.4598 |
| 2.4438 | 0.24 | 5 | 0.6504 |
| 0.6586 | 0.29 | 6 | 0.1129 |
| 0.1235 | 0.34 | 7 | 0.1066 |
| 0.1273 | 0.39 | 8 | 0.1041 |
| 0.1076 | 0.44 | 9 | 0.0987 |
| 0.1009 | 0.48 | 10 | 0.0940 |
| 0.1172 | 0.53 | 11 | 0.0885 |
| 0.1016 | 0.58 | 12 | 0.0867 |
| 0.1088 | 0.63 | 13 | 0.0859 |
| 0.095 | 0.68 | 14 | 0.0846 |
| 0.1101 | 0.73 | 15 | 0.0839 |
| 0.0969 | 0.78 | 16 | 0.0832 |
| 0.0864 | 0.82 | 17 | 0.0825 |
| 0.0918 | 0.87 | 18 | 0.0821 |
| 0.0927 | 0.92 | 19 | 0.0819 |
| 0.0967 | 0.97 | 20 | 0.0818 |

Framework versions

  • Transformers 4.38.2
  • Pytorch 2.2.1+cu121
  • Datasets 2.18.0
  • Tokenizers 0.15.0

How to cite

@article{devine2024sure,
  title={Are You Sure? Rank Them Again: Repeated Ranking For Better Preference Datasets},
  author={Devine, Peter},
  journal={arXiv preprint arXiv:2405.18952},
  year={2024}
}

Developer

Peter Devine - (ptrdvn)
