---
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
  - generated_from_trainer
datasets:
  - tyzhu/lmind_hotpot_train8000_eval7405_v1_doc_qa
metrics:
  - accuracy
model-index:
  - name: >-
      lmind_hotpot_train8000_eval7405_v1_doc_qa_meta-llama_Llama-2-7b-hf_5e-5_lora2
    results:
      - task:
          name: Causal Language Modeling
          type: text-generation
        dataset:
          name: tyzhu/lmind_hotpot_train8000_eval7405_v1_doc_qa
          type: tyzhu/lmind_hotpot_train8000_eval7405_v1_doc_qa
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.5800506329113924
---

# lmind_hotpot_train8000_eval7405_v1_doc_qa_meta-llama_Llama-2-7b-hf_5e-5_lora2

This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the [tyzhu/lmind_hotpot_train8000_eval7405_v1_doc_qa](https://huggingface.co/datasets/tyzhu/lmind_hotpot_train8000_eval7405_v1_doc_qa) dataset. It achieves the following results on the evaluation set:

- Loss: 3.1776
- Accuracy: 0.5801
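
The run name ends in `lora2`, which suggests this checkpoint is a LoRA adapter rather than full model weights. A minimal usage sketch under that assumption, attaching the adapter to the base model via PEFT (the adapter repo id and the prompt format are assumptions, not documented on this card):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-hf"
# Assumed: this card's repo hosts the LoRA adapter weights.
adapter_id = "tyzhu/lmind_hotpot_train8000_eval7405_v1_doc_qa_meta-llama_Llama-2-7b-hf_5e-5_lora2"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)  # attach adapter to the frozen base

# Prompt format is an assumption; the card does not document one.
prompt = "Question: Which magazine was started first, Arthur's Magazine or First for Women?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```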

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
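
While the card leaves this section blank, the dataset id in the metadata points to a Hub dataset that can be inspected directly. A short sketch (the split names are an assumption):

```python
from datasets import load_dataset

# Load the QA dataset named in the card's metadata.
ds = load_dataset("tyzhu/lmind_hotpot_train8000_eval7405_v1_doc_qa")
print(ds)              # lists the available splits and column names
print(ds["train"][0])  # inspect one example (assumes a "train" split exists)
```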

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 20.0
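
For reference, a minimal sketch of how the values above map onto `transformers.TrainingArguments`; the LoRA configuration itself (rank, alpha, target modules) is not listed on the card, so it is omitted here:

```python
from transformers import TrainingArguments

# output_dir is a placeholder; the Adam betas/epsilon above match the
# TrainingArguments defaults, so they are not set explicitly.
training_args = TrainingArguments(
    output_dir="lora2-hotpot-doc-qa",  # placeholder name
    learning_rate=5e-5,
    per_device_train_batch_size=2,     # 2 x 4 GPUs x 4 accumulation = 32 effective
    per_device_eval_batch_size=2,      # 2 x 4 GPUs = 8 effective
    gradient_accumulation_steps=4,
    lr_scheduler_type="constant",      # note: the "constant" scheduler ignores
    warmup_ratio=0.05,                 # warmup; "constant_with_warmup" would use it
    num_train_epochs=20.0,
    seed=42,
)
```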

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.2112        | 1.0   | 1089  | 1.8415          | 0.5941   |
| 1.1804        | 2.0   | 2178  | 1.8224          | 0.5972   |
| 1.1234        | 3.0   | 3267  | 1.8223          | 0.5970   |
| 1.0733        | 4.0   | 4357  | 1.8320          | 0.5968   |
| 1.0268        | 5.0   | 5446  | 1.8681          | 0.5968   |
| 0.9619        | 6.0   | 6535  | 1.9226          | 0.5946   |
| 0.905         | 7.0   | 7624  | 2.0011          | 0.5925   |
| 0.8459        | 8.0   | 8714  | 2.1081          | 0.5894   |
| 0.7885        | 9.0   | 9803  | 2.2146          | 0.5881   |
| 0.7547        | 10.0  | 10892 | 2.3650          | 0.5861   |
| 0.7093        | 11.0  | 11981 | 2.4565          | 0.5852   |
| 0.6557        | 12.0  | 13071 | 2.5679          | 0.5823   |
| 0.609         | 13.0  | 14160 | 2.5864          | 0.5835   |
| 0.5777        | 14.0  | 15249 | 2.7236          | 0.5821   |
| 0.556         | 15.0  | 16338 | 2.8375          | 0.5817   |
| 0.5114        | 16.0  | 17428 | 2.9108          | 0.5816   |
| 0.4872        | 17.0  | 18517 | 2.9641          | 0.5807   |
| 0.4519        | 18.0  | 19606 | 3.0177          | 0.5807   |
| 0.4173        | 19.0  | 20695 | 3.0646          | 0.5801   |
| 0.3947        | 20.0  | 21780 | 3.1776          | 0.5801   |

### Framework versions

- Transformers 4.34.0
- PyTorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.14.1