hindi-llama

This model is a PEFT adapter fine-tuned from meta-llama/Llama-2-7b-hf on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 1.1632

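For context, assuming the reported loss is the usual mean per-token cross-entropy, a validation loss of 1.1632 corresponds to a perplexity of exp(1.1632) ≈ 3.20.
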
Model description

More information needed

Intended uses & limitations

More information needed
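
Pending details from the author, here is a minimal inference sketch for loading this adapter with PEFT. It assumes the adapter wraps a causal language model (consistent with the Llama-2 base) and that you have access to the gated meta-llama/Llama-2-7b-hf weights; the Hindi prompt is purely illustrative.

```python
# A hedged sketch, not an official usage example for this model.
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Loads the base Llama-2-7b weights and applies this adapter on top.
model = AutoPeftModelForCausalLM.from_pretrained("Aharneish/hindi-llama")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

inputs = tokenizer("नमस्ते, आप कैसे हैं?", return_tensors="pt")  # "Hello, how are you?"
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```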

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 2e-05
  • train_batch_size: 1
  • eval_batch_size: 1
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 1
  • mixed_precision_training: Native AMP
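
Below is a minimal sketch of the equivalent Transformers TrainingArguments. Only the values listed above come from this card; the output directory is a placeholder, and fp16 is an assumption for "Native AMP" (bf16 is equally plausible on recent hardware).

```python
# A hedged sketch of a matching Trainer configuration, not the original script.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="hindi-llama",      # placeholder path
    learning_rate=2e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    adam_beta1=0.9,                # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=1,
    fp16=True,                     # assumption for "Native AMP" mixed precision
)
```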

Training results

Training Loss    Epoch     Step     Validation Loss
1.5858           0.0188     1000    1.4610
1.3662           0.0375     2000    1.3469
1.3174           0.0563     3000    1.3143
1.3003           0.0750     4000    1.2895
1.2931           0.0938     5000    1.2762
1.2786           0.1125     6000    1.2649
1.2541           0.1313     7000    1.2556
1.2594           0.1500     8000    1.2481
1.2523           0.1688     9000    1.2415
1.2440           0.1876    10000    1.2348
1.2274           0.2063    11000    1.2309
1.2167           0.2251    12000    1.2257
1.2359           0.2438    13000    1.2225
1.2156           0.2626    14000    1.2191
1.2040           0.2813    15000    1.2146
1.2203           0.3001    16000    1.2109
1.2016           0.3188    17000    1.2094
1.2117           0.3376    18000    1.2057
1.2183           0.3563    19000    1.2038
1.2108           0.3751    20000    1.2005
1.2153           0.3939    21000    1.1981
1.1890           0.4126    22000    1.1968
1.1857           0.4314    23000    1.1947
1.1688           0.4501    24000    1.1914
1.2028           0.4689    25000    1.1907
1.1916           0.4876    26000    1.1893
1.1797           0.5064    27000    1.1873
1.1897           0.5251    28000    1.1848
1.1817           0.5439    29000    1.1837
1.1837           0.5627    30000    1.1826
1.1889           0.5814    31000    1.1808
1.1754           0.6002    32000    1.1798
1.1868           0.6189    33000    1.1790
1.1792           0.6377    34000    1.1780
1.1772           0.6564    35000    1.1766
1.1763           0.6752    36000    1.1755
1.1719           0.6939    37000    1.1746
1.1804           0.7127    38000    1.1724
1.1763           0.7314    39000    1.1717
1.1715           0.7502    40000    1.1717
1.1732           0.7690    41000    1.1701
1.1808           0.7877    42000    1.1692
1.1713           0.8065    43000    1.1688
1.1750           0.8252    44000    1.1678
1.1604           0.8440    45000    1.1668
1.1619           0.8627    46000    1.1658
1.1686           0.8815    47000    1.1650
1.1541           0.9002    48000    1.1647
1.1776           0.9190    49000    1.1641
1.1675           0.9378    50000    1.1640
1.1727           0.9565    51000    1.1636
1.1566           0.9753    52000    1.1633
1.1657           0.9940    53000    1.1632

Framework versions

  • PEFT 0.11.1
  • Transformers 4.41.2
  • PyTorch 2.1.2
  • Datasets 2.19.2
  • Tokenizers 0.19.1
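
To reproduce this environment, the pins above translate to `pip install peft==0.11.1 transformers==4.41.2 torch==2.1.2 datasets==2.19.2 tokenizers==0.19.1` (assuming the standard PyPI package names; PyTorch ships as `torch`).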