# llama2-7b-english-to-hinglish-translation
This model is a fine-tuned version of NousResearch/Llama-2-7b-hf for English-to-Hinglish translation; the training dataset is not specified in this card. It achieves the following results on the evaluation set (a sketch of how such metrics can be computed follows this list):

- Loss: 0.7508
- Rouge scores: rouge1 = 0.9208, rouge2 = 0.8268, rougeL = 0.8634, rougeLsum = 0.9207
- Bleu scores: 0.9431, 0.9290, 0.9111, 0.8922
- Gen Len: 2048.0
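The evaluation script behind these numbers is not published with the card. For reference, a minimal sketch of how ROUGE and BLEU are typically computed with the Hugging Face `evaluate` library; the example sentences are placeholders, not items from the evaluation set:

```python
# Hypothetical metric computation; the card's actual evaluation script is
# not published, so this only illustrates how such scores are obtained.
import evaluate

predictions = ["Aap aaj kaise hain?"]  # example model outputs (placeholder)
references = ["Aap aaj kaise ho?"]     # example gold Hinglish (placeholder)

rouge = evaluate.load("rouge")
bleu = evaluate.load("bleu")

# ROUGE returns rouge1/rouge2/rougeL/rougeLsum, matching the keys above.
print(rouge.compute(predictions=predictions, references=references))

# BLEU returns an overall score plus per-n-gram precisions.
print(bleu.compute(predictions=predictions,
                   references=[[r] for r in references]))
```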
## Model description

Judging by the framework versions below (PEFT 0.8.2), this repository appears to hold a PEFT adapter fine-tuned on top of NousResearch/Llama-2-7b-hf for English-to-Hinglish translation; the card provides no further details.
## Intended uses & limitations

The model is intended for translating English text into Hinglish (Hindi-English code-mixed text written in Latin script); the card does not document further restrictions or known limitations. A minimal usage sketch follows.
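The sketch below assumes this repository is a PEFT adapter that must be loaded on top of the base model; the prompt format is also an assumption, since the card does not document the training template:

```python
# Minimal inference sketch. Assumes this repo is a PEFT adapter for
# NousResearch/Llama-2-7b-hf; the prompt wording is a guess, as the card
# does not document the format used during fine-tuning.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "NousResearch/Llama-2-7b-hf"
adapter_id = "DrishtiSharma/llama2-7b-english-to-hinglish-translation"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)

prompt = "Translate English to Hinglish: How are you doing today?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```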
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a sketch mapping them onto a Trainer configuration follows the list):
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 2
- mixed_precision_training: Native AMP
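A minimal sketch of how the values above map onto `transformers.TrainingArguments`; the actual training script is not published, and the `LoraConfig` settings shown are placeholders rather than values from this run:

```python
# Hypothetical reconstruction of the training configuration from the
# hyperparameters listed above. The LoRA settings are placeholders; the
# card does not state which PEFT configuration was used.
from peft import LoraConfig
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="llama2-7b-english-to-hinglish-translation",
    learning_rate=1e-4,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,  # effective train batch size: 2 * 2 = 4
    num_train_epochs=2,
    lr_scheduler_type="cosine",
    seed=42,
    fp16=True,                      # "Native AMP" mixed precision
)

lora_config = LoraConfig(           # placeholder values, not from the card
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
```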
### Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge (1 / 2 / L / Lsum) | Bleu Scores | Gen Len |
|:---|:---|:---|:---|:---|:---|:---|
| 0.8283 | 1.0 | 500 | 0.7644 | 0.9217 / 0.8269 / 0.8617 / 0.9216 | 0.9428 / 0.9289 / 0.9110 / 0.8920 | 2048.0 |
| 0.5824 | 2.0 | 1000 | 0.7508 | 0.9208 / 0.8268 / 0.8634 / 0.9207 | 0.9431 / 0.9290 / 0.9111 / 0.8922 | 2048.0 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.38.0.dev0
- Pytorch 2.1.0+cu118
- Datasets 2.16.2.dev0
- Tokenizers 0.15.1