---
base_model: meta-llama/Llama-2-7b-hf
tags:
- generated_from_trainer
model-index:
- name: Llama2-7bn-xsum-adapter
  results: []
datasets:
- EdinburghNLP/xsum
language:
- en
pipeline_tag: summarization
metrics:
- rouge
---

# Llama2-7bn-xsum-adapter

This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf), trained on the [XSum](https://huggingface.co/datasets/EdinburghNLP/xsum) dataset with a causal language modeling objective. All implementation details are available in the [GitHub project](https://github.com/ernlavr/llamarizer).

## Weights & Biases Training and Evaluation Documentation

Weights & Biases runs for training and evaluation are available for a detailed overview: see the [training run](https://wandb.ai/ernlavr/adv_nlp2023/runs/yk6ytvv2) and the [evaluation run](https://wandb.ai/ernlavr/adv_nlp2023/runs/f41oo2c6?workspace=user-ernestslavrinovics).

Summary table of final metrics:

| Metric | rouge1 | rouge2 | rougeL | FactCC | ANLI  | SummaC | BARTScore |
|--------|--------|--------|--------|--------|-------|--------|-----------|
| Mean   | 0.18   | 0.033  | 0.126  | 0.188  | 0.408 | 0.658  | -3.713    |
| Std    | 0.09   | 0.049  | 0.067  | 0.317  | 0.462 | 0.247  | 0.831     |

## Training procedure

Causal language modeling, with each article and its summary nested in a prompt template: `Summarize this article: '<article>'; Summary: <summary>` (the article text goes between the quotes and the reference summary follows `Summary:`).

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- lr_scheduler_warmup_steps: 450.5
- num_epochs: 3
- mixed_precision_training: Native AMP

### Framework versions

- Transformers 4.35.0
- PyTorch 2.0.1
- Datasets 2.14.6
- Tokenizers 0.14.1
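
### Hyperparameters as code (sketch)

For orientation, the hyperparameters above map onto `transformers.TrainingArguments` roughly as follows. This is an illustration of the logged configuration, not the project's exact training script: `output_dir` is a placeholder, and the Adam betas/epsilon listed above are already the Trainer defaults, so they are not set explicitly.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="llama2-7bn-xsum-adapter",  # placeholder
    learning_rate=1e-3,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,   # the logged 450.5 warmup steps follow from this ratio
    num_train_epochs=3,
    fp16=True,          # "Native AMP" mixed precision
)
```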
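
## How to use

A minimal usage sketch, assuming the adapter loads with the [PEFT](https://github.com/huggingface/peft) library on top of the base model; the adapter repository id `ernlavr/Llama2-7bn-xsum-adapter` and the generation settings are illustrative placeholders, not values confirmed by the project. The prompt mirrors the training template above.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-hf"
adapter_id = "ernlavr/Llama2-7bn-xsum-adapter"  # placeholder: replace with the actual adapter repo id

# Load the base model in half precision and attach the adapter weights.
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"  # device_map requires `accelerate`
)
model = PeftModel.from_pretrained(model, adapter_id)
model.eval()

# Same prompt template as used during training.
article = "Your article text here."
prompt = f"Summarize this article: '{article}'; Summary: "

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=64)

# Decode only the newly generated tokens, i.e. drop the prompt.
summary = tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(summary)
```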