---
base_model: meta-llama/Llama-2-7b-hf
tags:
- generated_from_trainer
model-index:
- name: llama2-7bn-xsum-cnn-adapter
results: []
datasets:
- cnn_dailymail
- EdinburghNLP/xsum
language:
- en
library_name: adapter-transformers
---
# llama2-7bn-xsum-cnn-adapter
This model is a LoRA adapter for [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf), fine-tuned on the XSum and CNN/DailyMail summarization datasets. Implementation details are available in the [GitHub project](https://github.com/ernlavr/llamarizer).
## Weights and Biases Documentation: Training and Eval
See the [Weights and Biases run](https://wandb.ai/ernlavr/adv_nlp2023/runs/t8icitt1) for training and evaluation details.
## Training procedure
- Input source document wrapped in a prompt: `Summarize the following article: <document>; Summary:`
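
The procedure above can be sketched in code. This is a minimal usage sketch, not part of the official repository: the adapter id `ernlavr/llama2-7bn-xsum-cnn-adapter` is assumed from this card's name, the prompt string is reconstructed from the template above, and loading requires gated access to the Llama-2 base weights.

```python
def build_prompt(article: str) -> str:
    # Prompt format reconstructed from the training procedure described above.
    return f"Summarize the following article: {article}; Summary: "

def summarize(article: str,
              adapter_id: str = "ernlavr/llama2-7bn-xsum-cnn-adapter",
              base_id: str = "meta-llama/Llama-2-7b-hf") -> str:
    # Heavy imports kept local so the prompt helper stays dependency-free.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    tokenizer = AutoTokenizer.from_pretrained(base_id)
    base = AutoModelForCausalLM.from_pretrained(base_id)
    # Attach the LoRA adapter weights on top of the frozen base model.
    model = PeftModel.from_pretrained(base, adapter_id)

    inputs = tokenizer(build_prompt(article), return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=128)
    return tokenizer.decode(output[0], skip_special_tokens=True)
```

Call `summarize("...")` with a source document; the model continues the prompt after `Summary:` with its generated summary.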