calpt committed on
Commit
3e9863a
1 Parent(s): fd12c98

Update README.md

Files changed (1):
  1. README.md +1 -1
README.md CHANGED
@@ -95,7 +95,7 @@ model.merge_adapter(adapter_name)
 
 ## Architecture & Training
 
-**Training was run with the code in [this notebook](https://github.com/adapter-hub/adapters/blob/main/notebooks/QLoRA_Llama2_Finetuning.ipynb)**.
+**Training was run with the code in [this notebook](https://github.com/adapter-hub/adapters/blob/main/notebooks/QLoRA_Llama_Finetuning.ipynb)**.
 
 The LoRA architecture closely follows the configuration described in the [QLoRA paper](https://arxiv.org/pdf/2305.14314.pdf):
 - `r=64`, `alpha=16`
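
For context on the `r=64`, `alpha=16` setting referenced in the diff, the sketch below shows how such a LoRA configuration might be declared with the adapter-hub `adapters` library. This is a minimal illustration, not the commit's training code (that lives in the linked notebook): the base checkpoint and the adapter name `"qlora_adapter"` are assumptions.

```python
# Minimal sketch, assuming a Llama-2 base checkpoint and a hypothetical
# adapter name. It only demonstrates declaring the r=64, alpha=16 LoRA
# configuration mentioned in the README; it is not the notebook's recipe.
from transformers import AutoModelForCausalLM

import adapters
from adapters import LoRAConfig

# Base checkpoint is an assumption for illustration.
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
adapters.init(model)  # enable adapter support on the plain Transformers model

# Hyperparameters as stated in the README, following the QLoRA paper.
config = LoRAConfig(r=64, alpha=16)

model.add_adapter("qlora_adapter", config=config)  # adapter name is hypothetical
model.train_adapter("qlora_adapter")  # freeze base weights, train only the LoRA modules
```

After training, calling `model.merge_adapter("qlora_adapter")`, as seen in the diff's hunk context above, folds the LoRA weights back into the base model for inference.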