Weyaxi committed on
Commit
7dca5a4
1 Parent(s): 8c3bc5c

Update README.md

Files changed (1)
  1. README.md +4 -3
README.md CHANGED
@@ -9,11 +9,13 @@ pipeline_tag: text-generation
 tags:
 - llama-2
 - llama
+- instruct
+- instruction
 ---
 
 # Info
 
-Adapter model trained with the **QloRA** technique
+Adapter model trained with the [**QloRA**](https://arxiv.org/abs/2305.14314) technique
 
 * 📜 Model license: [Llama 2 Community License Agreement](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)
 * 🏛️ Base Model: [Llama-2-70b-hf](https://huggingface.co/meta-llama/Llama-2-70b-hf)
@@ -47,5 +49,4 @@ The following `bitsandbytes` quantization config was used during training:
 - llm_int8_has_fp16_weight: False
 - bnb_4bit_quant_type: nf4
 - bnb_4bit_use_double_quant: True
-- bnb_4bit_compute_dtype: bfloat16
-
+- bnb_4bit_compute_dtype: bfloat16
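
For reference, the quantization values listed in the diff roughly correspond to a `BitsAndBytesConfig` from the `transformers` library, with the adapter attached via `peft`. The sketch below is not part of the commit: the adapter repository id is a placeholder, and `load_in_4bit=True` is an assumption implied by the nf4 settings rather than stated in the diff.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# Quantization settings mirroring the README's bitsandbytes section.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # assumption, implied by the bnb_4bit_* keys
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    llm_int8_has_fp16_weight=False,
)

# Load the quantized base model named in the README.
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-70b-hf",
    quantization_config=bnb_config,
    device_map="auto",
)

# Attach the QLoRA adapter on top of the quantized base.
# "Weyaxi/llama-2-70b-adapter" is a placeholder id, not taken from the commit.
model = PeftModel.from_pretrained(base, "Weyaxi/llama-2-70b-adapter")
```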