LoftQ committed on
Commit
58e1135
1 Parent(s): 510185b

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -15,7 +15,7 @@ tags:
 
 LoftQ (LoRA-fine-tuning-aware Quantization) provides a quantized backbone Q and LoRA adapters A and B, given a full-precision pre-trained weight W.
 
-This model, `Meta-Llama-3-8B-4bit-64rank`, is obtained from [LLAMA-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B).
+This model, `Meta-Llama-3-70B-4bit-64rank-1iter`, is obtained from [LLAMA-3-70B](https://huggingface.co/meta-llama/Meta-Llama-3-70B).
 The backbone is under `LoftQ/Meta-Llama-3-70B-4bit-64rank-1iter` and LoRA adapters are under the `subfolder='loftq_init'`.
 
 ## Model Info
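
Given the layout the README describes (quantized backbone at the repo root, LoRA adapters in the `loftq_init` subfolder), loading might look like the sketch below. This is an illustrative assumption built on the standard `transformers`/`peft` loading pattern; the function name `load_loftq` and the `device_map` choice are ours, not from this commit.

```python
MODEL_ID = "LoftQ/Meta-Llama-3-70B-4bit-64rank-1iter"
ADAPTER_SUBFOLDER = "loftq_init"  # adapters live here per the README

def load_loftq(model_id: str = MODEL_ID):
    """Sketch: load the quantized backbone, then attach the LoftQ-initialized LoRA adapters."""
    # Imports are local so the sketch can be read without the heavy dependencies installed.
    from transformers import AutoModelForCausalLM
    from peft import PeftModel

    # Backbone weights sit at the repo root of the model card.
    base = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
    # LoRA adapters A and B are fetched from the `loftq_init` subfolder;
    # is_trainable=True keeps them ready for LoRA fine-tuning.
    return PeftModel.from_pretrained(
        base, model_id, subfolder=ADAPTER_SUBFOLDER, is_trainable=True
    )
```

Note that materializing a 70B backbone requires substantial GPU memory; the sketch only shows where each piece of the repository is expected to come from.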