SwastikM committed on
Commit 0122800
1 Parent(s): 26e55fb

Update README.md

Files changed (1)
  1. README.md +3 -2
README.md CHANGED
@@ -43,8 +43,9 @@ Addressing the efficacy of Quantization and PEFT. Implemented as a personal Proje
 
 ```
 The quantized model is finetuned as PEFT. We have the trained Adapter.
-Merging LoRA adapated with GPTQ quantized model is not yet supported.
-So instead of loading a single finetuned model, we need to load the mase model and merge the finetuned adapter on top.
+Merging LoRA adapter with GPTQ quantized model is not yet supported.
+So instead of loading a single finetuned model, we need to load the base
+model and merge the finetuned adapter on top.
 ```
 
 ```python