versae committed
Commit 52b779d
Parent: 32ea88b

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -28,7 +28,7 @@ model = LLaMAForCausalLM.from_pretrained(
  model = PeftModel.from_pretrained(model, "bertin-project/bertin-alpaca-lora-7b")
  ```
 
- Until `PEFT` is fully supported in Hugginface's pipelines, for generation we can either consolidate the LoRA weights into the LLaMA model weights, or use the adapter's `generate()` method. Remember that the promtp still needs the English template:
+ Until `PEFT` is fully supported in Hugginface's pipelines, for generation we can either consolidate the LoRA weights into the LLaMA model weights, or use the adapter's `generate()` method. Remember that the prompt still needs the English template:
 
  ```python
  # Generate responses
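
The `# Generate responses` block is truncated in this diff, so below is a minimal sketch of what using the adapter's `generate()` method with the English Alpaca template can look like. The base checkpoint id, the example instruction, and the generation settings are illustrative assumptions, not the README's exact code:

```python
# Minimal sketch of generating with the LoRA adapter via `generate()`.
# Assumptions: `decapoda-research/llama-7b-hf` as the base checkpoint and the
# standard Alpaca (no-input) prompt template; adjust both to match the README.
import torch
from transformers import LlamaTokenizer, LlamaForCausalLM
from peft import PeftModel

base_model_id = "decapoda-research/llama-7b-hf"  # assumed base checkpoint
tokenizer = LlamaTokenizer.from_pretrained(base_model_id)
model = LlamaForCausalLM.from_pretrained(
    base_model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)
model = PeftModel.from_pretrained(model, "bertin-project/bertin-alpaca-lora-7b")
model.eval()

# The prompt keeps the English Alpaca template even for Spanish instructions.
instruction = "Escribe un correo electrónico dando la bienvenida a un nuevo empleado."
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    f"### Instruction:\n{instruction}\n\n### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```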