FrancescoPeriti committed on
Commit 79fe4a6
1 Parent(s): e690c95

Update README.md

Files changed (1):
  1. README.md +20 -0
README.md CHANGED
@@ -19,3 +19,23 @@ The following `bitsandbytes` quantization config was used during training:
 
 
 - PEFT 0.5.0
+
+## Getting started
+```python
+from peft import PeftModel, PeftConfig
+from huggingface_hub import login
+from transformers import AutoModelForCausalLM, AutoTokenizer, AddedToken
+
+login("[YOUR HF TOKEN HERE FOR USING LLAMA]")
+config = PeftConfig.from_pretrained("ChangeIsKey/llama-7b-lexical-substitution")
+base_model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
+
+tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf", use_fast=False, trust_remote_code=True)
+tokenizer.add_special_tokens({"additional_special_tokens": [AddedToken("<|s|>"), AddedToken("<|answer|>"), AddedToken("<|end|>")]})
+if tokenizer.pad_token is None:
+    tokenizer.add_special_tokens({'pad_token': '[PAD]'})
+tokenizer.padding_side = 'left'
+base_model.resize_token_embeddings(len(tokenizer))
+
+model = PeftModel.from_pretrained(base_model, "ChangeIsKey/llama-7b-lexical-substitution")
+```
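The snippet above adds three special tokens (`<|s|>`, `<|answer|>`, `<|end|>`) but does not document the prompt template used during training. As a purely hypothetical sketch (the `build_prompt` helper and its template are assumptions, not the model's documented format), prompt assembly for a lexical-substitution query might look like this, with generation following the standard `transformers` API:

```python
# Hypothetical prompt assembly for lexical substitution; the exact template
# used to train this adapter is not documented in the README.
def build_prompt(sentence: str, target: str) -> str:
    # Assumption: the target word is wrapped in <|s|> markers and
    # substitutes are requested after <|answer|>.
    marked = sentence.replace(target, f"<|s|>{target}<|s|>", 1)
    return f"{marked} <|answer|>"

prompt = build_prompt("The bright student solved the problem.", "bright")
print(prompt)

# Generation would then use the model and tokenizer loaded above, e.g.:
# inputs = tokenizer(prompt, return_tensors="pt")
# output_ids = model.generate(**inputs, max_new_tokens=20,
#                             eos_token_id=tokenizer.convert_tokens_to_ids("<|end|>"))
# print(tokenizer.decode(output_ids[0], skip_special_tokens=False))
```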