Kquant03 committed on
Commit ca395a6
1 Parent(s): 05a0d09

Update README.md

Files changed (1): README.md (+65 -1)
README.md CHANGED
@@ -13,4 +13,68 @@ thumbnail: "https://cdn-uploads.huggingface.co/production/uploads/6589d7e6586088

The passthrough method differs significantly from the previous ones. By concatenating layers from different LLMs, it can produce models with an exotic number of parameters (e.g., a 9B model from two 7B models). These models are often referred to as "frankenmerges" or "Frankenstein models" by the community.

Many thanks to [Abacaj](https://huggingface.co/abacaj) for providing the [fine-tuned weights](https://huggingface.co/abacaj/phi-2-super) that were used in the creation of this base model. You can find the full script for how the model was merged [here]... Thanks also to [KatyTheCutie](https://huggingface.co/KatyTheCutie) for helping me figure out how to make the model as big as I possibly could.
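
For context, passthrough merges like this one are commonly written as a mergekit-style YAML config that stacks (possibly overlapping) layer ranges from the source model(s). The sketch below is a hypothetical illustration, assuming mergekit's `passthrough` merge method; the layer ranges are invented, not taken from the actual merge script linked above:

```yaml
# Hypothetical passthrough config (illustration only; these layer
# ranges are NOT the ones used to build this model).
slices:
  - sources:
      - model: abacaj/phi-2-super
        layer_range: [0, 24]   # first slice of the donor model
  - sources:
      - model: abacaj/phi-2-super
        layer_range: [8, 32]   # overlapping slice; layers 8-23 appear twice
merge_method: passthrough
dtype: float16
```

Because the slices overlap, the merged model ends up with more layers (and therefore more parameters) than the donor, which is how the "exotic" parameter counts mentioned above arise.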

# How to run inference:

```python
import transformers
import torch

if __name__ == "__main__":
    model_name = "abacaj/phi-2-super"
    tokenizer = transformers.AutoTokenizer.from_pretrained(model_name)

    # Load the model onto a single GPU and switch to eval mode
    model = (
        transformers.AutoModelForCausalLM.from_pretrained(
            model_name,
        )
        .to("cuda:0")
        .eval()
    )

    messages = [
        {"role": "user", "content": "Hello, who are you?"}
    ]
    inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)
    # Remember where the prompt ends so only the completion is decoded later
    input_ids_cutoff = inputs.size(dim=1)

    with torch.no_grad():
        generated_ids = model.generate(
            input_ids=inputs,
            use_cache=True,
            max_new_tokens=512,
            temperature=0.2,
            top_p=0.95,
            do_sample=True,
            eos_token_id=tokenizer.eos_token_id,
            pad_token_id=tokenizer.pad_token_id,
        )

    # Decode only the newly generated tokens, skipping the prompt
    completion = tokenizer.decode(
        generated_ids[0][input_ids_cutoff:],
        skip_special_tokens=True,
    )

    print(completion)
```

# Chat template

The model uses the same chat template as found in Mistral instruct models:

```python
text = (
    "<|endoftext|>[INST] What is your favourite condiment? [/INST]"
    "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!<|endoftext|> "
    "[INST] Do you have mayonnaise recipes? [/INST]"
)
```
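
To make that format concrete, here is a minimal sketch of assembling the prompt string by hand; `build_prompt` is a hypothetical helper, and the token placement simply mirrors the example above:

```python
def build_prompt(messages):
    # Hypothetical helper mirroring the template above: a single leading
    # <|endoftext|>, user turns wrapped in [INST] ... [/INST], and
    # assistant turns terminated with <|endoftext|>.
    prompt = "<|endoftext|>"
    for message in messages:
        if message["role"] == "user":
            prompt += f"[INST] {message['content']} [/INST]"
        else:
            prompt += f"{message['content']}<|endoftext|> "
    return prompt
```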

You don't need to build the prompt manually if you use the HF transformers tokenizer:

```python
messages = [
    {"role": "user", "content": "Hello, who are you?"},
    {"role": "assistant", "content": "I am ..."},
]
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)
```
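
The resulting `inputs` can then be passed to `model.generate` exactly as in the inference example above.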