Mike0307 committed on
Commit 735cb24
1 Parent(s): 4e7a567

Update README.md

Files changed (1): README.md (+40 −5)
README.md CHANGED

@@ -12,7 +12,7 @@ pipeline_tag: text-generation
 ---


-## Download the model
+## Download Model

 The base-model [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) currently relies on
 the latest dev-version transformers and torch.<br>
@@ -43,11 +43,15 @@ model = AutoModelForCausalLM.from_pretrained(
 tokenizer = AutoTokenizer.from_pretrained(model_id)
 ```

-## Example of inference
+## Inference Example

 ```python
 input_text = "<|user|>將這五種動物分成兩組。\n老虎、鯊魚、大象、鯨魚、袋鼠 <|end|>\n<|assistant|>"
-inputs = tokenizer(input_text, return_tensors="pt").to(torch.device("mps")) # FIX mps if not MacOS
+
+inputs = tokenizer(
+    input_text,
+    return_tensors="pt"
+).to(torch.device("mps")) # Change mps if not MacOS

 outputs = model.generate(
     **inputs,
@@ -56,6 +60,37 @@ outputs = model.generate(
     do_sample = False
 )

-generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
+generated_text = tokenizer.decode(
+    outputs[0],
+    skip_special_tokens=True
+)
+print(generated_text)
+```
+
+## Streaming Example
+
+```python
+from transformers import TextStreamer
+streamer = TextStreamer(tokenizer)
+
+input_text = "<|user|>將這五種動物分成兩組。\n老虎、鯊魚、大象、鯨魚、袋鼠 <|end|>\n<|assistant|>"
+
+inputs = tokenizer(
+    input_text,
+    return_tensors="pt"
+).to(torch.device("mps")) # Change mps if not MacOS
+
+outputs = model.generate(
+    **inputs,
+    temperature = 0.0,
+    do_sample = False,
+    streamer=streamer,
+    max_new_tokens=20,
+)
+
+generated_text = tokenizer.decode(
+    outputs[0],
+    skip_special_tokens=True
+)
 print(generated_text)
-```
+```
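The prompt strings in both examples follow Phi-3's chat markup: a user turn wrapped in `<|user|> … <|end|>`, followed by `<|assistant|>` to cue the reply. A minimal sketch of building such single-turn prompts — the helper name `build_phi3_prompt` is illustrative, not part of this repo:

```python
def build_phi3_prompt(user_message: str) -> str:
    """Wrap a single user turn in the Phi-3 chat markup used in the examples above."""
    return f"<|user|>{user_message} <|end|>\n<|assistant|>"

# Reproduces the prompt from the inference example
prompt = build_phi3_prompt("將這五種動物分成兩組。\n老虎、鯊魚、大象、鯨魚、袋鼠")
```

For multi-turn conversations, `tokenizer.apply_chat_template` in transformers can produce the same markup from a list of role/content messages, which avoids hand-assembling special tokens.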
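Both examples hard-code `torch.device("mps")`, which only exists on Apple-silicon macOS — hence the "Change mps if not MacOS" comments. A sketch of a portable fallback, assuming torch is installed; the helper `pick_device` is my own, while `torch.backends.mps.is_available()` and `torch.cuda.is_available()` are standard torch calls:

```python
def pick_device(mps_available: bool, cuda_available: bool) -> str:
    """Prefer Apple-silicon 'mps', then 'cuda', falling back to 'cpu'."""
    if mps_available:
        return "mps"
    if cuda_available:
        return "cuda"
    return "cpu"

# With torch installed, select the device once and reuse it:
# import torch
# device = torch.device(pick_device(torch.backends.mps.is_available(),
#                                   torch.cuda.is_available()))
# inputs = tokenizer(input_text, return_tensors="pt").to(device)
```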