Text Generation · Transformers · PyTorch · mpt · OpenAccess AI Collective · MPT · axolotl · custom_code · text-generation-inference · 6 papers
winglian committed
Commit 7b8728f
1 Parent(s): ba36bcb

pipeline tools for testing
examples/pipeline.py ADDED
@@ -0,0 +1,32 @@
+ import sys
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+ import transformers
+ import torch
+
+ model = "openaccess-ai-collective/minotaur-mpt-7b"
+
+ tokenizer = AutoTokenizer.from_pretrained(model)
+ pipeline = transformers.pipeline(
+     "text-generation",
+     model=model,
+     tokenizer=tokenizer,
+     torch_dtype=torch.bfloat16,
+     trust_remote_code=True,
+     device_map="auto",
+ )
+
+ prompt = "".join([l for l in sys.stdin]).strip()
+
+ sequences = pipeline(
+     prompt,
+     max_length=2048,
+     do_sample=True,
+     top_k=40,
+     top_p=0.95,
+     temperature=1.0,
+     num_beams=10,
+     num_return_sequences=1,
+     eos_token_id=tokenizer.eos_token_id,
+ )
+ for seq in sequences:
+     print(f"Result: {seq['generated_text']}")
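The script takes its prompt from stdin, so it can be driven with the prompt files added in this commit, e.g. `cat examples/prompt.txt | python examples/pipeline.py`. A minimal sketch of the same stdin-reading pattern, using `io.StringIO` as a stand-in for `sys.stdin` so it runs without downloading the model:

```python
import io

# stand-in for sys.stdin; pipeline.py uses the same join-and-strip pattern
fake_stdin = io.StringIO("USER: hello\n\nASSISTANT:\n")
prompt = "".join([l for l in fake_stdin]).strip()
print(prompt)
```

Note that `.strip()` removes only leading and trailing whitespace; the blank line between the `USER:` and `ASSISTANT:` turns in the prompt files is preserved.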
examples/prompt.txt ADDED
@@ -0,0 +1,3 @@
+ USER: your have 3 apples. you eat 2 pears. how many apples do you have left?
+
+ ASSISTANT:
examples/prompt2.txt ADDED
@@ -0,0 +1,3 @@
+ USER: What are 3 words that start with "ex" and end in "g"? What's the sum of 33 and 77?
+
+ ASSISTANT: