lodrick-the-lafted committed 1a5e764 (1 parent: c1a9a15)

Create README.md
---
license: apache-2.0
datasets:
- lodrick-the-lafted/Hermes-100K
---

<img src="https://huggingface.co/lodrick-the-lafted/Hermes-Instruct-7B-100K/resolve/main/hermes-instruct.png">

# Hermes-Instruct-7B-100K

[Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) fine-tuned on a subset of [teknium/openhermes](https://huggingface.co/datasets/teknium/openhermes), in Alpaca format.

<br />
<br />

# Prompt Format

Both the default Mistral-Instruct tags and the Alpaca format work, so either:
```
<s>[INST] {sys_prompt} {instruction} [/INST]
```

```
{sys_prompt}

### Instruction:
{instruction}

### Response:

```
The tokenizer defaults to the Mistral-style template.
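As a minimal sketch, the Alpaca-style prompt above can be assembled with plain string formatting; `build_alpaca_prompt` below is an illustrative helper, not part of this repo:

```python
# Illustrative helper: fill the Alpaca template shown above.
ALPACA_TEMPLATE = """{sys_prompt}

### Instruction:
{instruction}

### Response:
"""

def build_alpaca_prompt(sys_prompt: str, instruction: str) -> str:
    # The model is expected to continue generating after "### Response:".
    return ALPACA_TEMPLATE.format(sys_prompt=sys_prompt, instruction=instruction)

prompt = build_alpaca_prompt("You are a helpful assistant.", "Name a fruit.")
```

Pass a string built this way directly to the pipeline instead of `apply_chat_template` when you prefer the Alpaca format.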

<br />
<br />

# Usage

```python
from transformers import AutoTokenizer
import transformers
import torch

model = "lodrick-the-lafted/Hermes-Instruct-7B-100K"

tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    model_kwargs={"torch_dtype": torch.bfloat16},
)

messages = [{"role": "user", "content": "Give me a cooking recipe for an apple pie."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.95)
print(outputs[0]["generated_text"])
```