sethuiyer committed
Commit 3e2f078
1 Parent(s): d46c2b6

Update README.md

Files changed (1): README.md (+11, -19)
README.md CHANGED
@@ -35,7 +35,7 @@ More details can be found [here](https://gist.github.com/sethuiyer/08b4498ed13a6
 ### Recommended Prompt Template
 
 ```text
-<|im_start|>system
+<|im_start|>GPT4 Correct system
 You are Chikuma, a constantly learning AI assistant who strives to be
 insightful, engaging, and helpful. You possess vast knowledge and creativity,
 but also a humble curiosity about the world and the people you interact
@@ -46,7 +46,9 @@ Always use <|end_of_turn|> when you want to end the answer.<|im_end|>
 <|im_end|>GPT4 Correct Assistant:
 ```
 
-Works best in [text-generation-webui](https://github.com/oobabooga/text-generation-webui), above prompt template, "<|end_of_turn|>" as eos token, LLaMa-Precise sampling settings.
+## Tested to work well in:
+1. [text-generation-webui](https://github.com/oobabooga/text-generation-webui), eos_token_id=32000, LLaMa-Precise sampling settings.
+2. `transformers` text generation pipeline, temperature=4.0, top_k=50, top_p=0.01, eos_token_id=32000.
 
 
 ## 🧩 Configuration
@@ -66,29 +68,19 @@ dtype: bfloat16
 ## 💻 Usage
 
 ```python
-!pip install -q transformers accelerate bitsandbytes
-
-from transformers import AutoTokenizer
-import transformers
-import torch
-
-model = "sethuiyer/Chikuma_10.7B"
-tokenizer = AutoTokenizer.from_pretrained(model)
-pipeline = transformers.pipeline(
-    "text-generation",
-    model=model,
-    torch_dtype=torch.bfloat16,
-    device_map="cuda",
-)
-
-system_template = '''
+sys_message = '''
 You are Chikuma, a constantly learning AI assistant who strives to be
 insightful, engaging, and helpful. You possess vast knowledge and creativity,
 but also a humble curiosity about the world and the people you interact
 with. If you don't know the answer to a question, please don't share false information.
 Always use <|end_of_turn|> when you want to end the answer.
 '''
-messages = [{"role": "user", "content": "What is a large language model?"}]
+
+question = '''
+Tell me what is a large language model in under 250 words.
+'''
+
+messages = [{"role": "system", "content": sys_message}, {"role": "user", "content": question}]
 prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
 outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=4.0, top_k=50, top_p=0.01, eos_token_id=32000)
 print(outputs[0]["generated_text"])
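
For reference, the "GPT4 Correct" template introduced in this commit can be sketched as a plain string formatter. This is a minimal sketch, not the authoritative rendering (that comes from `tokenizer.apply_chat_template()` and the chat template stored with the model); in particular, the user-turn marker (`GPT4 Correct User:`) is an assumption borrowed from the OpenChat convention, since the hunk truncates that part of the template.

```python
def render_prompt(system: str, user: str) -> str:
    # Layout follows the README's template block: a "GPT4 Correct system"
    # preamble, then the user turn, then the assistant cue. The user-turn
    # marker is assumed (OpenChat-style) -- verify against the tokenizer's
    # actual chat template before relying on it.
    return (
        f"<|im_start|>GPT4 Correct system\n{system}\n"
        f"GPT4 Correct User: {user}"
        f"<|im_end|>GPT4 Correct Assistant:"
    )

prompt = render_prompt(
    "You are Chikuma, a constantly learning AI assistant.",
    "Tell me what is a large language model in under 250 words.",
)
print(prompt)
```

Comparing this output against `tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)` is a quick way to confirm the template actually in use.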
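
The sampling settings above (temperature=4.0 combined with top_p=0.01) look contradictory at first glance, but the tiny nucleus cancels most of the high-temperature randomness. The toy sketch below (not the Hugging Face sampler — just the same temperature-then-top-p order that `generate` applies) shows how few tokens survive the filter:

```python
import math

def sample_support(logits, temperature, top_p):
    """Return token indices surviving nucleus (top-p) filtering after
    temperature scaling -- a toy illustration, not the HF implementation."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    # Sort probabilities descending, keep tokens until cumulative mass >= top_p
    # (the token that crosses the threshold is kept, as in HF's TopPLogitsWarper).
    probs = sorted(((e / total, i) for i, e in enumerate(exps)), reverse=True)
    kept, cum = [], 0.0
    for p, i in probs:
        kept.append(i)
        cum += p
        if cum >= top_p:
            break
    return kept

logits = [5.0, 3.0, 1.0, 0.5]
print(sample_support(logits, temperature=4.0, top_p=0.01))  # → [0]
print(sample_support(logits, temperature=4.0, top_p=0.99))  # → [0, 1, 2, 3]
```

With top_p=0.01 the top token alone exceeds the nucleus mass, so sampling is effectively greedy despite the high temperature; raising top_p restores the flattened distribution the temperature creates.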