Text Generation · Transformers · Safetensors · English · llama · Inference Endpoints · text-generation-inference
Matt committed
Commit fa2ab90 · 1 parent: 5cc044e

Add chat template

Files changed (2):
  1. README.md (+25, -1)
  2. tokenizer_config.json (+1, -0)
README.md CHANGED
@@ -48,6 +48,30 @@ Your prompt here
 The output of Stable Beluga 7B
 ```
 
+This formatting is also available via a pre-defined Transformers chat template, which means that lists of messages can be formatted for you with the `apply_chat_template()` method:
+
+```python
+chat = [
+  {"role": "system", "content": "This is a system prompt, please behave and help the user."},
+  {"role": "user", "content": "Your prompt here"},
+]
+tokenizer.apply_chat_template(chat, tokenize=False)
+```
+
+which will yield:
+
+```
+### System:
+This is a system prompt, please behave and help the user.
+
+### User:
+Your prompt here
+
+
+```
+
+If you use `tokenize=True` and `return_tensors="pt"` instead, then you will get a tokenized and formatted conversation ready to pass to `model.generate()`.
+
 ## Model Details
 
 * **Developed by**: [Stability AI](https://stability.ai/)
@@ -96,4 +120,4 @@ Beluga is a new technology that carries risks with use. Testing conducted to dat
   archivePrefix={arXiv},
   primaryClass={cs.CL}
 }
-```
+```
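The `### Role:` layout that the README addition above documents can also be written out in plain Python. The sketch below is an illustration only, not code from this commit, and `format_beluga_prompt` is a hypothetical helper name:

```python
# Hypothetical helper (illustration, not part of this commit): reproduces the
# "### Role:" prompt layout that the README's chat-template example yields.
def format_beluga_prompt(messages, add_generation_prompt=False):
    prompt = ""
    for message in messages:
        # Each turn becomes "### System:\n...", "### User:\n...", and so on.
        prompt += f"### {message['role'].title()}:\n{message['content']}\n\n"
    if add_generation_prompt:
        # The template in this commit appends "###Assistant:" (no space) verbatim.
        prompt += "###Assistant:\n"
    return prompt

chat = [
    {"role": "system", "content": "This is a system prompt, please behave and help the user."},
    {"role": "user", "content": "Your prompt here"},
]
print(format_beluga_prompt(chat))
```

Passing `add_generation_prompt=True` appends the assistant header, cueing the model to begin its reply.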
tokenizer_config.json CHANGED
@@ -7,6 +7,7 @@
     "rstrip": false,
     "single_word": false
   },
+  "chat_template": "{% if not add_generation_prompt is defined %}{% set add_generation_prompt = false %}{% endif %}{% for message in messages %}{{ '### ' + message['role'].title() + ':\n' + message['content'] + '\n\n' }}{% endfor %}{% if add_generation_prompt %}{{ '###Assistant:\n' }}{% endif %}",
   "clean_up_tokenization_spaces": false,
   "eos_token": {
     "__type": "AddedToken",
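The `chat_template` value added above is a Jinja template, so it can be sanity-checked outside of `transformers`. A minimal sketch that renders the same template string with the `jinja2` package directly, assuming `jinja2` is installed:

```python
# Sketch (not repo code): render the chat_template string from this commit
# with plain Jinja2 to preview the prompt format without loading the tokenizer.
from jinja2 import Template

chat_template = (
    "{% if not add_generation_prompt is defined %}"
    "{% set add_generation_prompt = false %}{% endif %}"
    "{% for message in messages %}"
    "{{ '### ' + message['role'].title() + ':\n' + message['content'] + '\n\n' }}"
    "{% endfor %}"
    "{% if add_generation_prompt %}{{ '###Assistant:\n' }}{% endif %}"
)

chat = [
    {"role": "system", "content": "This is a system prompt, please behave and help the user."},
    {"role": "user", "content": "Your prompt here"},
]

# With add_generation_prompt=True the template appends the assistant header.
prompt = Template(chat_template).render(messages=chat, add_generation_prompt=True)
print(prompt)
```

This mirrors what `tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)` produces once the template is loaded from tokenizer_config.json.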