Tags: Text Generation · Transformers · Safetensors · mistral · conversational · Inference Endpoints · text-generation-inference
jondurbin committed
Commit 3f8d96a
1 Parent(s): 3f8462e

Update README.md

Files changed (1)
  1. README.md +1 -0
README.md CHANGED
@@ -207,6 +207,7 @@ print(tokenizer.apply_chat_template(chat, tokenize=False))
 ```
 
 The main difference here is that because of the dataset formatting and variety of data sources, it would have been much too tedious to add an `### Input:` block, so the inputs are just in the instruction section.
+</details>
 
 <details>
 <summary>Vicuna</summary>
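
For context on the changed passage: the README text above says that, because of the dataset formatting and the variety of data sources, there is no separate `### Input:` block and any supporting input is folded directly into the instruction. Below is a minimal sketch of what that looks like when rendering a prompt with `tokenizer.apply_chat_template`, as in the README snippet referenced by the hunk header. The model id, instruction, and context strings are placeholders, not taken from the commit.

```python
# Minimal sketch (placeholders, not from the commit): the "input" context is folded
# into the instruction itself rather than placed in a separate "### Input:" block.
from transformers import AutoTokenizer

# Placeholder model id; substitute the actual model repository.
tokenizer = AutoTokenizer.from_pretrained("jondurbin/your-model")

instruction = "Summarize the following passage."
context = "The quick brown fox jumps over the lazy dog."

# Single user turn containing both the instruction and its supporting input.
chat = [{"role": "user", "content": f"{instruction}\n\n{context}"}]

# Render the prompt text with the tokenizer's chat template, as in the README example.
print(tokenizer.apply_chat_template(chat, tokenize=False))
```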