Text Generation
Transformers
Safetensors
mistral
conversational
Inference Endpoints
text-generation-inference
jondurbin committed
Commit a674dcd
Parent(s): ef49a63

Update README.md

Files changed (1): README.md (+17 −1)
README.md CHANGED
@@ -229,7 +229,7 @@ print(tokenizer.apply_chat_template(chat, tokenize=False))
 ```
 </details>
 
-## Helpful usage tips
+## Prompting strategies
 
 <details>
 <summary>
@@ -624,4 +624,20 @@ print(tokenizer.apply_chat_template(chat, tokenize=False))
 ```
 
 In other words, write the first chapter, then use a summarization prompt for it, then include the summary in the next chapter's prompt.
 </details>
+
+<details>
+<summary>
+<b>Boolean questions</b>
+<br>
+For content filtering and other use-cases which only require a true/false response.
+</summary>
+
+The prompts in the fine-tuning dataset are formatted as follows:
+
+```text
+True or false - {statement}
+```
+
+The model will then, theoretically, respond with only a single word.
+</details>
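The chapter-by-chapter flow described in the diff (write a chapter, summarize it, feed the summary into the next chapter's prompt) can be sketched as a loop. Here `generate` is a hypothetical stand-in for an actual model call (e.g. via `transformers`), not an API from this repository:

```python
# Sketch of the summarize-then-continue loop described in the README.
# `generate` is a placeholder; in practice it would call the model.
def generate(prompt: str) -> str:
    return f"[model output for prompt starting: {prompt[:40]!r}]"

summary = ""
chapters = []
for i in range(1, 4):
    prompt = f"Write chapter {i} of the story."
    if summary:
        # Include the running summary so the model keeps continuity
        # without needing the full text of earlier chapters.
        prompt += f"\n\nSummary of the story so far:\n{summary}"
    chapter = generate(prompt)
    chapters.append(chapter)
    # Summarize the new chapter for use in the next iteration's prompt.
    summary = generate(f"Summarize the following chapter:\n{chapter}")
```

Only the summaries accumulate in the prompt, which keeps each request well inside the context window regardless of how many chapters have been written.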
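The boolean-question format added in this commit can be built with a one-line helper; `format_boolean_prompt` and the example statement are illustrative, not part of the model card:

```python
# Illustrative helper (not from the model card) wrapping a statement
# in the true/false format used by the fine-tuning dataset.
def format_boolean_prompt(statement: str) -> str:
    return f"True or false - {statement}"

prompt = format_boolean_prompt("The Earth orbits the Sun.")
print(prompt)  # True or false - The Earth orbits the Sun.
```

The resulting string would then be sent through the usual chat template shown earlier in the README.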