The dataset was tokenized and fed to the model as a conversation between two speakers:

`script_speaker_name` = `person alpha`

`script_responder_name` = `person beta`

## examples

- the default Inference API examples should work _okay_
- an ideal test is to explicitly add `person beta` to the **end** of the prompt text. This forces the model to respond to the entered chat prompt instead of first extending the prompt and then responding to that extension (which may cut off the response text due to Inference API limits).
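The tip above can be sketched in a few lines. This is a minimal illustration, not the repo's actual preprocessing code: the exact turn-separator format used during training is an assumption, and the `build_prompt` helper is hypothetical; only the two speaker names come from this README.

```python
# Assumed prompt layout: each turn is prefixed with its speaker's name,
# and ending the prompt with the responder's name cues the model to
# generate a reply rather than continue the user's own turn.

SPEAKER = "person alpha"    # script_speaker_name from the README
RESPONDER = "person beta"   # script_responder_name from the README


def build_prompt(user_message: str) -> str:
    """Label the user's turn, then cue the model with the responder's name."""
    return f"{SPEAKER}:\n{user_message}\n{RESPONDER}:\n"


print(build_prompt("hi, how was your day?"))
```

Because the prompt already ends at the start of `person beta`'s turn, the generated text is all response, which helps stay within the Inference API's output limits.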