Text Generation
Transformers
PyTorch
English
llama
Inference Endpoints
text-generation-inference
TheBloke committed
Commit 8a02668
1 Parent(s): e722678

Update README.md

Files changed (1)
  1. README.md +5 -10
README.md CHANGED
@@ -37,26 +37,21 @@ It is the result of merging and/or converting the source repository to float16.
 
  ## Prompt template
 
- According to the original model's README, the following template should be used:
+ The following template should be used:
 
  ```
  <|user|>
  prompt goes here
  <|assistant|>
- ```
-
- However in my own testing, this seems to return no response at all. But I do get good responses using:
 
  ```
- ### Instruction: prompt goes here
- ### Response:
- ```
 
- and
+ **Note**: There should be a newline after `<|assistant|>`. This appears to be very important for getting this model to respond correctly.
+
+ In other words, the prompt is:
 
  ```
- USER: prompt goes here
- ASSISTANT:
+ <|user|>\nprompt goes here\n<|assistant|>\n
  ```
 
  <!-- footer start -->
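
As a rough illustration of the updated template, here is a minimal Python sketch that builds the prompt with the trailing newline after `<|assistant|>` and generates with Transformers. The model ID, example question, and generation settings are placeholders for illustration, not part of this commit:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repository name; substitute the actual float16 model repo.
model_id = "TheBloke/model-fp16"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Build the prompt exactly as the updated README describes:
# <|user|>, a newline, the prompt, a newline, then <|assistant|> followed by a newline.
prompt = "<|user|>\n" + "What is the capital of France?" + "\n<|assistant|>\n"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```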