Text Generation · Transformers · PyTorch · English · llama · text-generation-inference · Inference Endpoints
TheBloke committed
Commit 955e990
1 Parent(s): 21339df

Update README.md

Files changed (1): README.md (+30, -0)
README.md CHANGED
@@ -1,6 +1,12 @@
  ---
  inference: false
  license: other
+ datasets:
+ - databricks/databricks-dolly-15k
+ - OpenAssistant/oasst1
+ - sahil2801/CodeAlpaca-20k
+ language:
+ - en
  ---

  <!-- header start -->
@@ -29,6 +35,30 @@ It is the result of merging and/or converting the source repository to float16.
  * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/tulu-7B-GGML)
  * [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/tulu-7B-fp16)

+ ## Prompt template
+
+ According to the original model's README, the following template should be used:
+
+ ```
+ <|user|>
+ prompt goes here
+ <|assistant|>
+ ```
+
+ However, in my own testing this seems to return no response at all. But I do get good responses using:
+
+ ```
+ ### Instruction: prompt goes here
+ ### Response:
+ ```
+
+ and
+
+ ```
+ USER: prompt goes here
+ ASSISTANT:
+ ```
+
  <!-- footer start -->
  ## Discord
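As a quick way to try the USER/ASSISTANT template that the commit says worked in testing, here is a minimal sketch using the transformers library. The model ID points at the fp16 repo linked in the README; the fp16/GPU loading and the generation settings are illustrative assumptions, not part of this commit.

```python
# Minimal sketch of the USER/ASSISTANT prompt template described above.
# Assumptions: the fp16 repo linked in the README, fp16 weights on GPU,
# and illustrative generation settings.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/tulu-7B-fp16"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Wrap the user prompt in the template that returned good responses in testing.
prompt = "USER: prompt goes here\nASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)

# Print only the newly generated tokens, not the echoed prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```

The same sketch works for the `### Instruction:` / `### Response:` template by swapping the `prompt` string; only the surrounding template text changes.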