TheBloke committed on
Commit dd34949
1 Parent(s): 90c3676

Update README.md

Files changed (1)
  1. README.md +30 -0
README.md CHANGED
@@ -1,6 +1,12 @@
 ---
 inference: false
 license: other
+datasets:
+- databricks/databricks-dolly-15k
+- OpenAssistant/oasst1
+- sahil2801/CodeAlpaca-20k
+language:
+- en
 ---
 
 <!-- header start -->
@@ -34,6 +40,30 @@ GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/gger
 * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/tulu-13B-GGML)
 * [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/tulu-13B-fp16)
 
+## Prompt template
+
+According to the original model's README, the following template should be used:
+
+```
+<|user|>
+prompt goes here
+<|assistant|>
+```
+
+However, in my own testing this seems to return no response at all. But I do get good responses using:
+
+```
+### Instruction: prompt goes here
+### Response:
+```
+
+and
+
+```
+USER: prompt goes here
+ASSISTANT:
+```
+
 <!-- compatibility_ggml start -->
 ## Compatibility
 
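
For anyone who wants to try the templates above programmatically, here is a minimal sketch using the llama-cpp-python bindings. It assumes a llama-cpp-python version that still loads GGML files; the model filename, `max_tokens` value, and stop sequence are illustrative assumptions, not part of this commit.

```
# Minimal sketch of testing the prompt formats above with llama-cpp-python.
# Assumptions: a llama-cpp-python build that can load GGML files, and a
# hypothetical local filename for one of the quantised models in this repo.
from llama_cpp import Llama

llm = Llama(model_path="./tulu-13b.ggmlv3.q4_0.bin")  # hypothetical path

def ask(prompt: str) -> str:
    # Uses the "### Instruction: / ### Response:" format reported to work.
    full_prompt = f"### Instruction: {prompt}\n### Response:"
    out = llm(full_prompt, max_tokens=256, stop=["### Instruction:"])
    return out["choices"][0]["text"].strip()

print(ask("Give me three facts about llamas."))
```

The stop sequence just keeps the model from running on into a fabricated follow-up instruction; the `USER:` / `ASSISTANT:` format can be swapped in the same way.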