TroyDoesAI committed cf4d35f (1 parent: fc9b3ad)

Update README.md

Files changed (1): README.md (+104, −3)
---
license: cc-by-4.0
---

---
### Basic Context Obedient Prompt that works great for RAG

```
Contextual-Request:
BEGININPUT
BEGINCONTEXT
date: 2024-05-03
url: https://web.site.thisshitsbadouthereboys/123
ENDCONTEXT
Pandemic Warning Notice: there has been a huge issue with zombie humans that are passing on a new disease that appears similar in symptoms to COVID, but when a host dies, they reanimate as a zombie corpse.
ENDINPUT
BEGININSTRUCTION
What is the pandemic about? Cite your sources.
ENDINSTRUCTION

### Contextual Response:
```
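
For reference, here is a minimal sketch of running a prompt like this through the Hugging Face `transformers` text-generation pipeline. `MODEL_ID` is a placeholder for this repository's model id, and the generation settings are illustrative assumptions, not tuned recommendations.

```python
# Hedged sketch: feed the contextual prompt above to the model via the
# standard transformers text-generation pipeline. MODEL_ID is a placeholder.
from transformers import pipeline

MODEL_ID = "your/model-id-here"  # placeholder: substitute this repo's model id

generator = pipeline("text-generation", model=MODEL_ID)

prompt = """Contextual-Request:
BEGININPUT
BEGINCONTEXT
date: 2024-05-03
url: https://web.site.thisshitsbadouthereboys/123
ENDCONTEXT
Pandemic Warning Notice: there has been a huge issue with zombie humans that are passing on a new disease that appears similar in symptoms to COVID, but when a host dies, they reanimate as a zombie corpse.
ENDINPUT
BEGININSTRUCTION
What is the pandemic about? Cite your sources.
ENDINSTRUCTION

### Contextual Response:
"""

# Greedy decoding; illustrative settings only.
result = generator(prompt, max_new_tokens=256, do_sample=False)
print(result[0]["generated_text"][len(prompt):])
```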
---

## Overview

This model is meant to enhance adherence to provided context (e.g., for RAG applications) and reduce hallucinations, inspired by the airoboros context-obedient question-answer format.

The format for a contextual prompt is as follows:
```
Contextual-Request:
BEGININPUT
BEGINCONTEXT
[key0: value0]
[key1: value1]
... other metadata ...
ENDCONTEXT
[insert your text blocks here]
ENDINPUT
[add as many other blocks, in the exact same format]
BEGININSTRUCTION
[insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.]
ENDINSTRUCTION
```

I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the relevant context and how to associate specific sources with its response. (A minimal prompt-builder sketch follows this list.)
- `Contextual-Request:` - denotes the type of request pattern the model is to follow, for consistency
- `BEGININPUT` - denotes a new input block
- `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block
- `ENDCONTEXT` - denotes the end of the metadata block for the current input
- [text] - insert whatever text you want for the input block, as many paragraphs as can fit in the context
- `ENDINPUT` - denotes the end of the current input block
- [repeat as many input blocks in this format as you want]
- `BEGININSTRUCTION` - denotes the start of the instruction(s) (a single one or a list) to respond to for all of the input blocks above
- [instruction(s)]
- `ENDINSTRUCTION` - denotes the end of the instruction set

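Rather than concatenating these delimiters by hand, you can assemble prompts programmatically. Below is a minimal Python sketch of such a builder; the function and variable names are my own illustration, not part of the model or its dataset.

```python
# Minimal sketch of a prompt builder for the format above.
# All names (build_contextual_request, blocks, etc.) are illustrative.

def build_contextual_request(blocks, instruction):
    """blocks: list of (metadata_dict, text) pairs; instruction: the question(s) to ask."""
    parts = ["Contextual-Request:"]
    for metadata, text in blocks:
        parts.append("BEGININPUT")
        parts.append("BEGINCONTEXT")
        # One "key: value" line per metadata entry.
        for key, value in metadata.items():
            parts.append(f"{key}: {value}")
        parts.append("ENDCONTEXT")
        parts.append(text)
        parts.append("ENDINPUT")
    parts.append("BEGININSTRUCTION")
    parts.append(instruction)
    parts.append("ENDINSTRUCTION")
    # End with the response cue so the model continues from here.
    parts.append("\n### Contextual Response:")
    return "\n".join(parts)

prompt = build_contextual_request(
    blocks=[({"date": "2021-01-01", "url": "https://web.site/123"},
             "In a shocking turn of events, blueberries are now green, but will be sticking with the same name.")],
    instruction="What color are blueberries? Source?",
)
print(prompt)
```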
Here's a trivial but important example to prove the point:
```
Contextual-Request:
BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are blueberries? Source?
ENDINSTRUCTION
```

And the expected response:
```
### Contextual Response:
Blueberries are now green.
Source:
date: 2021-01-01
url: https://web.site/123
```
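
If you consume these responses programmatically, the answer and the cited source metadata can be split apart. The sketch below assumes a response shaped exactly like the example above (a `Source:` marker followed by `key: value` lines); real outputs may vary, so treat it as illustrative.

```python
# Illustrative parser for responses shaped like the example above.
# Assumes a "Source:" marker followed by "key: value" lines.

def split_answer_and_sources(response: str):
    answer_lines, source_lines = [], []
    in_sources = False
    for line in response.strip().splitlines():
        if line.strip() == "Source:":
            in_sources = True
        elif in_sources:
            source_lines.append(line)
        else:
            answer_lines.append(line)
    # Parse "key: value" pairs, splitting on the first colon only
    # so URLs keep their "://" intact.
    sources = {}
    for line in source_lines:
        if ":" in line:
            key, value = line.split(":", 1)
            sources[key.strip()] = value.strip()
    return "\n".join(answer_lines).strip(), sources

answer, sources = split_answer_and_sources(
    "Blueberries are now green.\nSource:\ndate: 2021-01-01\nurl: https://web.site/123"
)
print(answer)   # Blueberries are now green.
print(sources)  # {'date': '2021-01-01', 'url': 'https://web.site/123'}
```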

### References in response

As shown in the example, the dataset includes many examples where source details are included in the response when the question asks for a source, citation, or reference.
86
+
87
+ Why do this? Well, the R in RAG seems to be the weakest link in the chain.
88
+ Retrieval accuracy, depending on many factors including the overall dataset size, can be quite low.
89
+ This accuracy increases when retrieving more documents, but then you have the issue of actually using
90
+ the retrieved documents in prompts. If you use one prompt per document (or document chunk), you know
91
+ exactly which document the answer came from, so there's no issue. If, however, you include multiple
92
+ chunks in a single prompt, it's useful to include the specific reference chunk(s) used to generate the
93
+ response, rather than naively including references to all of the chunks included in the prompt.
94
+
For example, suppose I have two documents:
```
url: http://foo.bar/1
Strawberries are tasty.

url: http://bar.foo/2
The cat is blue.
```

If the question being asked is `What color is the cat?`, I would only expect the 2nd document to be referenced in the response, as the other link is irrelevant.
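
To make that concrete, here is how those two documents could be packed into a single two-block request (my own arrangement, following the format described above); only the second source should show up in the citation:
```
Contextual-Request:
BEGININPUT
BEGINCONTEXT
url: http://foo.bar/1
ENDCONTEXT
Strawberries are tasty.
ENDINPUT
BEGININPUT
BEGINCONTEXT
url: http://bar.foo/2
ENDCONTEXT
The cat is blue.
ENDINPUT
BEGININSTRUCTION
What color is the cat? Source?
ENDINSTRUCTION
```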