dataset: cosmos_qa
templates:
  015f333d-2a15-4552-9fe3-a20bd781001e: !Template
    answer_choices: null
    id: 015f333d-2a15-4552-9fe3-a20bd781001e
    jinja: "Based on the context and the answer, generate a question. \n\nContext:\
      \ {{context}}\n\nAnswer:\n{% if label == 0 %}\n{{answer0}}\n{% elif label ==\
      \ 1 %}\n{{answer1}}\n{% elif label == 2 %}\n{{answer2}}\n{% elif label == 3\
      \ %}\n{{answer3}}\n{% endif %}\n|||\n{{question}}"
    metadata: !TemplateMetadata
      choices_in_prompt: false
      metrics:
      - BLEU
      - ROUGE
      original_task: false
    name: context_answer_to_question
    reference: 'Template asks the model to generate questions '
  08e20b79-d1c0-4717-b538-f1a313c2b7d2: !Template
    answer_choices: '{{answer0}} ||| {{answer1}} ||| {{answer2}} ||| {{answer3}}'
    id: 08e20b79-d1c0-4717-b538-f1a313c2b7d2
    jinja: "Read the following context and choose the best option to answer the question.\n\
      Context: {{ context }}\nQuestion: {{ question }}\nOptions: \n- {{ answer_choices\
      \ | join(\"\\n - \") }}\n|||\n{{ answer_choices[label] }}"
    metadata: !TemplateMetadata
      choices_in_prompt: true
      metrics:
      - Accuracy
      original_task: true
    name: description_context_question_answer_text
    reference: 'Template generates the answer. Answer cues are included. '
  67d6ba13-4958-4e5e-842c-ada92aead6cc: !Template
    answer_choices: '{{answer0}} ||| {{answer1}} ||| {{answer2}} ||| {{answer3}}'
    id: 67d6ba13-4958-4e5e-842c-ada92aead6cc
    jinja: 'Read the following context and answer the question.

      Context: {{ context }}

      Question: {{ question }}

      Answer:

      |||

      {{ answer_choices[label] }}'
    metadata: !TemplateMetadata
      choices_in_prompt: false
      metrics:
      - Accuracy
      original_task: true
    name: description_context_question_text
    reference: Template generates the answer
  693c47c6-f17c-417a-af70-bc20e71b4ed4: !Template
    answer_choices: A ||| B ||| C ||| D
    id: 693c47c6-f17c-417a-af70-bc20e71b4ed4
    jinja: "Read the following context and choose the best option to answer the question.\n\
      Context: {{ context }}\nQuestion: {{ question }}\nOptions: \nA. {{ answer0 }}\n\
      B. {{ answer1 }}\nC. {{ answer2 }}\nD. {{ answer3 }}\n|||\n{{ answer_choices[label]\
      \ }}"
    metadata: !TemplateMetadata
      choices_in_prompt: true
      metrics:
      - Accuracy
      original_task: true
    name: description_context_question_answer_id
    reference: Template asks the model to pick the correct answer
  6b9a24cc-054e-40d6-8abf-261443122f3a: !Template
    answer_choices: '{{answer0}} ||| {{answer1}} ||| {{answer2}} ||| {{answer3}}'
    id: 6b9a24cc-054e-40d6-8abf-261443122f3a
    jinja: '{{ context }}

      According to the above context, choose the best option to answer the following
      question.

      Question: {{ question }}

      Options:

      - {{answer_choices | join("\n - ")}}

      |||

      {{answer_choices[label]}}'
    metadata: !TemplateMetadata
      choices_in_prompt: true
      metrics:
      - Accuracy
      original_task: true
    name: context_description_question_answer_text
    reference: The template asks the model to generate the answer
  71325300-1f16-4a68-97c7-a03457f00cc7: !Template
    answer_choices: A ||| B ||| C ||| D
    id: 71325300-1f16-4a68-97c7-a03457f00cc7
    jinja: '{{ context }}

      {{ question }}

      A. {{ answer0 }}

      B. {{ answer1 }}

      C. {{ answer2 }}

      D. {{ answer3 }}

      |||

      {{ answer_choices[label] }}'
    metadata: !TemplateMetadata
      choices_in_prompt: true
      metrics:
      - Accuracy
      original_task: true
    name: no_prompt_id
    reference: 'No prompt with context and question. '
  7c30b1a1-14da-4458-95e8-c35f8de23110: !Template
    answer_choices: '{{answer0}} ||| {{answer1}} ||| {{answer2}} ||| {{answer3}}'
    id: 7c30b1a1-14da-4458-95e8-c35f8de23110
    jinja: '{{ context }}

      Question: {{ question }}

      The answer to the above question:

      |||

      {{ answer_choices[label] }}'
    metadata: !TemplateMetadata
      choices_in_prompt: false
      metrics:
      - Accuracy
      original_task: false
    name: context_question_description_text
    reference: Context, question, task description, and generate the answer
  85e9ae2c-fbb7-47ed-980c-56da5299e9af: !Template
    answer_choices: '{{answer0}} ||| {{answer1}} ||| {{answer2}} ||| {{answer3}}'
    id: 85e9ae2c-fbb7-47ed-980c-56da5299e9af
    jinja: '{{ context }}

      {{ question }}

      - {{ answer_choices | join("\n - ") }}

      |||

      {{ answer_choices[label] }}'
    metadata: !TemplateMetadata
      choices_in_prompt: true
      metrics:
      - Accuracy
      original_task: true
    name: no_prompt_text
    reference: 'No prompt with answer choices. The template asks the model to generate
      the answer. '
  8a60255c-d44d-4f20-a631-ae1c0c9a7d68: !Template
    answer_choices: A ||| B ||| C ||| D
    id: 8a60255c-d44d-4f20-a631-ae1c0c9a7d68
    jinja: '{{ context }}

      According to the above context, choose the best option to answer the following
      question.

      Question: {{ question }}

      Options:

      A. {{ answer0 }}

      B. {{ answer1 }}

      C. {{ answer2 }}

      D. {{ answer3 }}

      |||

      {{ answer_choices[label] }}'
    metadata: !TemplateMetadata
      choices_in_prompt: true
      metrics:
      - Accuracy
      original_task: true
    name: context_description_question_answer_id
    reference: Original task with context, question and the answer choices.
  9dc80101-516d-448e-8e05-a62b4acead3b: !Template
    answer_choices: A ||| B ||| C ||| D
    id: 9dc80101-516d-448e-8e05-a62b4acead3b
    jinja: '{{ context }}

      {{ question }}

      Pick the best answer from the following options:

      A. {{ answer0 }}

      B. {{ answer1 }}

      C. {{ answer2 }}

      D. {{ answer3 }}

      |||

      {{ answer_choices[label] }}'
    metadata: !TemplateMetadata
      choices_in_prompt: true
      metrics:
      - Accuracy
      original_task: true
    name: context_question_description_answer_id
    reference: Template asks the model to pick the correct answer
  c07c459e-f1f7-409e-9da7-fe5c993a4933: !Template
    answer_choices: '{{answer0}} ||| {{answer1}} ||| {{answer2}} ||| {{answer3}}'
    id: c07c459e-f1f7-409e-9da7-fe5c993a4933
    jinja: '{{ context }}

      According to the above context, answer the following question.

      {{ question }}

      |||

      {{answer_choices[label]}}'
    metadata: !TemplateMetadata
      choices_in_prompt: false
      metrics:
      - Accuracy
      original_task: true
    name: context_description_question_text
    reference: The template asks the model to generate the answer without any answer
      cues
  d5499348-5cb3-467b-a543-206b5dd9806e: !Template
    answer_choices: '{{answer0}} ||| {{answer1}} ||| {{answer2}} ||| {{answer3}}'
    id: d5499348-5cb3-467b-a543-206b5dd9806e
    jinja: '{{ context }}

      {{ question }}

      Pick the best answer from the following options:

      - {{ answer_choices | join("\n - ") }}

      |||

      {{ answer_choices[label] }}'
    metadata: !TemplateMetadata
      choices_in_prompt: true
      metrics:
      - Accuracy
      original_task: true
    name: context_question_description_answer_text
    reference: 'Context, question, task description, and answer choices '
  e640e365-091c-491e-a87e-f529514607e5: !Template
    answer_choices: '{{answer0}} ||| {{answer1}} ||| {{answer2}} ||| {{answer3}}'
    id: e640e365-091c-491e-a87e-f529514607e5
    jinja: "{{question}} \n|||\n{{ answer_choices[label] }}"
    metadata: !TemplateMetadata
      choices_in_prompt: false
      metrics:
      - Accuracy
      original_task: false
    name: only_question_answer
    reference: Template with only question and generates the answer