VictorSanh (HF staff) committed on
Commit 3adea03
1 Parent(s): ffd2e16

initial push

This view is limited to 50 files because it contains too many changes. See raw diff.
Files changed (50)
  1. .gitattributes +1 -0
  2. API_DOCUMENTATION.md +40 -0
  3. CITATION.cff +118 -0
  4. CODEOWNERS +1 -0
  5. CONTRIBUTING.md +321 -0
  6. LICENSE +201 -0
  7. Makefile +16 -0
  8. README.md +7 -3
  9. README_promptsource.md +140 -0
  10. assets/PromptSource ACL Demo Figure.png +3 -0
  11. assets/promptsource_app.png +0 -0
  12. promptsource/__init__.py +4 -0
  13. promptsource/app.py +663 -0
  14. promptsource/session.py +89 -0
  15. promptsource/templates.py +731 -0
  16. promptsource/templates/Zaid/coqa_expanded/templates.yaml +130 -0
  17. promptsource/templates/Zaid/quac_expanded/templates.yaml +91 -0
  18. promptsource/templates/acronym_identification/templates.yaml +248 -0
  19. promptsource/templates/ade_corpus_v2/Ade_corpus_v2_classification/templates.yaml +50 -0
  20. promptsource/templates/ade_corpus_v2/Ade_corpus_v2_drug_ade_relation/templates.yaml +125 -0
  21. promptsource/templates/ade_corpus_v2/Ade_corpus_v2_drug_dosage_relation/templates.yaml +114 -0
  22. promptsource/templates/adversarial_qa/adversarialQA/templates.yaml +120 -0
  23. promptsource/templates/adversarial_qa/dbert/templates.yaml +120 -0
  24. promptsource/templates/adversarial_qa/dbidaf/templates.yaml +120 -0
  25. promptsource/templates/adversarial_qa/droberta/templates.yaml +120 -0
  26. promptsource/templates/aeslc/templates.yaml +163 -0
  27. promptsource/templates/ag_news/templates.yaml +108 -0
  28. promptsource/templates/ai2_arc/ARC-Challenge/templates.yaml +142 -0
  29. promptsource/templates/ai2_arc/ARC-Easy/templates.yaml +142 -0
  30. promptsource/templates/amazon_polarity/templates.yaml +192 -0
  31. promptsource/templates/amazon_reviews_multi/en/templates.yaml +147 -0
  32. promptsource/templates/amazon_us_reviews/Wireless_v1_00/templates.yaml +79 -0
  33. promptsource/templates/ambig_qa/light/templates.yaml +128 -0
  34. promptsource/templates/anli/templates.yaml +221 -0
  35. promptsource/templates/app_reviews/templates.yaml +78 -0
  36. promptsource/templates/aqua_rat/raw/templates.yaml +131 -0
  37. promptsource/templates/art/templates.yaml +133 -0
  38. promptsource/templates/asnq/templates.yaml +211 -0
  39. promptsource/templates/asset/ratings/templates.yaml +119 -0
  40. promptsource/templates/asset/simplification/templates.yaml +168 -0
  41. promptsource/templates/banking77/templates.yaml +288 -0
  42. promptsource/templates/billsum/templates.yaml +153 -0
  43. promptsource/templates/bing_coronavirus_query_set/templates.yaml +77 -0
  44. promptsource/templates/biosses/templates.yaml +186 -0
  45. promptsource/templates/blbooksgenre/title_genre_classifiction/templates.yaml +63 -0
  46. promptsource/templates/blended_skill_talk/templates.yaml +57 -0
  47. promptsource/templates/cbt/CN/templates.yaml +147 -0
  48. promptsource/templates/cbt/NE/templates.yaml +147 -0
  49. promptsource/templates/cbt/P/templates.yaml +147 -0
  50. promptsource/templates/cbt/V/templates.yaml +147 -0
.gitattributes CHANGED
@@ -29,3 +29,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+assets/PromptSource[[:space:]]ACL[[:space:]]Demo[[:space:]]Figure.png filter=lfs diff=lfs merge=lfs -text
API_DOCUMENTATION.md ADDED
@@ -0,0 +1,40 @@
# Manipulating prompts
PromptSource implements four classes to store, manipulate and use prompts and their metadata: `Template`, `Metadata`, `DatasetTemplates` and `TemplateCollection`. All of them are implemented in [`templates.py`](promptsource/templates.py).

## Classes `Template` and `Metadata`
`Template` is a class that wraps a prompt and its associated metadata, and implements the helper functions to use the prompt.

Instances of `Template` have the following main methods that will come in handy:
* `apply(example, truncate=True, highlight_variables=False)`: Create a prompted example by applying the template to the given example
    - `example` (Dict): the dataset example to create a prompt for
    - `truncate` (Bool, defaults to `True`): if True, example fields will be truncated to `TEXT_VAR_LENGTH` chars
    - `highlight_variables` (Bool, defaults to `False`): highlight the added variables (internal use for the app rendering)
* `get_id()`: Get the uuid of the prompt
* `get_name()`: Get the name of the prompt
* `get_reference()`: Get any additional information about the prompt (such as a bibliographic reference)
* `get_answer_choices_list(example)`: If applicable, return a list of answer choices for a given example

Each `Template` also has a `metadata` attribute, an instance of the class `Metadata` that encapsulates the following three attributes:
* `original_task`: If True, this prompt asks a model to perform the original task designed for this dataset.
* `choices_in_prompt`: If True, the answer choices are included in the templates such that models see those choices in the input. Only applicable to classification tasks.
* `metrics`: List of strings denoting metrics to use for evaluation
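As a minimal sketch of how these methods fit together (mirroring the usage example in [README_promptsource.md](README_promptsource.md), which is where the `ag_news` dataset and the prompt name "classify_question_first" come from):

```python
from datasets import load_dataset
from promptsource.templates import DatasetTemplates

# Load one example and the prompts written for ag_news
example = load_dataset("ag_news", split="train")[1]
prompt = DatasetTemplates("ag_news")["classify_question_first"]

# apply() renders the template on the example and returns the (input, target) pair
result = prompt.apply(example)
input_text, target_text = result[0], result[1]

# Inspect the prompt and its metadata
print(prompt.get_name(), prompt.get_id())
print(prompt.metadata.original_task, prompt.metadata.metrics)
```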
## Class `DatasetTemplates`
`DatasetTemplates` is a class that wraps all the prompts (each of them an instance of `Template`) for a specific dataset/subset and implements all the helper functions necessary to read/write the YAML file in which the prompts are saved.

You will likely mainly be interested in getting the existing prompts and their names for a given dataset. You can do that with the following instantiation:
```python
>>> template_key = f"{dataset_name}/{subset_name}" if subset_name is not None else dataset_name
>>> prompts = DatasetTemplates(template_key)
>>> len(prompts)  # Returns the number of prompts for the given dataset
>>> prompts.all_template_names  # Returns a sorted list of all template names for this dataset
```
## Class `TemplateCollection`
`TemplateCollection` is a class that encapsulates all the prompts available under PromptSource by wrapping the `DatasetTemplates` class. It initializes the `DatasetTemplates` for all existing template folders, gives access to each `DatasetTemplates`, and provides aggregated counts over all `DatasetTemplates`.

The main methods are:
* `get_dataset(dataset_name, subset_name)`: Return the `DatasetTemplates` object corresponding to the dataset name
    - `dataset_name` (Str): name of the dataset to get
    - `subset_name` (Str, defaults to None): name of the subset
* `get_templates_count()`: Return the prompt counts over all datasets. NB: we don't break down datasets into subsets for the count, i.e., subset counts are included in the dataset count
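For instance, a minimal sketch that puts these two methods together (the `super_glue`/`rte` pair is just the illustrative example used elsewhere in this repo):

```python
from promptsource.templates import TemplateCollection

collection = TemplateCollection()

# Total number of prompts across all datasets
counts = collection.get_templates_count()
print(sum(counts.values()))

# Access the DatasetTemplates for a specific dataset/subset
rte_prompts = collection.get_dataset("super_glue", "rte")
print(rte_prompts.all_template_names)
```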
CITATION.cff ADDED
@@ -0,0 +1,118 @@
cff-version: "1.2.0"
date-released: 2022-02
message: "If you use this software, please cite it using these metadata."
title: "PromptSource"
url: "https://github.com/bigscience-workshop/promptsource"
authors:
  - family-names: Bach
    given-names: "Stephen H."
  - family-names: Sanh
    given-names: Victor
  - family-names: Yong
    given-names: Zheng-Xin
  - family-names: Webson
    given-names: Albert
  - family-names: Raffel
    given-names: Colin
  - family-names: Nayak
    given-names: "Nihal V."
  - family-names: Sharma
    given-names: Abheesht
  - family-names: Kim
    given-names: Taewoon
  - family-names: Bari
    given-names: "M Saiful"
  - family-names: Fevry
    given-names: Thibault
  - family-names: Alyafeai
    given-names: Zaid
  - family-names: Dey
    given-names: Manan
  - family-names: Santilli
    given-names: Andrea
  - family-names: Sun
    given-names: Zhiqing
  - family-names: Ben-David
    given-names: Srulik
  - family-names: Xu
    given-names: Canwen
  - family-names: Chhablani
    given-names: Gunjan
  - family-names: Wang
    given-names: Han
  - family-names: Fries
    given-names: "Jason Alan"
  - family-names: Al-shaibani
    given-names: "Maged S."
  - family-names: Sharma
    given-names: Shanya
  - family-names: Thakker
    given-names: Urmish
  - family-names: Almubarak
    given-names: Khalid
  - family-names: Tang
    given-names: Xiangru
  - family-names: Tian-Jian
    given-names: Mike
  - family-names: Rush
    given-names: "Alexander M."
preferred-citation:
  type: article
  authors:
    - family-names: Bach
      given-names: "Stephen H."
    - family-names: Sanh
      given-names: Victor
    - family-names: Yong
      given-names: Zheng-Xin
    - family-names: Webson
      given-names: Albert
    - family-names: Raffel
      given-names: Colin
    - family-names: Nayak
      given-names: "Nihal V."
    - family-names: Sharma
      given-names: Abheesht
    - family-names: Kim
      given-names: Taewoon
    - family-names: Bari
      given-names: "M Saiful"
    - family-names: Fevry
      given-names: Thibault
    - family-names: Alyafeai
      given-names: Zaid
    - family-names: Dey
      given-names: Manan
    - family-names: Santilli
      given-names: Andrea
    - family-names: Sun
      given-names: Zhiqing
    - family-names: Ben-David
      given-names: Srulik
    - family-names: Xu
      given-names: Canwen
    - family-names: Chhablani
      given-names: Gunjan
    - family-names: Wang
      given-names: Han
    - family-names: Fries
      given-names: "Jason Alan"
    - family-names: Al-shaibani
      given-names: "Maged S."
    - family-names: Sharma
      given-names: Shanya
    - family-names: Thakker
      given-names: Urmish
    - family-names: Almubarak
      given-names: Khalid
    - family-names: Tang
      given-names: Xiangru
    - family-names: Tian-Jian
      given-names: Mike
    - family-names: Rush
      given-names: "Alexander M."
  title: "PromptSource: An Integrated Development Environment and Repository for Natural Language Prompts"
  year: 2022
  publisher: "arXiv"
  url: "https://arxiv.org/abs/2202.01279"
  address: "Online"
CODEOWNERS ADDED
@@ -0,0 +1 @@
@bigscience-workshop/promptsource-codeowners
CONTRIBUTING.md ADDED
@@ -0,0 +1,321 @@
# Contributing

The best way to contribute to growing P3 is by writing prompts for new datasets!

### What are Prompts?

A prompt consists of a template (an input template and a target template), along with a collection of associated metadata. A template is a piece of code written in a templating language called [Jinja](https://jinja.palletsprojects.com/en/3.0.x/). A template defines a function that maps an example from a dataset in the [Hugging Face datasets library](https://huggingface.co/datasets) to two strings of text. The first is called the _input_, which provides all information that will be available to solve a task, such as the instruction and the context. The second piece is called the _target_, which is the desired response to the prompt.
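The rough idea, sketched with the `jinja2` library directly (this is an illustration of the mapping, not PromptSource's exact implementation; the premise/hypothesis fields are hypothetical):

```python
from jinja2 import Environment

env = Environment()

# A template maps an example dict to an (input, target) pair, separated by "|||"
template = env.from_string(
    'Does "{{hypothesis}}" follow from "{{premise}}"? ||| {{ ["Yes", "No"][label] }}'
)

example = {"premise": "The cat sat.", "hypothesis": "An animal sat.", "label": 0}
rendered = template.render(**example)
input_text, target_text = [part.strip() for part in rendered.split("|||")]
print(input_text)   # Does "An animal sat." follow from "The cat sat."?
print(target_text)  # Yes
```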
### Quick-Start Guide to Writing Prompts

1. **Set up the app.** Fork the app and set it up using the [README](https://github.com/bigscience-workshop/promptsource/blob/main/README.md).
2. **Examine the dataset.** In the "Sourcing" mode, select or type the dataset into the dropdown. If the dataset has subsets (subsets are not the same as splits), you can select which one to work on. Note that prompts are subset-specific. You can find background information on the dataset by reading the information in the app. The dataset is a collection of examples, and each example is a Python dictionary. The sidebar will tell you the schema that each example has.
3. **Start a new prompt.** Enter a name for your first prompt and hit "Create." You can always update the name later. If you want to cancel the prompt, select "Delete Prompt."
4. **Write the prompt.** In the box labeled "Template," enter a Jinja expression. See the [getting started guide](#getting-started-using-jinja-to-write-prompts) and [cookbook](#jinja-cookbook) for details on how to write templates.
5. **Fill in metadata.** Fill in the metadata for the current prompt: reference, original task, choices in templates, metrics, languages, and answer choices. See [Metadata](#metadata) for more details about these fields.
6. **Save the prompt.** Hit the "Save" button. The output of the prompt applied to the current example will appear in the right sidebar.
7. **Verify the prompt.** Check that you didn't miss any case by scrolling through a handful of examples of the prompted dataset using the "Prompted dataset viewer" mode.
8. **Write between 5 and 10 prompts.** Repeat steps 3 to 7 to create between 5 and 10 (more if you want!) prompts per dataset/subset. Feel free to introduce a mix of formats, some that follow the templates listed in the [best practices](#best-practices) and some that are more diverse in format and formulation.
9. **Duplicate the prompt(s).** If the dataset you have chosen bears the same format as other datasets (for instance, `MNLI` and `SNLI` have identical formats), you can simply duplicate the prompts you have written to these additional datasets.
10. **Upload the template(s).** Submit a PR using the instructions [here](#uploading-prompts).

## Getting Started Using Jinja to Write Prompts

Here is a quick crash course on using [Jinja](https://jinja.palletsprojects.com/en/3.0.x/) to write templates. More advanced usage is in the [cookbook](#jinja-cookbook).

Generally, in a template, you'll want to use a mix of hard-coded text that is task-specific and stays the same across examples, and commands that tailor the input and target to a specific example.

To write text that should be rendered as written, just write it normally. The following "template" will produce the same text every time:
```jinja2
This is just literal text that will be printed the same way every time.
```

To make your template do something more interesting, you'll need to use Jinja expressions. Jinja expressions are surrounded by curly braces `{` and `}`. One common thing you'll want to do is access information in the dataset example. When applied to an example, you can access any value in the example dictionary via its key. If you just want to print that value, surround it in double curly braces. For example, if you want to print a value with the key `text`, use this:
```jinja2
The text in this example is {{ text }}.
```

You can also use information from the example to control behavior. For example, suppose we have a label with the key `label` in our example, which has a value of either 0 or 1. That's not very "natural" language, so maybe we want to decide which label name to use based on the example. We can do this by creating a list and indexing it with the example key:
```jinja2
The label for this example is {{ ["Label A", "Label B"][label] }}.
```
We can also use dictionaries for the same thing:
```jinja2
The label for this example is {{
    {"a": "Label A",
     "b": "Label B"
    }[label]
}}.
```

Note that some things in a template are particular to the task, and should not be modified by downstream steps that try to increase the diversity of the prompts. A common example is listing label names in the prompt to provide choices. Anything that should not be modified by data augmentation should be surrounded by double curly braces and quoted. For example:
```jinja2
The choices are {{"a"}}, {{"b"}}, and {{"c"}}.
```
You can leave binary options like yes/no, true/false, etc. unprotected.

Finally, remember that a template must produce two strings: an input and a target. To separate these two pieces, use three vertical bars `|||`. So, a complete template for SQuAD could be:
```jinja2
I'm working on the final exam for my class and am trying to figure out the answer
to the question "{{question}}" I found the following info on Wikipedia and I think
it has the answer. Can you tell me the answer?
{{context}}
|||
{{answers["text"][0]}}
```

## Metadata

In addition to the template itself, you need to fill out several other fields. These metadata facilitate finding and using the prompts.
* **Prompt Reference.** If your template was inspired by a paper, note the reference in the "Prompt Reference" section. You can also add a description of what your template does.
* **Original Task?** The checkbox should be checked if the template requires solving a task that the underlying dataset is used to study. For example, a template that asks a question from a question answering dataset would be an original task template, but one that asks to generate a question for a given answer would not.
* **Choices in Template?** The checkbox should be checked if the input explicitly indicates the options for the possible outputs (regardless of whether `answer_choices` is used).
* **Metrics.** Use the multiselect widget to select all metrics commonly used to evaluate this task. Choose "Other" if there is one that is not included in the list.
* **Languages.** Use the multiselect widget to select all languages used in the prompt. This is independent of what languages are used in the underlying dataset. For example, you could have an English prompt for a Spanish dataset.
* **Answer Choices.** If the prompt has a small set of possible outputs (e.g., Yes/No, class labels, entailment judgements, etc.), then the prompt should define and use answer choices as follows. This allows evaluation to consider just the possible targets for scoring model outputs. The answer choices field is a Jinja expression that should produce a `|||`-separated list of all possible targets. If the choices don't change from example to example, then you can just list them. For example, for AG News this is:
```jinja2
World News ||| Sports ||| Business ||| Science and Technology
```
Note that whitespace is stripped from the ends of the choices. If answer choices are set, then they are also available to Jinja in the prompt itself in the form of a list called `answer_choices`. You should use this list in both input and target templates so that the resulting inputs and targets match the answer choices field exactly. For example, a prompt for AG News could use `answer_choices` like this:
```jinja2
{{text}} Which of the following sections of a newspaper would
this article likely appear in? {{answer_choices[0]}}, {{answer_choices[1]}},
{{answer_choices[2]}}, or {{answer_choices[3]}}?
|||
{{ answer_choices[label] }}
```
Since Answer Choices is a Jinja expression that has access to the example, it can also be used to extract example-specific choices from the underlying data. For example, in AI2 ARC, we could use
```jinja2
{{choices.text | join("|||")}}
```
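To make the mechanics concrete, here is a rough sketch of how such an answer choices expression can be rendered and split on `|||` (an illustration with plain `jinja2`, not PromptSource's exact implementation; the example dict is hypothetical):

```python
from jinja2 import Environment

env = Environment()

# Example-specific choices, as in the AI2 ARC case above
answer_choices_expr = '{{choices.text | join("|||")}}'
example = {"choices": {"text": ["gravity", "friction", "magnetism"], "label": ["A", "B", "C"]}}

rendered = env.from_string(answer_choices_expr).render(**example)
choices = [c.strip() for c in rendered.split("|||")]
print(choices)  # ['gravity', 'friction', 'magnetism']
```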
## Best Practices

* **Writing target templates.** The target template should only contain the answer to the task. It should not contain any extra text such as "The answer is…" (unless that extra text is also in `answer_choices`). If `answer_choices` is populated, the output should only contain the values in `answer_choices`.
* **Formatting multiple-choice questions.** If the target should match the name of the choice (e.g., "World News"), then the input should list the choices either as part of a grammatical question or as a list with a marker for each (e.g., dashes). If the target should indicate the choice from the list (e.g., "A," "Explanation 1," etc.), then the input should list the choices with the indicator before each one.
* **Choosing input and target pairs.** Lots of datasets have multiple columns that can be combined to form different (input, target) pairs, i.e., different "tasks". Don't hesitate to introduce some diversity by prompting a given dataset into multiple tasks, and provide some description in the "Template Reference" text box. An example is given in the already prompted `movie_rationales`.
* **Filtering prompts.** If a prompt is applied to an example and produces an empty string, that prompt/example pair will be skipped. You can therefore create prompts that only apply to a subset of the examples by wrapping them in Jinja if statements (see the sketch after this list). For example, in the `TREC` dataset, there are fine-grained categories that are only applicable to certain coarse-grained categories. We can capture this with the following prompt:
  ```jinja2
  {% if label_coarse == 0 %}
  Is this question asking for a {{"definition"}}, a {{"description"}}, a {{"manner of action"}}, or a {{"reason"}}?
  {{text}}
  |||
  {{ {0: "Manner", 7: "Definition", 9: "Reason", 12: "Description"}[label_fine] }}
  {% endif %}
  ```
  For datasets that have splits with no labels (for instance, a test split without ground-truth labels), you can wrap the conditional statement around the target side. For instance, for `super_glue/boolq`, the following prompt would return an empty target on the test split, but not an empty prompted example:
  ```jinja2
  {{ passage }}
  Question: {{ question }}
  Answer:
  |||
  {% if label != -1 %}
  {{ answer_choices[label] }}
  {% endif %}
  ```
* **Conditional generation format.** Always specify the target and separate it from the input with the vertical bars `|||`. The target will be generated by a generative model conditioned on the input you wrote. You can always transform an "infix" prompt format
  ```jinja2
  Given that {{premise}}, it {{ ["must be true", "might be true", "must be false"][label] }} that {{hypothesis}}
  ```
  into a conditional generation format:
  ```jinja2
  Given that {{premise}}, it {{ "must be true, might be true, or must be false" }} that {{hypothesis}}? |||
  {{ ["must be true", "might be true", "must be false"][label] }}
  ```
* **Pre-defined formats.** The goal is to collect a diverse set of prompts with diverse formats, but we also want to include a few less diverse prompts that follow the following two structures:
  1. A question-answer pair with optional multiple choices, like:
     ```
     [Context] # optional depending on the task
     [Question]
     [Label1], [Label2], [Label3] # optional depending on the task
     ```
     So for SNLI it will look like:
     ```jinja2
     {{premise}}
     Is it the case that {{hypothesis}}?
     {{ "Yes" }}, {{ "No" }}, {{ "Maybe" }} ||| {{ ["Yes", "No", "Maybe"][label] }}
     ```
  2. A task description followed by the input. So for SNLI it will look like:
     ```jinja2
     Determine the relation between the following two sentences. The relations are entailment, contradiction, or neutral.
     {{premise}}
     {{hypothesis}} ||| {{label}}
     ```
* **Setting variables.** You might want to use the Jinja expression `{% set %}` to define a variable. If you do, do it at the beginning of the prompt, outside any conditional statements, so that the automatic prompt checks recognize that the variable is defined.
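As a sketch of the filtering behavior described above (illustrative only; PromptSource performs this skipping internally when prompts are applied):

```python
from jinja2 import Environment

env = Environment()
template = env.from_string("{% if label_coarse == 0 %}{{text}} ||| {{label_fine}}{% endif %}")

examples = [
    {"text": "What is an atom?", "label_coarse": 0, "label_fine": 7},
    {"text": "Who wrote Hamlet?", "label_coarse": 3, "label_fine": 2},
]

for ex in examples:
    rendered = template.render(**ex).strip()
    if not rendered:
        continue  # prompt/example pairs that render to an empty string are skipped
    print(rendered)
```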
## More Examples

Here are a few interesting examples of prompts with explanations.

Here's one for `hellaswag`:
```jinja2
First, {{ ctx_a.lower() }} Then, {{ ctx_b.lower() }}...

Complete the above description with a chosen ending:

(a) {{ answer_choices[0] }}

(b) {{ answer_choices[1] }}

(c) {{ answer_choices[2] }}

(d) {{ answer_choices[3] }}

||| {{ answer_choices[label | int()] }}
```
Notice how it uses functions to consistently normalize the casing of the information and provides lots of context (referring explicitly to "description" and "chosen ending").

Here's one for `head_qa`:
```jinja2
Given this list of statements about {{category}}: {{ answers | map(attribute="atext")
| map("lower") | map("trim", ".") | join(", ") }}.
Which one is the most appropriate answer/completion for the paragraph that follows?
{{qtext}}
|||
{% for answer in answers if answer["aid"]==ra -%}
{{answer["atext"]}}
{%- endfor %}
```
Like the one above, it uses filters to present the choices in a readable way. Also, it uses a for loop with a condition to handle the more intricate dataset schema.

Here's one for `paws`:
```jinja2
Sentence 1: {{sentence1}}
Sentence 2: {{sentence2}}
Question: Does Sentence 1 paraphrase Sentence 2? Yes or No?
|||
{{answer_choices[label]}}
```
Notice that the choices `Yes or No` are not escaped. Yes/no and true/false are choices that do not need to be escaped (unlike categories).

## Uploading Prompts

Once you save or modify a template, the corresponding file inside the `templates` directory in the repo will be modified. To upload it, follow these steps:
1. Run `make style` and `make quality`.
2. Commit the modified template files (anything under `templates`) to git.
3. Push to your fork on GitHub.
4. Open a pull request against `main` on the PromptSource repo.

## Jinja Cookbook

- Accessing nested attributes of a dict
  ```jinja
  {{ answers_spans.spans }}
  ```

- Joining a list
  ```jinja
  {{ spans_list | join(", ") }}
  ```

- If conditions
  ```jinja
  {% if label==0 %}
  do_something
  {% elif condition %}
  do_something_else
  {% endif %}
  ```

- Using `zip()` to zip multiple lists
  ```jinja
  {% for a, b in zip(list_A, list_B) %}
  do_something_with_a_and_b
  {% endfor %}
  ```
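Note that `zip` is not a standard Jinja builtin; PromptSource makes it available to templates. As a sketch of how a Python builtin can be exposed to templates with plain `jinja2` (illustrative, not PromptSource's exact setup):

```python
from jinja2 import Environment

env = Environment()
env.globals["zip"] = zip  # expose the Python builtin to all templates of this environment

template = env.from_string("{% for a, b in zip(list_A, list_B) %}{{a}}-{{b}} {% endfor %}")
print(template.render(list_A=[1, 2], list_B=["x", "y"]))  # 1-x 2-y
```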

Jinja includes lots of complex features, but for most cases you likely only need the patterns above. If there's something you're not sure how to do, just open an issue. We'll collect other frequent patterns here.
LICENSE ADDED
@@ -0,0 +1,201 @@
                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

   APPENDIX: How to apply the Apache License to your work.

      To apply the Apache License to your work, attach the following
      boilerplate notice, with the fields enclosed by brackets "[]"
      replaced with your own identifying information. (Don't include
      the brackets!) The text should be enclosed in the appropriate
      comment syntax for the file format. We also recommend that a
      file or class name and description of purpose be included on the
      same "printed page" as the copyright notice for easier
      identification within third-party archives.

   Copyright [yyyy] [name of copyright owner]

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
Makefile ADDED
@@ -0,0 +1,16 @@
.PHONY: quality style

check_dirs := promptsource

# Check that source code meets quality standards

quality:
	black --check --line-length 119 --target-version py38 $(check_dirs)
	isort --check-only $(check_dirs)
	flake8 $(check_dirs) --max-line-length 119

# Format source code automatically

style:
	black --line-length 119 --target-version py38 $(check_dirs)
	isort $(check_dirs)
README.md CHANGED
@@ -4,9 +4,13 @@ emoji: 👁
 colorFrom: red
 colorTo: blue
 sdk: streamlit
-sdk_version: 1.10.0
-app_file: app.py
+sdk_version: 0.82
+app_file: promptsource/app.py
 pinned: false
 ---
 
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+PromptSource is a toolkit for creating, sharing and using natural language prompts. This Space is a hosted demo of PromptSource and allows you to browse through existing prompts.
+
+More information about PromptSource and how to use it is available on the [GitHub repository](https://github.com/bigscience-workshop/promptsource).
+
+NB: As of now, this Space is not synced with the GitHub repository automatically; it captures the state of the repository on October 21, 2022.
README_promptsource.md ADDED
@@ -0,0 +1,140 @@
# PromptSource
**PromptSource is a toolkit for creating, sharing and using natural language prompts.**

Recent work has shown that large language models exhibit the ability to perform reasonable zero-shot generalization to new tasks. For instance, [GPT-3](https://arxiv.org/abs/2005.14165) demonstrated that large language models have strong zero- and few-shot abilities. [FLAN](https://arxiv.org/abs/2109.01652) and [T0](https://arxiv.org/abs/2110.08207) then demonstrated that pre-trained language models fine-tuned in a massively multitask fashion yield even stronger zero-shot performance. A common denominator in these works is the use of prompts, which have gathered interest among NLP researchers and engineers. This emphasizes the need for new tools to create, share and use natural language prompts.

Prompts are functions that map an example from a dataset to a natural language input and target output. PromptSource contains a growing collection of prompts (which we call **P3**: **P**ublic **P**ool of **P**rompts). As of January 20, 2022, there are ~2,000 English prompts for 170+ English datasets in [P3](https://huggingface.co/datasets/bigscience/P3).

<p align="center">
  <img src="assets/PromptSource ACL Demo Figure.png" width="800"/>
</p>

PromptSource provides the tools to create and share natural language prompts (see [How to create prompts](#how-to-create-prompts)), and then to use the thousands of existing and newly created prompts through a simple API (see [How to use prompts](#how-to-use-prompts)). Prompts are saved in standalone structured files and are written in a simple templating language called Jinja. An example of a prompt available in PromptSource for [SNLI](https://huggingface.co/datasets/snli) is:
```jinja2
{{premise}}

Question: Does this imply that "{{hypothesis}}"? Yes, no, or maybe? ||| {{answer_choices[label]}}
```

**You can browse through existing prompts on the [hosted version of PromptSource](https://bigscience.huggingface.co/promptsource).**

## Setup
If you do not intend to modify prompts, you can simply run:
```bash
pip install promptsource
```

Otherwise, you need to install the repo locally:
1. Download the repo
2. Navigate to the root directory of the repo
3. Run `pip install -e .` to install the `promptsource` module

*Note: for stability reasons, you will currently need a Python 3.7 environment to run the last step. However, if you only intend to use the prompts, and not create new prompts through the interface, you can remove this constraint in [`setup.py`](setup.py) and install the package locally.*

## How to use prompts
You can apply prompts to examples from datasets of the [Hugging Face Datasets library](https://github.com/huggingface/datasets).
```python
# Load an example from the ag_news dataset
>>> from datasets import load_dataset
>>> dataset = load_dataset("ag_news", split="train")
>>> example = dataset[1]

# Load prompts for this dataset
>>> from promptsource.templates import DatasetTemplates
>>> ag_news_prompts = DatasetTemplates('ag_news')

# Print all the prompts available for this dataset. The keys of the dict are the uuids that uniquely identify each prompt, and the values are instances of `Template`, which wraps prompts
>>> print(ag_news_prompts.templates)
{'24e44a81-a18a-42dd-a71c-5b31b2d2cb39': <promptsource.templates.Template object at 0x7fa7aeb20350>, '8fdc1056-1029-41a1-9c67-354fc2b8ceaf': <promptsource.templates.Template object at 0x7fa7aeb17c10>, '918267e0-af68-4117-892d-2dbe66a58ce9': <promptsource.templates.Template object at 0x7fa7ac7a2310>, '9345df33-4f23-4944-a33c-eef94e626862': <promptsource.templates.Template object at 0x7fa7ac7a2050>, '98534347-fff7-4c39-a795-4e69a44791f7': <promptsource.templates.Template object at 0x7fa7ac7a1310>, 'b401b0ee-6ffe-4a91-8e15-77ee073cd858': <promptsource.templates.Template object at 0x7fa7ac7a12d0>, 'cb355f33-7e8c-4455-a72b-48d315bd4f60': <promptsource.templates.Template object at 0x7fa7ac7a1110>}

# Select a prompt by its name
>>> prompt = ag_news_prompts["classify_question_first"]

# Apply the prompt to the example
>>> result = prompt.apply(example)
>>> print("INPUT: ", result[0])
INPUT: What label best describes this news article?
Carlyle Looks Toward Commercial Aerospace (Reuters) Reuters - Private investment firm Carlyle Group,\which has a reputation for making well-timed and occasionally\controversial plays in the defense industry, has quietly placed\its bets on another part of the market.
>>> print("TARGET: ", result[1])
TARGET: Business
```

If you are looking for the prompts available for a particular subset of a dataset, you should use the following syntax:
```python
dataset_name, subset_name = "super_glue", "rte"

dataset = load_dataset(dataset_name, subset_name, split="train")
example = dataset[0]

prompts = DatasetTemplates(f"{dataset_name}/{subset_name}")
```

You can also collect all the available prompts and their associated datasets:

```python
>>> from promptsource.templates import TemplateCollection

# Get all the prompts available in PromptSource
>>> collection = TemplateCollection()

# Print a dict where the key is the pair (dataset name, subset name)
# and the value is an instance of DatasetTemplates
>>> print(collection.datasets_templates)
{('poem_sentiment', None): <promptsource.templates.DatasetTemplates object at 0x7fa7ac7939d0>, ('common_gen', None): <promptsource.templates.DatasetTemplates object at 0x7fa7ac795410>, ('anli', None): <promptsource.templates.DatasetTemplates object at 0x7fa7ac794590>, ('cc_news', None): <promptsource.templates.DatasetTemplates object at 0x7fa7ac798a90>, ('craigslist_bargains', None): <promptsource.templates.DatasetTemplates object at 0x7fa7ac7a2c10>, ...}
```

You can learn more about PromptSource's API to store, manipulate and use prompts in the [documentation](API_DOCUMENTATION.md).

## How to create prompts
PromptSource provides a web-based GUI that enables developers to write prompts in a templating language and immediately view their outputs on different examples.

There are 3 modes in the app:
- **Sourcing**: create and write new prompts
- **Prompted dataset viewer**: check the prompts you wrote (or the existing ones) on the entire dataset
- **Helicopter view**: aggregate high-level metrics on the current state of P3

<p align="center">
  <img src="assets/promptsource_app.png" width="800"/>
</p>

To launch the app locally, please first make sure you have followed the steps in [Setup](#setup), and then, from the root directory of the repo, run:
```bash
streamlit run promptsource/app.py
```

You can also browse through existing prompts on the [hosted version of PromptSource](https://bigscience.huggingface.co/promptsource). Note that the hosted version disables the Sourcing mode (`streamlit run promptsource/app.py -- --read-only`).

### Writing prompts
Before creating new prompts, you should read the [contribution guidelines](CONTRIBUTING.md), which give a step-by-step description of how to contribute to the collection of prompts.

### Datasets that require manual downloads
Some datasets are not handled automatically by `datasets` and require users to download the dataset manually (`story_cloze`, for instance).

To handle those datasets as well, we require users to download the dataset and put it in `~/.cache/promptsource`. This is the root directory containing all manually downloaded datasets.

You can override this default path using the `PROMPTSOURCE_MANUAL_DATASET_DIR` environment variable. This should point to the root directory.

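As a minimal sketch, resolving that root directory could look like this (the default mirrors `DEFAULT_PROMPTSOURCE_CACHE_HOME` defined in `promptsource/__init__.py`; treat this as an illustration rather than PromptSource's exact lookup code):

```python
import os
from pathlib import Path

# The environment variable takes precedence; otherwise fall back to the default cache root
manual_dataset_dir = os.environ.get(
    "PROMPTSOURCE_MANUAL_DATASET_DIR",
    str(Path("~/.cache/promptsource").expanduser()),
)
print(manual_dataset_dir)
```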
## Development structure
PromptSource and P3 were originally developed as part of the [BigScience project for open research 🌸](https://bigscience.huggingface.co/), a year-long initiative targeting the study of large models and datasets. The goal of the project is to research language models in a public environment outside large technology companies. The project has 600 researchers from 50 countries and more than 250 institutions.

In particular, PromptSource and P3 were the first steps for the paper [Multitask Prompted Training Enables Zero-Shot Task Generalization](https://arxiv.org/abs/2110.08207).

**You will find the official repository to reproduce the results of the paper here: https://github.com/bigscience-workshop/t-zero.** We also released T0* (pronounced "T Zero"), a series of models trained on [P3](https://huggingface.co/datasets/bigscience/P3) and presented in the paper. Checkpoints are available [here](https://huggingface.co/bigscience/T0pp).

## Known Issues
**Warning or Error about Darwin on OS X:** Try downgrading PyArrow to 3.0.0.

**ConnectionRefusedError: [Errno 61] Connection refused:** Happens occasionally. Try restarting the app.

## Citation
If you find P3 or PromptSource useful, please cite the following reference:
```bibtex
@misc{bach2022promptsource,
  title={PromptSource: An Integrated Development Environment and Repository for Natural Language Prompts},
  author={Stephen H. Bach and Victor Sanh and Zheng-Xin Yong and Albert Webson and Colin Raffel and Nihal V. Nayak and Abheesht Sharma and Taewoon Kim and M Saiful Bari and Thibault Fevry and Zaid Alyafeai and Manan Dey and Andrea Santilli and Zhiqing Sun and Srulik Ben-David and Canwen Xu and Gunjan Chhablani and Han Wang and Jason Alan Fries and Maged S. Al-shaibani and Shanya Sharma and Urmish Thakker and Khalid Almubarak and Xiangru Tang and Mike Tian-Jian Jiang and Alexander M. Rush},
  year={2022},
  eprint={2202.01279},
  archivePrefix={arXiv},
  primaryClass={cs.LG}
}
```
assets/PromptSource ACL Demo Figure.png ADDED
Git LFS Details
  • SHA256: 55f0805843a41274c819ca2a90658985d03cb026bfeaf82c929bbd2da0132a16
  • Pointer size: 132 Bytes
  • Size of remote file: 3.54 MB
assets/promptsource_app.png ADDED
promptsource/__init__.py ADDED
@@ -0,0 +1,4 @@
from pathlib import Path


DEFAULT_PROMPTSOURCE_CACHE_HOME = str(Path("~/.cache/promptsource").expanduser())
promptsource/app.py ADDED
@@ -0,0 +1,663 @@
+ import argparse
+ import functools
+ import multiprocessing
+ import os
+ import textwrap
+ from hashlib import sha256
+ from multiprocessing import Manager, Pool
+
+ import pandas as pd
+ import plotly.express as px
+ import streamlit as st
+ from datasets import get_dataset_infos
+ from datasets.info import DatasetInfosDict
+ from pygments import highlight
+ from pygments.formatters import HtmlFormatter
+ from pygments.lexers import DjangoLexer
+
+ from promptsource import DEFAULT_PROMPTSOURCE_CACHE_HOME
+ from promptsource.session import _get_state
+ from promptsource.templates import INCLUDED_USERS, LANGUAGES, METRICS, DatasetTemplates, Template, TemplateCollection
+ from promptsource.utils import (
+     get_dataset,
+     get_dataset_confs,
+     list_datasets,
+     removeHyphen,
+     renameDatasetColumn,
+     render_features,
+ )
+
+
+ DATASET_INFOS_CACHE_DIR = os.path.join(DEFAULT_PROMPTSOURCE_CACHE_HOME, "DATASET_INFOS")
+ os.makedirs(DATASET_INFOS_CACHE_DIR, exist_ok=True)
+
+ # Python 3.8 switched the default start method from fork to spawn. OS X also has
+ # some issues related to fork, see, e.g., https://github.com/bigscience-workshop/promptsource/issues/572
+ # so we make sure we always use spawn for consistency
+ multiprocessing.set_start_method("spawn", force=True)
+
+
+ def get_infos(all_infos, d_name):
+     """
+     Wrapper for multiprocess-loading of dataset infos
+
+     :param all_infos: multiprocess-safe dictionary
+     :param d_name: dataset name
+     """
+     d_name_bytes = d_name.encode("utf-8")
+     d_name_hash = sha256(d_name_bytes)
+     foldername = os.path.join(DATASET_INFOS_CACHE_DIR, d_name_hash.hexdigest())
+     if os.path.isdir(foldername):
+         infos_dict = DatasetInfosDict.from_directory(foldername)
+     else:
+         infos = get_dataset_infos(d_name)
+         infos_dict = DatasetInfosDict(infos)
+         os.makedirs(foldername)
+         infos_dict.write_to_directory(foldername)
+     all_infos[d_name] = infos_dict
+
+
+ def format_language(tag):
+     """
+     Formats a language tag for display in the UI.
+
+     For example, if the tag is "en", then the function returns "en (English)"
+     :param tag: language tag
+     :return: formatted language name
+     """
+     return tag + " (" + LANGUAGES[tag] + ")"
+
+
+ # add an argument for read-only
+ # At the moment, streamlit does not handle python script arguments gracefully.
+ # Thus, for read-only mode, you have to type one of the below two:
+ # streamlit run promptsource/app.py -- -r
+ # streamlit run promptsource/app.py -- --read-only
+ # Check https://github.com/streamlit/streamlit/issues/337 for more information.
+ parser = argparse.ArgumentParser(description="run app.py with args")
+ parser.add_argument("-r", "--read-only", action="store_true", help="whether to run it as read-only mode", default=True)
+
+ args = parser.parse_args()
+ if args.read_only:
+     select_options = ["Helicopter view", "Prompted dataset viewer"]
+     side_bar_title_prefix = "Promptsource (Read only)"
+ else:
+     select_options = ["Helicopter view", "Prompted dataset viewer", "Sourcing"]
+     side_bar_title_prefix = "Promptsource"
+
+ #
+ # Cache functions
+ #
+ get_dataset = st.cache(allow_output_mutation=True)(get_dataset)
+ get_dataset_confs = st.cache(get_dataset_confs)
+ list_datasets = st.cache(list_datasets)
+
+
+ def run_app():
+     #
+     # Loads session state
+     #
+     state = _get_state()
+
+     def reset_template_state():
+         state.template_name = None
+         state.jinja = None
+         state.reference = None
+
+     #
+     # Initial page setup
+     #
+     st.set_page_config(page_title="Promptsource", layout="wide")
+     st.sidebar.markdown(
+         "<center><a href='https://github.com/bigscience-workshop/promptsource'>💻Github - Promptsource\n\n</a></center>",
+         unsafe_allow_html=True,
+     )
+     mode = st.sidebar.selectbox(
+         label="Choose a mode",
+         options=select_options,
+         index=0,
+         key="mode_select",
+     )
+     st.sidebar.title(f"{side_bar_title_prefix} 🌸 - {mode}")
+
+     #
+     # Adds pygments styles to the page.
+     #
+     st.markdown(
+         "<style>" + HtmlFormatter(style="friendly").get_style_defs(".highlight") + "</style>", unsafe_allow_html=True
+     )
+
+     WIDTH = 140
+
+     def show_jinja(t, width=WIDTH):
+         def replace_linebreaks(t):
+             """
+             st.write does not handle double line breaks very well. When it encounters `\n\n`, it exits the current <div> block.
+             Explicitly replacing all `\n` with their HTML equivalent to bypass this issue.
+             Also stripping the trailing `\n` first.
+             """
+             return t.strip("\n").replace("\n", "<br/>")
+
+         wrap = textwrap.fill(t, width=width, replace_whitespace=False)
+         out = highlight(wrap, DjangoLexer(), HtmlFormatter())
+         out = replace_linebreaks(out)
+         st.write(out, unsafe_allow_html=True)
+
+     def show_text(t, width=WIDTH, with_markdown=False):
+         wrap = [textwrap.fill(subt, width=width, replace_whitespace=False) for subt in t.split("\n")]
+         wrap = "\n".join(wrap)
+         if with_markdown:
+             st.write(wrap, unsafe_allow_html=True)
+         else:
+             st.text(wrap)
+
+     if mode == "Helicopter view":
+         st.title("High level metrics")
+         st.write("This will take a minute to collect.")
+         st.write(
+             "If you want to contribute, please refer to the instructions in "
+             + "[Contributing](https://github.com/bigscience-workshop/promptsource/blob/main/CONTRIBUTING.md)."
+         )
+
+         #
+         # Loads template data
+         #
+         try:
+             template_collection = TemplateCollection()
+         except FileNotFoundError:
+             st.error(
+                 "Unable to find the prompt folder!\n\n"
+                 "We expect the folder to be in the working directory. "
+                 "You might need to restart the app in the root directory of the repo."
+             )
+             st.stop()
+
+         #
+         # Global metrics
+         #
+         counts = template_collection.get_templates_count()
+         nb_prompted_datasets = len(counts)
+         st.write(f"## Number of *prompted datasets*: `{nb_prompted_datasets}`")
+         nb_prompts = sum(counts.values())
+         st.write(f"## Number of *prompts*: `{nb_prompts}`")
+
+         #
+         # Metrics per dataset/subset
+         #
+         # Download dataset infos (multiprocessing download)
+         manager = Manager()
+         all_infos = manager.dict()
+         all_datasets = list(set([t[0] for t in template_collection.keys]))
+
+         pool = Pool(processes=multiprocessing.cpu_count())
+         pool.map(functools.partial(get_infos, all_infos), all_datasets)
+         pool.close()
+         pool.join()
+
+         results = []
+         for (dataset_name, subset_name) in template_collection.keys:
+             # Collect split sizes (train, validation and test)
+             if dataset_name not in all_infos:
+                 infos = get_dataset_infos(dataset_name)
+                 all_infos[dataset_name] = infos
+             else:
+                 infos = all_infos[dataset_name]
+             if infos:
+                 if subset_name is None:
+                     subset_infos = infos[list(infos.keys())[0]]
+                 else:
+                     subset_infos = infos[subset_name]
+
+                 try:
+                     split_sizes = {k: v.num_examples for k, v in subset_infos.splits.items()}
+                 except Exception:
+                     # Fixing bug in some community datasets.
+                     # For simplicity, just filling `split_sizes` with nothing, so the displayed split sizes will be 0.
+                     split_sizes = {}
+             else:
+                 split_sizes = {}
+
+             # Collect template counts, original task counts and names
+             dataset_templates = template_collection.get_dataset(dataset_name, subset_name)
+             results.append(
+                 {
+                     "Dataset name": dataset_name,
+                     "Subset name": "∅" if subset_name is None else subset_name,
+                     "Train size": split_sizes["train"] if "train" in split_sizes else 0,
+                     "Validation size": split_sizes["validation"] if "validation" in split_sizes else 0,
+                     "Test size": split_sizes["test"] if "test" in split_sizes else 0,
+                     "Number of prompts": len(dataset_templates),
+                     "Number of original task prompts": sum(
+                         [bool(t.metadata.original_task) for t in dataset_templates.templates.values()]
+                     ),
+                     "Prompt names": [t.name for t in dataset_templates.templates.values()],
+                 }
+             )
+         results_df = pd.DataFrame(results)
+         results_df.sort_values(["Number of prompts"], inplace=True, ascending=False)
+         results_df.reset_index(drop=True, inplace=True)
+
+         nb_training_instances = results_df["Train size"].sum()
+         st.write(f"## Number of *training instances*: `{nb_training_instances}`")
+
+         plot_df = results_df[["Dataset name", "Subset name", "Train size", "Number of prompts"]].copy()
+         plot_df["Name"] = plot_df["Dataset name"] + " - " + plot_df["Subset name"]
+         plot_df.sort_values(["Train size"], inplace=True, ascending=False)
+         fig = px.bar(
+             plot_df,
+             x="Name",
+             y="Train size",
+             hover_data=["Dataset name", "Subset name", "Number of prompts"],
+             log_y=True,
+             title="Number of training instances per data(sub)set - y-axis is in logscale",
+         )
+         fig.update_xaxes(visible=False, showticklabels=False)
+         st.plotly_chart(fig, use_container_width=True)
+         st.write(
+             f"- Top 3 training subsets account for `{100 * plot_df[:3]['Train size'].sum() / nb_training_instances:.2f}%` of the training instances."
+         )
+         biggest_training_subset = plot_df.iloc[0]
+         st.write(
+             f"- Biggest training subset is *{biggest_training_subset['Name']}* with `{biggest_training_subset['Train size']}` instances"
+         )
+         smallest_training_subset = plot_df[plot_df["Train size"] > 0].iloc[-1]
+         st.write(
+             f"- Smallest training subset is *{smallest_training_subset['Name']}* with `{smallest_training_subset['Train size']}` instances"
+         )
+
+         st.markdown("***")
+         st.write("Details per dataset")
+         st.table(results_df)
+
+     else:
+         # Combining mode `Prompted dataset viewer` and `Sourcing` since the
+         # backbone of the interfaces is the same
+         assert mode in ["Prompted dataset viewer", "Sourcing"], ValueError(
+             f"`mode` ({mode}) should be in `[Helicopter view, Prompted dataset viewer, Sourcing]`"
+         )
+
+         #
+         # Loads dataset information
+         #
+
+         dataset_list = list_datasets()
+         ag_news_index = dataset_list.index("ag_news")
+
+         #
+         # Select a dataset - starts with ag_news
+         #
+         dataset_key = st.sidebar.selectbox(
+             "Dataset",
+             dataset_list,
+             key="dataset_select",
+             index=ag_news_index,
+             help="Select the dataset to work on.",
+         )
+
+         #
+         # If a particular dataset is selected, loads dataset and template information
+         #
+         if dataset_key is not None:
+
+             #
+             # Check for subconfigurations (i.e. subsets)
+             #
+             configs = get_dataset_confs(dataset_key)
+             conf_option = None
+             if len(configs) > 0:
+                 conf_option = st.sidebar.selectbox("Subset", configs, index=0, format_func=lambda a: a.name)
+
+             subset_name = str(conf_option.name) if conf_option else None
+             try:
+                 dataset = get_dataset(dataset_key, subset_name)
+             except OSError as e:
+                 st.error(
+                     f"Some datasets are not handled automatically by `datasets` and require users to download the "
+                     f"dataset manually. This applies to {dataset_key}{f'/{subset_name}' if subset_name is not None else ''}. "
+                     f"\n\nPlease download the raw dataset to `~/.cache/promptsource/{dataset_key}{f'/{subset_name}' if subset_name is not None else ''}`. "
+                     f"\n\nYou can choose another cache directory by overriding `PROMPTSOURCE_MANUAL_DATASET_DIR` environment "
+                     f"variable and downloading raw dataset to `$PROMPTSOURCE_MANUAL_DATASET_DIR/{dataset_key}{f'/{subset_name}' if subset_name is not None else ''}`"
+                     f"\n\nOriginal error:\n{str(e)}"
+                 )
+                 st.stop()
+
+             splits = list(dataset.keys())
+             index = 0
+             if "train" in splits:
+                 index = splits.index("train")
+             split = st.sidebar.selectbox("Split", splits, key="split_select", index=index)
+             dataset = dataset[split]
+             dataset = renameDatasetColumn(dataset)
+
+             #
+             # Loads template data
+             #
+             try:
+                 dataset_templates = DatasetTemplates(dataset_key, conf_option.name if conf_option else None)
+             except FileNotFoundError:
+                 st.error(
+                     "Unable to find the prompt folder!\n\n"
+                     "We expect the folder to be in the working directory. "
+                     "You might need to restart the app in the root directory of the repo."
+                 )
+                 st.stop()
+
+             template_list = dataset_templates.all_template_names
+             num_templates = len(template_list)
+             st.sidebar.write(
+                 "No of prompts created for "
+                 + f"`{dataset_key + (('/' + conf_option.name) if conf_option else '')}`"
+                 + f": **{str(num_templates)}**"
+             )
+
+             if mode == "Prompted dataset viewer":
+                 if num_templates > 0:
+                     template_name = st.sidebar.selectbox(
+                         "Prompt name",
+                         template_list,
+                         key="template_select",
+                         index=0,
+                         help="Select the prompt to visualize.",
+                     )
+
+                 step = 50
+                 example_index = st.sidebar.number_input(
+                     f"Select the example index (Size = {len(dataset)})",
+                     min_value=0,
+                     max_value=len(dataset) - step,
+                     value=0,
+                     step=step,
+                     key="example_index_number_input",
+                     help="Offset = 50.",
+                 )
+             else:  # mode = Sourcing
+                 st.sidebar.subheader("Select Example")
+                 example_index = st.sidebar.slider("Select the example index", 0, len(dataset) - 1)
+
+             example = dataset[example_index]
+             example = removeHyphen(example)
+
+             st.sidebar.write(example)
+
+             st.sidebar.subheader("Dataset Schema")
+             rendered_features = render_features(dataset.features)
+             st.sidebar.write(rendered_features)
+
+             #
+             # Display dataset information
+             #
+             st.header("Dataset: " + dataset_key + " " + (("/ " + conf_option.name) if conf_option else ""))
+
+             # If we have a custom dataset change the source link to the hub
+             split_dataset_key = dataset_key.split("/")
+             possible_user = split_dataset_key[0]
+             if len(split_dataset_key) > 1 and possible_user in INCLUDED_USERS:
+                 source_link = "https://huggingface.co/datasets/%s/blob/main/%s.py" % (
+                     dataset_key,
+                     split_dataset_key[-1],
+                 )
+             else:
+                 source_link = "https://github.com/huggingface/datasets/blob/master/datasets/%s/%s.py" % (
+                     dataset_key,
+                     dataset_key,
+                 )
+
+             st.markdown("*Homepage*: " + dataset.info.homepage + "\n\n*Dataset*: " + source_link)
+
+             md = """
+             %s
+             """ % (
+                 dataset.info.description.replace("\\", "") if dataset_key else ""
+             )
+             st.markdown(md)
+
+             #
+             # Body of the app: display prompted examples in mode `Prompted dataset viewer`
+             # or text boxes to create new prompts in mode `Sourcing`
+             #
+             if mode == "Prompted dataset viewer":
+                 #
+                 # Display template information
+                 #
+                 if num_templates > 0:
+                     template = dataset_templates[template_name]
+                     st.subheader("Prompt")
+                     st.markdown("##### Name")
+                     st.text(template.name)
+                     st.markdown("##### Reference")
+                     st.text(template.reference)
+                     st.markdown("##### Original Task? ")
+                     st.text(template.metadata.original_task)
+                     st.markdown("##### Choices in template? ")
+                     st.text(template.metadata.choices_in_prompt)
+                     st.markdown("##### Metrics")
+                     st.text(", ".join(template.metadata.metrics) if template.metadata.metrics else None)
+                     st.markdown("##### Prompt Languages")
+                     if template.metadata.languages:
+                         st.text(", ".join([format_language(tag) for tag in template.metadata.languages]))
+                     else:
+                         st.text(None)
+                     st.markdown("##### Answer Choices")
+                     if template.get_answer_choices_expr() is not None:
+                         show_jinja(template.get_answer_choices_expr())
+                     else:
+                         st.text(None)
+                     st.markdown("##### Jinja template")
+                     splitted_template = template.jinja.split("|||")
+                     st.markdown("###### Input template")
+                     show_jinja(splitted_template[0].strip())
+                     if len(splitted_template) > 1:
+                         st.markdown("###### Target template")
+                         show_jinja(splitted_template[1].strip())
+                     st.markdown("***")
+
+                 #
+                 # Display a couple (steps) examples
+                 #
+                 for ex_idx in range(example_index, example_index + step):
+                     if ex_idx >= len(dataset):
+                         continue
+                     example = dataset[ex_idx]
+                     example = removeHyphen(example)
+                     col1, _, col2 = st.beta_columns([12, 1, 12])
+                     with col1:
+                         st.write(example)
+                     if num_templates > 0:
+                         with col2:
+                             prompt = template.apply(example, highlight_variables=False)
+                             if prompt == [""]:
+                                 st.write("∅∅∅ *Blank result*")
+                             else:
+                                 st.write("Input")
+                                 show_text(prompt[0])
+                                 if len(prompt) > 1:
+                                     st.write("Target")
+                                     show_text(prompt[1])
+                     st.markdown("***")
+             else:  # mode = Sourcing
+                 st.markdown("## Prompt Creator")
+
+                 #
+                 # Create a new template or select an existing one
+                 #
+                 col1a, col1b, _, col2 = st.beta_columns([9, 9, 1, 6])
+
+                 # current_templates_key and state.templates_key are keys for the templates object
+                 current_templates_key = (dataset_key, conf_option.name if conf_option else None)
+
+                 # Resets state if there has been a change in templates_key
+                 if state.templates_key != current_templates_key:
+                     state.templates_key = current_templates_key
+                     reset_template_state()
+
+                 with col1a, st.form("new_template_form"):
+                     new_template_name = st.text_input(
+                         "Create a New Prompt",
+                         key="new_template",
+                         value="",
+                         help="Enter name and hit enter to create a new prompt.",
+                     )
+                     new_template_submitted = st.form_submit_button("Create")
+                     if new_template_submitted:
+                         if new_template_name in dataset_templates.all_template_names:
+                             st.error(
+                                 f"A prompt with the name {new_template_name} already exists "
+                                 f"for dataset {state.templates_key}."
+                             )
+                         elif new_template_name == "":
+                             st.error("Need to provide a prompt name.")
+                         else:
+                             template = Template(new_template_name, "", "")
+                             dataset_templates.add_template(template)
+                             reset_template_state()
+                             state.template_name = new_template_name
+                     else:
+                         state.new_template_name = None
+
+                 with col1b, st.beta_expander("or Select Prompt", expanded=True):
+                     template_list = dataset_templates.all_template_names
+                     if state.template_name:
+                         index = template_list.index(state.template_name)
+                     else:
+                         index = 0
+                     state.template_name = st.selectbox(
+                         "", template_list, key="template_select", index=index, help="Select the prompt to work on."
+                     )
+
+                     if st.button("Delete Prompt", key="delete_prompt"):
+                         dataset_templates.remove_template(state.template_name)
+                         reset_template_state()
+
+                 variety_guideline = """
+                 :heavy_exclamation_mark::question:Creating a diverse set of prompts whose differences go beyond surface wordings (i.e. marginally changing 2 or 3 words) is highly encouraged.
+                 Ultimately, the hope is that exposing the model to such a diversity will have a non-trivial impact on the model's robustness to the prompt formulation.
+                 \r**To get various prompts, you can try moving the cursor along these axes**:
+                 \n- **Interrogative vs affirmative form**: Ask a question about an attribute of the inputs or tell the model to decide something about the input.
+                 \n- **Task description localization**: where is the task description blended with the inputs? In the beginning, in the middle, at the end?
+                 \n- **Implicit situation or contextualization**: how explicit is the query? For instance, *Given this review, would you buy this product?* is an indirect way to ask whether the review is positive.
+                 """
+
+                 col1, _, _ = st.beta_columns([18, 1, 6])
+                 with col1:
+                     if state.template_name is not None:
+                         show_text(variety_guideline, with_markdown=True)
+
+                 #
+                 # Edit the created or selected template
+                 #
+                 col1, _, col2 = st.beta_columns([18, 1, 6])
+                 with col1:
+                     if state.template_name is not None:
+                         template = dataset_templates[state.template_name]
+                         #
+                         # If template is selected, displays template editor
+                         #
+                         with st.form("edit_template_form"):
+                             updated_template_name = st.text_input("Name", value=template.name)
+                             state.reference = st.text_input(
+                                 "Prompt Reference",
+                                 help="Short description of the prompt and/or paper reference for the prompt.",
+                                 value=template.reference,
+                             )
+
+                             # Metadata
+                             state.metadata = template.metadata
+                             state.metadata.original_task = st.checkbox(
+                                 "Original Task?",
+                                 value=template.metadata.original_task,
+                                 help="Prompt asks model to perform the original task designed for this dataset.",
+                             )
+                             state.metadata.choices_in_prompt = st.checkbox(
+                                 "Choices in Template?",
+                                 value=template.metadata.choices_in_prompt,
+                                 help="Prompt explicitly lists choices in the template for the output.",
+                             )
+
+                             state.metadata.metrics = st.multiselect(
+                                 "Metrics",
+                                 sorted(METRICS),
+                                 default=template.metadata.metrics,
+                                 help="Select all metrics that are commonly used (or should "
+                                 "be used if a new task) to evaluate this prompt.",
+                             )
+
+                             state.metadata.languages = st.multiselect(
+                                 "Prompt Languages",
+                                 sorted(LANGUAGES.keys()),
+                                 default=template.metadata.languages,
+                                 format_func=format_language,
+                                 help="Select all languages used in this prompt. "
+                                 "This annotation is independent from the language(s) "
+                                 "of the dataset.",
+                             )
+
+                             # Answer choices
+                             if template.get_answer_choices_expr() is not None:
+                                 answer_choices = template.get_answer_choices_expr()
+                             else:
+                                 answer_choices = ""
+                             state.answer_choices = st.text_input(
+                                 "Answer Choices",
+                                 value=answer_choices,
+                                 help="A Jinja expression for computing answer choices. "
+                                 "Separate choices with a triple bar (|||).",
+                             )
+
+                             # Jinja
+                             state.jinja = st.text_area("Template", height=40, value=template.jinja)
+
+                             # Submit form
+                             if st.form_submit_button("Save"):
+                                 if (
+                                     updated_template_name in dataset_templates.all_template_names
+                                     and updated_template_name != state.template_name
+                                 ):
+                                     st.error(
+                                         f"A prompt with the name {updated_template_name} already exists "
+                                         f"for dataset {state.templates_key}."
+                                     )
+                                 elif updated_template_name == "":
+                                     st.error("Need to provide a prompt name.")
+                                 else:
+                                     # Parses state.answer_choices
+                                     if state.answer_choices == "":
+                                         updated_answer_choices = None
+                                     else:
+                                         updated_answer_choices = state.answer_choices
+
+                                     dataset_templates.update_template(
+                                         state.template_name,
+                                         updated_template_name,
+                                         state.jinja,
+                                         state.reference,
+                                         state.metadata,
+                                         updated_answer_choices,
+                                     )
+                                     # Update the state as well
+                                     state.template_name = updated_template_name
+                 #
+                 # Displays template output on current example if a template is selected
+                 # (in second column)
+                 #
+                 with col2:
+                     if state.template_name is not None:
+                         st.empty()
+                         template = dataset_templates[state.template_name]
+                         prompt = template.apply(example)
+                         if prompt == [""]:
+                             st.write("∅∅∅ *Blank result*")
+                         else:
+                             st.write("Input")
+                             show_text(prompt[0], width=40)
+                             if len(prompt) > 1:
+                                 st.write("Target")
+                                 show_text(prompt[1], width=40)
+
+     #
+     # Must sync state at end
+     #
+     state.sync()
+
+
+ if __name__ == "__main__":
+     run_app()
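
A hedged sketch of launching the app, based on the comments in app.py itself: Streamlit only forwards script arguments placed after `--`, and the app must be started from the repo root so the prompt templates folder is found. Shown through `subprocess` purely for illustration; any shell works just as well.

    # Assumes the `streamlit` CLI is installed and on PATH.
    import subprocess

    # Editable app: subprocess.run(["streamlit", "run", "promptsource/app.py"])
    # Read-only browsing mode (script args go after `--`):
    subprocess.run(["streamlit", "run", "promptsource/app.py", "--", "--read-only"])
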
promptsource/session.py ADDED
@@ -0,0 +1,89 @@
+ #
+ # Code for managing session state, which is needed for multi-input forms
+ # See https://github.com/streamlit/streamlit/issues/1557
+ #
+ # This code is taken from
+ # https://gist.github.com/okld/0aba4869ba6fdc8d49132e6974e2e662
+ #
+ from streamlit.hashing import _CodeHasher
+ from streamlit.report_thread import get_report_ctx
+ from streamlit.server.server import Server
+
+
+ class _SessionState:
+     def __init__(self, session, hash_funcs):
+         """Initialize SessionState instance."""
+         self.__dict__["_state"] = {
+             "data": {},
+             "hash": None,
+             "hasher": _CodeHasher(hash_funcs),
+             "is_rerun": False,
+             "session": session,
+         }
+
+     def __call__(self, **kwargs):
+         """Initialize state data once."""
+         for item, value in kwargs.items():
+             if item not in self._state["data"]:
+                 self._state["data"][item] = value
+
+     def __getitem__(self, item):
+         """Return a saved state value, None if item is undefined."""
+         return self._state["data"].get(item, None)
+
+     def __getattr__(self, item):
+         """Return a saved state value, None if item is undefined."""
+         return self._state["data"].get(item, None)
+
+     def __setitem__(self, item, value):
+         """Set state value."""
+         self._state["data"][item] = value
+
+     def __setattr__(self, item, value):
+         """Set state value."""
+         self._state["data"][item] = value
+
+     def clear(self):
+         """Clear session state and request a rerun."""
+         self._state["data"].clear()
+         self._state["session"].request_rerun(None)
+
+     def sync(self):
+         """
+         Rerun the app with all state values up to date from the beginning to
+         fix rollbacks.
+         """
+         data_to_bytes = self._state["hasher"].to_bytes(self._state["data"], None)
+
+         # Ensure to rerun only once to avoid infinite loops
+         # caused by a constantly changing state value at each run.
+         #
+         # Example: state.value += 1
+         if self._state["is_rerun"]:
+             self._state["is_rerun"] = False
+
+         elif self._state["hash"] is not None:
+             if self._state["hash"] != data_to_bytes:
+                 self._state["is_rerun"] = True
+                 self._state["session"].request_rerun(None)
+
+         self._state["hash"] = data_to_bytes
+
+
+ def _get_session():
+     session_id = get_report_ctx().session_id
+     session_info = Server.get_current()._get_session_info(session_id)
+
+     if session_info is None:
+         raise RuntimeError("Couldn't get your Streamlit Session object.")
+
+     return session_info.session
+
+
+ def _get_state(hash_funcs=None):
+     session = _get_session()
+
+     if not hasattr(session, "_custom_session_state"):
+         session._custom_session_state = _SessionState(session, hash_funcs)
+
+     return session._custom_session_state
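
A minimal usage sketch of this helper, mirroring how app.py consumes it. The `counter` attribute is hypothetical, and the code only works inside a script executed by `streamlit run`, where a report context exists:

    from promptsource.session import _get_state

    state = _get_state()
    if state.counter is None:  # undefined attributes read back as None
        state.counter = 0
    state.counter += 1         # value survives widget-triggered reruns
    state.sync()               # reruns at most once if the state changed
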
promptsource/templates.py ADDED
@@ -0,0 +1,731 @@
+ import logging
+ import os
+ import random
+ import uuid
+ from collections import Counter, defaultdict
+ from shutil import rmtree
+ from typing import Dict, List, Optional, Tuple
+
+ import pandas as pd
+ import pkg_resources
+ import yaml
+ from jinja2 import BaseLoader, Environment, meta
+
+
+ # Truncation of jinja template variables
+ # 1710 = 300 words x 4.7 avg characters per word + 300 spaces; 2048 leaves headroom
+ TEXT_VAR_LENGTH = 2048
+
+ # Local path to the folder containing the templates
+ TEMPLATES_FOLDER_PATH = pkg_resources.resource_filename(__name__, "templates")
+
+ env = Environment(loader=BaseLoader)
+
+ # Allow the python function zip()
+ env.globals.update(zip=zip)
+
+ # These are users whose datasets should be included in the results returned by
+ # filter_english_datasets (regardless of their metadata)
+ INCLUDED_USERS = {"Zaid", "craffel"}
+
+ # These are the metrics with which templates can be tagged
+ METRICS = {
+     "BLEU",
+     "ROUGE",
+     "Squad",
+     "Trivia QA",
+     "Accuracy",
+     "Pearson Correlation",
+     "Spearman Correlation",
+     "MultiRC",
+     "AUC",
+     "COQA F1",
+     "Edit Distance",
+     "Mean Reciprocal Rank",
+     "Other",
+ }
+
+ # These are the languages with which templates can be tagged. Keys are ISO 639-1
+ # tags, which are the actual tags we use. Values are English names shown in the
+ # UI for convenience.
+ LANGUAGES = {
+     "ab": "Abkhazian",
+     "aa": "Afar",
+     "af": "Afrikaans",
+     "ak": "Akan",
+     "sq": "Albanian",
+     "am": "Amharic",
+     "ar": "Arabic",
+     "an": "Aragonese",
+     "hy": "Armenian",
+     "as": "Assamese",
+     "av": "Avaric",
+     "ae": "Avestan",
+     "ay": "Aymara",
+     "az": "Azerbaijani",
+     "bm": "Bambara",
+     "ba": "Bashkir",
+     "eu": "Basque",
+     "be": "Belarusian",
+     "bn": "Bengali",
+     "bi": "Bislama",
+     "bs": "Bosnian",
+     "br": "Breton",
+     "bg": "Bulgarian",
+     "my": "Burmese",
+     "ca": "Catalan, Valencian",
+     "ch": "Chamorro",
+     "ce": "Chechen",
+     "ny": "Chichewa, Chewa, Nyanja",
+     "zh": "Chinese",
+     "cu": "Church Slavic, Old Slavonic, Church Slavonic, Old Bulgarian, Old Church Slavonic",
+     "cv": "Chuvash",
+     "kw": "Cornish",
+     "co": "Corsican",
+     "cr": "Cree",
+     "hr": "Croatian",
+     "cs": "Czech",
+     "da": "Danish",
+     "dv": "Divehi, Dhivehi, Maldivian",
+     "nl": "Dutch, Flemish",
+     "dz": "Dzongkha",
+     "en": "English",
+     "eo": "Esperanto",
+     "et": "Estonian",
+     "ee": "Ewe",
+     "fo": "Faroese",
+     "fj": "Fijian",
+     "fi": "Finnish",
+     "fr": "French",
+     "fy": "Western Frisian",
+     "ff": "Fulah",
+     "gd": "Gaelic, Scottish Gaelic",
+     "gl": "Galician",
+     "lg": "Ganda",
+     "ka": "Georgian",
+     "de": "German",
+     "el": "Greek, Modern (1453–)",
+     "kl": "Kalaallisut, Greenlandic",
+     "gn": "Guarani",
+     "gu": "Gujarati",
+     "ht": "Haitian, Haitian Creole",
+     "ha": "Hausa",
+     "he": "Hebrew",
+     "hz": "Herero",
+     "hi": "Hindi",
+     "ho": "Hiri Motu",
+     "hu": "Hungarian",
+     "is": "Icelandic",
+     "io": "Ido",
+     "ig": "Igbo",
+     "id": "Indonesian",
+     "ia": "Interlingua (International Auxiliary Language Association)",
+     "ie": "Interlingue, Occidental",
+     "iu": "Inuktitut",
+     "ik": "Inupiaq",
+     "ga": "Irish",
+     "it": "Italian",
+     "ja": "Japanese",
+     "jv": "Javanese",
+     "kn": "Kannada",
+     "kr": "Kanuri",
+     "ks": "Kashmiri",
+     "kk": "Kazakh",
+     "km": "Central Khmer",
+     "ki": "Kikuyu, Gikuyu",
+     "rw": "Kinyarwanda",
+     "ky": "Kirghiz, Kyrgyz",
+     "kv": "Komi",
+     "kg": "Kongo",
+     "ko": "Korean",
+     "kj": "Kuanyama, Kwanyama",
+     "ku": "Kurdish",
+     "lo": "Lao",
+     "la": "Latin",
+     "lv": "Latvian",
+     "li": "Limburgan, Limburger, Limburgish",
+     "ln": "Lingala",
+     "lt": "Lithuanian",
+     "lu": "Luba-Katanga",
+     "lb": "Luxembourgish, Letzeburgesch",
+     "mk": "Macedonian",
+     "mg": "Malagasy",
+     "ms": "Malay",
+     "ml": "Malayalam",
+     "mt": "Maltese",
+     "gv": "Manx",
+     "mi": "Maori",
+     "mr": "Marathi",
+     "mh": "Marshallese",
+     "mn": "Mongolian",
+     "na": "Nauru",
+     "nv": "Navajo, Navaho",
+     "nd": "North Ndebele",
+     "nr": "South Ndebele",
+     "ng": "Ndonga",
+     "ne": "Nepali",
+     "no": "Norwegian",
+     "nb": "Norwegian Bokmål",
+     "nn": "Norwegian Nynorsk",
+     "ii": "Sichuan Yi, Nuosu",
+     "oc": "Occitan",
+     "oj": "Ojibwa",
+     "or": "Oriya",
+     "om": "Oromo",
+     "os": "Ossetian, Ossetic",
+     "pi": "Pali",
+     "ps": "Pashto, Pushto",
+     "fa": "Persian",
+     "pl": "Polish",
+     "pt": "Portuguese",
+     "pa": "Punjabi, Panjabi",
+     "qu": "Quechua",
+     "ro": "Romanian, Moldavian, Moldovan",
+     "rm": "Romansh",
+     "rn": "Rundi",
+     "ru": "Russian",
+     "se": "Northern Sami",
+     "sm": "Samoan",
+     "sg": "Sango",
+     "sa": "Sanskrit",
+     "sc": "Sardinian",
+     "sr": "Serbian",
+     "sn": "Shona",
+     "sd": "Sindhi",
+     "si": "Sinhala, Sinhalese",
+     "sk": "Slovak",
+     "sl": "Slovenian",
+     "so": "Somali",
+     "st": "Southern Sotho",
+     "es": "Spanish, Castilian",
+     "su": "Sundanese",
+     "sw": "Swahili",
+     "ss": "Swati",
+     "sv": "Swedish",
+     "tl": "Tagalog",
+     "ty": "Tahitian",
+     "tg": "Tajik",
+     "ta": "Tamil",
+     "tt": "Tatar",
+     "te": "Telugu",
+     "th": "Thai",
+     "bo": "Tibetan",
+     "ti": "Tigrinya",
+     "to": "Tonga (Tonga Islands)",
+     "ts": "Tsonga",
+     "tn": "Tswana",
+     "tr": "Turkish",
+     "tk": "Turkmen",
+     "tw": "Twi",
+     "ug": "Uighur, Uyghur",
+     "uk": "Ukrainian",
+     "ur": "Urdu",
+     "uz": "Uzbek",
+     "ve": "Venda",
+     "vi": "Vietnamese",
+     "vo": "Volapük",
+     "wa": "Walloon",
+     "cy": "Welsh",
+     "wo": "Wolof",
+     "xh": "Xhosa",
+     "yi": "Yiddish",
+     "yo": "Yoruba",
+     "za": "Zhuang, Chuang",
+     "zu": "Zulu",
+ }
+
+
+ def highlight(input):
+     return "<span style='color: #F08080'>" + input + "</span>"
+
+
+ def choice(choices):
+     return random.choice(choices)
+
+
+ def most_frequent(items):
+     """Returns the set of items which appear most frequently in the input"""
+     if not items:
+         return
+     item_counts = Counter(items).most_common()
+     max_freq = item_counts[0][1]
+     most_frequent_items = [c[0] for c in item_counts if c[1] == max_freq]
+     return most_frequent_items
+
+
+ env.filters["highlight"] = highlight
+ env.filters["choice"] = choice
+ env.filters["most_frequent"] = most_frequent
+
+
+ class Template(yaml.YAMLObject):
+     """
+     A prompt template.
+     """
+
+     yaml_tag = "!Template"
+
+     def __init__(self, name, jinja, reference, metadata=None, answer_choices=None):
+         """
+         Creates a prompt template.
+
+         A prompt template is expressed in Jinja. It is rendered using an example
+         from the corresponding Hugging Face datasets library (a dictionary). The
+         separator ||| should appear once to divide the template into prompt and
+         output. Generally, the prompt should provide information on the desired
+         behavior, e.g., text passage and instructions, and the output should be
+         a desired response.
+
+         :param name: unique name (per dataset) for template
+         :param jinja: template expressed in Jinja
+         :param reference: string describing author or paper reference for template
+         :param metadata: a Metadata object with template annotations
+         :param answer_choices: Jinja expression for answer choices. Should produce
+                                a ||| delimited string of choices that enumerates
+                                the possible completions for templates that should
+                                be evaluated as ranked completions. If None, then
+                                the template is open-ended. This list is accessible
+                                from within Jinja as the variable `answer_choices`.
+         """
+         self.id = str(uuid.uuid4())
+         self.name = name
+         self.jinja = jinja
+         self.reference = reference
+         self.metadata = metadata if metadata is not None else Template.Metadata()
+         self.answer_choices = answer_choices
+
+     def get_id(self):
+         """
+         Returns the id of the template
+
+         :return: unique id for template
+         """
+         return self.id
+
+     def get_name(self):
+         """
+         Returns the name of the template
+
+         :return: unique (per dataset) name for template
+         """
+         return self.name
+
+     def get_reference(self):
+         """
+         Returns the bibliographic reference (or author) for the template
+
+         :return: reference as a string
+         """
+         return self.reference
+
+     def get_answer_choices_expr(self):
+         """
+         Returns a Jinja expression for computing the answer choices from an example.
+
+         :return: String, or None if no answer choices
+         """
+         return self.answer_choices
+
+     def get_answer_choices_list(self, example):
+         """
+         Returns a list of answer choices for a given example
+
+         :return: list of strings, or None if get_answer_choices_expr is None
+         """
+         jinja = self.get_answer_choices_expr()
+         if jinja is None:
+             return None
+
+         rtemplate = env.from_string(jinja)
+         protected_example = self._escape_pipe(example)
+         rendered_choices = rtemplate.render(**protected_example)
+         return [self._unescape_pipe(answer_choice.strip()) for answer_choice in rendered_choices.split("|||")]
+
+     def get_fixed_answer_choices_list(self):
+         """
+         Returns a list of answer choices that is static across examples, if possible
+         :return: list of strings, or None if no static list exists
+         """
+         jinja = self.get_answer_choices_expr()
+         if jinja is None:
+             return None
+
+         parse = env.parse(jinja)
+         variables = meta.find_undeclared_variables(parse)
+         if len(variables) == 0:
+             rtemplate = env.from_string(jinja)
+             rendered_choices = rtemplate.render()
+             return [answer_choice.strip() for answer_choice in rendered_choices.split("|||")]
+         else:
+             return None
+
+     def apply(self, example, truncate=True, highlight_variables=False):
+         """
+         Creates a prompt by applying this template to an example
+
+         :param example: the dataset example to create a prompt for
+         :param truncate: if True, example fields will be truncated to TEXT_VAR_LENGTH chars
+         :param highlight_variables: highlight the added variables
+         :return: list of 2 strings, for prompt and output
+         """
+         jinja = self.jinja
+
+         # Truncates the prompt if needed
+         if truncate:
+             trunc_command = (
+                 f" | string | truncate({TEXT_VAR_LENGTH}) }}}}"  # Escaping curly braces requires doubling them
+             )
+             jinja = jinja.replace("}}", trunc_command)
+
+         # Highlights text that was substituted for variables, if requested
+         if highlight_variables:
+             jinja = jinja.replace("}}", " | highlight }}")
+         rtemplate = env.from_string(jinja)
+
+         protected_example = self._escape_pipe(example)
+
+         # Adds in answer_choices variable
+         if "answer_choices" in protected_example:
+             raise ValueError("Example contains the restricted key 'answer_choices'.")
+
+         protected_example["answer_choices"] = self.get_answer_choices_list(example)
+
+         # Renders the Jinja template
+         rendered_example = rtemplate.render(**protected_example)
+
+         # Splits on the separator, and then replaces back any occurrences of the
+         # separator in the original example
+         return [self._unescape_pipe(part).strip() for part in rendered_example.split("|||")]
+
+     pipe_protector = "3ed2dface8203c4c9dfb1a5dc58e41e0"
+
+     @classmethod
+     def _escape_pipe(cls, example):
+         # Replaces any occurrences of the "|||" separator in the example,
+         # which will be replaced back after splitting
+         protected_example = {
+             key: value.replace("|||", cls.pipe_protector) if isinstance(value, str) else value
+             for key, value in example.items()
+         }
+         return protected_example
+
+     @classmethod
+     def _unescape_pipe(cls, string):
+         # replaces back any occurrences of the separator in a string
+         return string.replace(cls.pipe_protector, "|||")
+
+     class Metadata(yaml.YAMLObject):
+         """
+         Metadata for a prompt template.
+         """
+
+         yaml_tag = "!TemplateMetadata"
+
+         def __init__(
+             self,
+             original_task: Optional[bool] = None,
+             choices_in_prompt: Optional[bool] = None,
+             metrics: Optional[List[str]] = None,
+             languages: Optional[List[str]] = None,
+         ):
+             """
+             Initializes template metadata.
+
+             In the following, trivial choices are defined as Yes/No, True/False,
+             etc. and nontrivial choices are other types of choices denoted in
+             the answer_choices field.
+
+             :param original_task: If True, this prompt asks a model to perform the original task designed for
+                 this dataset.
+             :param choices_in_prompt: If True, the answer choices are included in the templates such that models
+                 see those choices in the input. Only applicable to classification tasks.
+             :param metrics: List of strings denoting metrics to use for evaluation
+             :param languages: List of strings denoting languages used in the prompt (not the associated dataset!)
+             """
+             self.original_task = original_task
+             self.choices_in_prompt = choices_in_prompt
+             self.metrics = metrics
+             self.languages = languages
+
+
+ class TemplateCollection:
+     """
+     This helper class wraps the DatasetTemplates class
+     - Initializes a DatasetTemplates for each existing template folder
+     - Gives access to each DatasetTemplates
+     - Provides aggregated counts over all DatasetTemplates
+     """
+
+     def __init__(self):
+
+         # Dict of all the DatasetTemplates, key is the tuple (dataset_name, subset_name)
+         self.datasets_templates: Dict[(str, Optional[str]), DatasetTemplates] = self._collect_datasets()
+
+     @property
+     def keys(self):
+         return list(self.datasets_templates.keys())
+
+     def __len__(self) -> int:
+         return len(self.datasets_templates)
+
+     def remove(self, dataset_name: str, subset_name: Optional[str] = None) -> None:
+         del self.datasets_templates[dataset_name, subset_name]
+
+     def _collect_datasets(self) -> Dict[Tuple[str, str], "DatasetTemplates"]:
+         """
+         Initialize a DatasetTemplates object for each templates.yaml detected in the templates folder
+
+         Returns: a dict with key=(dataset_name, subset_name)
+         """
+         dataset_folders = os.listdir(TEMPLATES_FOLDER_PATH)
+         dataset_folders = [folder for folder in dataset_folders if not folder.startswith(".")]
+
+         output = {}  # format is {(dataset_name, subset_name): DatasetsTemplates}
+         for dataset in dataset_folders:
+             if dataset in INCLUDED_USERS:
+                 for filename in os.listdir(os.path.join(TEMPLATES_FOLDER_PATH, dataset)):
+                     output = {**output, **self._collect_dataset(dataset + "/" + filename)}
+             else:
+                 output = {**output, **self._collect_dataset(dataset)}
+         return output
+
+     def _collect_dataset(self, dataset):
+         output = {}  # format is {(dataset_name, subset_name): DatasetsTemplates}
+         for filename in os.listdir(os.path.join(TEMPLATES_FOLDER_PATH, dataset)):
+             if filename.endswith(".yaml"):
+                 # If there is no sub-folder, there is no subset for this dataset
+                 output[(dataset, None)] = DatasetTemplates(dataset)
+             else:
+                 # This is a subfolder, and its name corresponds to the subset name
+                 output[(dataset, filename)] = DatasetTemplates(dataset_name=dataset, subset_name=filename)
+         return output
+
+     def get_dataset(self, dataset_name: str, subset_name: Optional[str] = None) -> "DatasetTemplates":
+         """
+         Return the DatasetTemplates object corresponding to the dataset name
+
+         :param dataset_name: name of the dataset to get
+         :param subset_name: name of the subset
+         """
+         # if the dataset/subset pair does not exist, we add it
+         if (dataset_name, subset_name) not in self.keys:
+             self.datasets_templates[(dataset_name, subset_name)] = DatasetTemplates(dataset_name, subset_name)
+
+         return self.datasets_templates[(dataset_name, subset_name)]
+
+     def get_templates_count(self) -> Dict:
+         """
+         Return the overall template count for each dataset
+
+         NB: we don't break datasets down into subsets for the count, i.e., subset counts
+         are included in the dataset count
+         """
+
+         count_dict = defaultdict(int)
+         for k, v in self.datasets_templates.items():
+             # Subsets count towards dataset count
+             count_dict[k[0]] += len(v)
+         # converting to regular dict
+         return dict(count_dict)
+
+
+ class DatasetTemplates:
+     """
+     Class that wraps all templates for a specific dataset/subset and implements all the helper
+     functions necessary to read/write to the yaml file
+     """
+
+     TEMPLATES_KEY = "templates"
+     DATASET_KEY = "dataset"
+     SUBSET_KEY = "subset"
+     TEMPLATE_FILENAME = "templates.yaml"
+
+     def __init__(self, dataset_name: str, subset_name: str = None):
+         self.dataset_name: str = dataset_name
+         self.subset_name: str = subset_name
+         # dictionary is keyed by template id.
+         self.templates: Dict = self.read_from_file()
+
+         # Mapping from template name to template id
+         self.name_to_id_mapping = {}
+         self.sync_mapping()
+
+     def sync_mapping(self) -> None:
+         """
+         Re-compute the name_to_id_mapping to ensure it is in sync with self.templates
+         """
+         self.name_to_id_mapping = {template.name: template.id for template in self.templates.values()}
+
+     @property
+     def all_template_names(self) -> List[str]:
+         """
+         Sorted list of all templates names for this dataset
+         """
+         return sorted([template.name for template in self.templates.values()])
+
+     @property
+     def folder_path(self) -> str:
+         if self.subset_name:
+             return os.path.join(TEMPLATES_FOLDER_PATH, self.dataset_name, self.subset_name)
+         else:
+             return os.path.join(TEMPLATES_FOLDER_PATH, self.dataset_name)
+
+     @property
+     def yaml_path(self) -> str:
+         return os.path.join(self.folder_path, self.TEMPLATE_FILENAME)
+
+     def format_for_dump(self) -> Dict:
+         """
+         Create a formatted dictionary for the class attributes
+         """
+         formatted_dict = {self.DATASET_KEY: self.dataset_name, self.TEMPLATES_KEY: self.templates}
+         if self.subset_name:
+             formatted_dict[self.SUBSET_KEY] = self.subset_name
+         return formatted_dict
+
+     def read_from_file(self) -> Dict:
+         """
+         Reads a file containing a prompt collection.
+         """
+
+         if not os.path.exists(self.yaml_path):
+             dataset_name = f"{self.dataset_name} {self.subset_name}" if self.subset_name else self.dataset_name
+             logging.warning(
+                 f"Tried instantiating `DatasetTemplates` for {dataset_name}, but no prompts found. "
+                 "Please ignore this warning if you are creating new prompts for this dataset."
+             )
+             return {}
+         yaml_dict = yaml.load(open(self.yaml_path, "r"), Loader=yaml.FullLoader)
+         return yaml_dict[self.TEMPLATES_KEY]
+
+     def write_to_file(self) -> None:
+         """
+         Writes to a file with the current prompt collection.
+         """
+         # Sync the mapping
+         self.sync_mapping()
+
+         # We only create the folder if a template is written
+         if not os.path.exists(self.folder_path):
+             os.makedirs(self.folder_path)
+         yaml.dump(self.format_for_dump(), open(self.yaml_path, "w"))
+
+     def add_template(self, template: "Template") -> None:
+         """
+         Adds a new template for the dataset
+
+         :param template: template
+         """
+         self.templates[template.get_id()] = template
+
+         self.write_to_file()
+
+     def remove_template(self, template_name: str) -> None:
+         """
+         Deletes a template
+
+         :param template_name: name of template to remove
+         """
+
+         # Even if we have an ID, we want to check for duplicate names
+         if template_name not in self.all_template_names:
+             raise ValueError(f"No template with name {template_name} for dataset {self.dataset_name} exists.")
+
+         del self.templates[self.name_to_id_mapping[template_name]]
+
+         if len(self.templates) == 0:
+             # There is no remaining template, we can remove the entire folder
+             self.delete_folder()
+         else:
+             # We just update the file
+             self.write_to_file()
+
+     def update_template(
+         self,
+         current_template_name: str,
+         new_template_name: str,
+         jinja: str,
+         reference: str,
+         metadata: Template.Metadata,
+         answer_choices: str,
+     ) -> None:
+         """
+         Updates a pre-existing template and writes changes
+
+         :param current_template_name: current name of the template stored in self.templates
+         :param new_template_name: new name for the template
+         :param jinja: new jinja entry
+         :param reference: new reference entry
+         :param metadata: a Metadata object with template annotations
+         :param answer_choices: new answer_choices string
+         """
+         template_id = self.name_to_id_mapping[current_template_name]
+         self.templates[template_id].name = new_template_name
+         self.templates[template_id].jinja = jinja
+         self.templates[template_id].reference = reference
+         self.templates[template_id].metadata = metadata
+         self.templates[template_id].answer_choices = answer_choices
+
+         self.write_to_file()
+
+     def delete_folder(self) -> None:
+         """
+         Delete the folder corresponding to self.folder_path
+         """
+         self.sync_mapping()
+
+         rmtree(self.folder_path)
+
+         # If it is a subset, we have to check whether to remove the dataset folder
+         if self.subset_name:
+             # have to check for other folders
+             base_dataset_folder = os.path.join(TEMPLATES_FOLDER_PATH, self.dataset_name)
+             if len(os.listdir(base_dataset_folder)) == 0:
+                 rmtree(base_dataset_folder)
+
+     def __getitem__(self, template_key: str) -> "Template":
+         return self.templates[self.name_to_id_mapping[template_key]]
+
+     def __len__(self) -> int:
+         return len(self.templates)
+
+
+ def get_templates_data_frame():
+     """
+     Gathers all template information into a Pandas DataFrame.
+
+     :return: Pandas DataFrame
+     """
+     data = {
+         "id": [],
+         "dataset": [],
+         "subset": [],
+         "name": [],
+         "reference": [],
+         "original_task": [],
+         "choices_in_prompt": [],
+         "metrics": [],
+         "languages": [],
+         "answer_choices": [],
+         "jinja": [],
+     }
+
+     template_collection = TemplateCollection()
+
+     for key in template_collection.keys:
+         templates = template_collection.get_dataset(key[0], key[1])
+         for template_name in templates.all_template_names:
+             template = templates[template_name]
+             data["id"].append(template.get_id())
+             data["dataset"].append(key[0])
+             data["subset"].append(key[1])
+             data["name"].append(template.get_name())
+             data["reference"].append(template.get_reference())
+             data["original_task"].append(template.metadata.original_task)
+             data["choices_in_prompt"].append(template.metadata.choices_in_prompt)
+             data["metrics"].append(template.metadata.metrics)
+             data["languages"].append(template.metadata.languages)
+             data["answer_choices"].append(template.get_answer_choices_expr())
+             data["jinja"].append(template.jinja)
+
+     return pd.DataFrame(data)
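
A minimal sketch of the `Template` API above: build a template in memory and render it on an example dictionary. The `review`/`label` fields are illustrative and not tied to any real dataset; `apply` renders the Jinja, then splits on the `|||` separator into input and target.

    from promptsource.templates import Template

    t = Template(
        name="sentiment-demo",
        jinja="Review: {{review}}\nPositive or negative? ||| {{ answer_choices[label] }}",
        reference="illustrative example",
        answer_choices="negative ||| positive",
    )
    example = {"review": "Great soundtrack, weak plot.", "label": 1}
    input_text, target = t.apply(example)  # -> rendered prompt, "positive"
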
promptsource/templates/Zaid/coqa_expanded/templates.yaml ADDED
@@ -0,0 +1,130 @@
+ dataset: Zaid/coqa_expanded
+ templates:
+   12ad4331-d063-4b56-b0f6-76f59c690717: !Template
+     answer_choices: null
+     id: 12ad4331-d063-4b56-b0f6-76f59c690717
+     jinja: "Below is a passage, followed by a series of questions and answers about\
+       \ the passage. Answer the last question based on the information contained in\
+       \ the passage. If there is no answer in the passage, say \"unknown\".\n\nPassage:\
+       \ {{story}}\n\nQ: {{question}} \nA: ||| {% if answer[\"answer_start\"] != -1\
+       \ %}\n{{answer[\"input_text\"]}}\n{% else %}\nunknown\n{% endif %}"
+     metadata: !TemplateMetadata
+       choices_in_prompt: false
+       languages:
+       - en
+       metrics:
+       - Other
+       original_task: true
+     name: Verbose instructions
+     reference: 'Metric: variant of SQuAD (Section 6.1 of the paper)'
+   2f9fb20d-f4c9-4371-9cd4-db47607cb7a3: !Template
+     answer_choices: null
+     id: 2f9fb20d-f4c9-4371-9cd4-db47607cb7a3
+     jinja: "What is the answer to the last question in the dialogue below? If there\
+       \ is no answer in the passage, say \"unknown\".\n\nPassage: {{story}}\n\nQ:\
+       \ {{question}} \nA: ||| {% if answer[\"answer_start\"] != -1 %}\n{{answer[\"\
+       input_text\"]}}\n{% else %}\nunknown\n{% endif %}"
+     metadata: !TemplateMetadata
+       choices_in_prompt: false
+       languages:
+       - en
+       metrics:
+       - Other
+       original_task: true
+     name: What is the answer
+     reference: 'Metric: variant of SQuAD (Section 6.1 of the paper)'
+   9aff8967-d41c-4d79-8ef4-fc3650773735: !Template
+     answer_choices: null
+     id: 9aff8967-d41c-4d79-8ef4-fc3650773735
+     jinja: "Complete the dialogue based on the information contained in the passage.\
+       \ If there is no answer in the passage, say \"unknown\".\n\nPassage: {{story}}\n\
+       \nQ: {{question}} \nA: ||| {% if answer[\"answer_start\"] != -1 %}\n{{answer[\"\
+       input_text\"]}}\n{% else %}\nunknown\n{% endif %}"
+     metadata: !TemplateMetadata
+       choices_in_prompt: false
+       languages:
+       - en
+       metrics:
+       - Other
+       original_task: true
+     name: Complete the dialogue
+     reference: 'Metric: variant of SQuAD (Section 6.1 of the paper)'
+   9bc32f2e-eee6-4006-bce3-74a79403d33e: !Template
+     answer_choices: null
+     id: 9bc32f2e-eee6-4006-bce3-74a79403d33e
+     jinja: "Answer the last question based on the information contained in the passage.\
+       \ If there is no answer in the passage, say \"unknown\".\n\nPassage: {{story}}\n\
+       \nQ: {{question}} \nA: ||| {% if answer[\"answer_start\"] != -1 %}\n{{answer[\"\
+       input_text\"]}}\n{% else %}\nunknown\n{% endif %}"
+     metadata: !TemplateMetadata
+       choices_in_prompt: false
+       languages:
+       - en
+       metrics:
+       - Other
+       original_task: true
+     name: Answer the last question
+     reference: 'Metric: variant of SQuAD (Section 6.1 of the paper)'
+   bacb6534-e607-4afc-a412-ccfcd9fe38e2: !Template
+     answer_choices: null
+     id: bacb6534-e607-4afc-a412-ccfcd9fe38e2
+     jinja: 'In the passage below, extract the part which answers the last question.
+       If there is no answer in the passage, say "unknown".
+
+
+       Passage: {{story}}
+
+
+       Q: {{question}}
+
+       A: |||
+
+       {% if answer["answer_start"] != -1 %}
+
+       {{story[answer["answer_start"] : answer["answer_end"] ]}}
+
+       {% else %}
+
+       unknown
+
+       {% endif %}'
+     metadata: !TemplateMetadata
+       choices_in_prompt: false
+       languages:
+       - en
+       metrics:
+       - Squad
+       original_task: false
+     name: extract_answer
+     reference: ''
+   be39974f-aa86-4076-b444-bd3c2732b17b: !Template
+     answer_choices: null
+     id: be39974f-aa86-4076-b444-bd3c2732b17b
+     jinja: "Help me complete the dialogue about this passage. If there is no answer\
+       \ in the passage, say \"unknown\".\n\nPassage: {{story}}\n\nQ: {{question}}\
+       \ \nA: ||| {% if answer[\"answer_start\"] != -1 %}\n{{answer[\"input_text\"\
+       ]}}\n{% else %}\nunknown\n{% endif %}"
+     metadata: !TemplateMetadata
+       choices_in_prompt: false
+       languages:
+       - en
+       metrics:
+       - Other
+       original_task: true
+     name: Help me
+     reference: 'Metric: variant of SQuAD (Section 6.1 of the paper)'
+   d95440ce-d538-40f8-ae09-664e05852ca8: !Template
+     answer_choices: null
+     id: d95440ce-d538-40f8-ae09-664e05852ca8
+     jinja: "{{story}}\n\nQ: {{question}} \nA: ||| {% if answer[\"answer_start\"] !=\
+       \ -1 %}\n{{answer[\"input_text\"]}}\n{% else %}\nunknown\n{% endif %}"
+     metadata: !TemplateMetadata
+       choices_in_prompt: false
+       languages:
+       - en
+       metrics:
+       - Other
+       original_task: true
+     name: GPT-3 Style
+     reference: 'Brown et al. NeurIPS 2020. Metric: variant of SQuAD (Section 6.1 of
+       the paper)'
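
Each `!Template` entry above is deserialized by `DatasetTemplates` into a `Template` object keyed by its id and addressable by name. A sketch of loading and applying one of these prompts; the example dict is made up but mirrors the `story`/`question`/`answer` fields the templates reference:

    from promptsource.templates import DatasetTemplates

    coqa_prompts = DatasetTemplates("Zaid/coqa_expanded")
    t = coqa_prompts["GPT-3 Style"]
    parts = t.apply({
        "story": "Ana moved to Lisbon in 2019.",
        "question": "When did Ana move?",
        "answer": {"input_text": "2019", "answer_start": 23, "answer_end": 27},
    })
    # parts[0] is the rendered passage + question, parts[1] is "2019"
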
promptsource/templates/Zaid/quac_expanded/templates.yaml ADDED
@@ -0,0 +1,91 @@
+ dataset: Zaid/quac_expanded
+ templates:
+ 01d8c949-89a7-4a44-9a39-6cf2ac3e0a7b: !Template
+ answer_choices: null
+ id: 01d8c949-89a7-4a44-9a39-6cf2ac3e0a7b
+ jinja: "What is the answer to the last question in the dialogue below? If there\
+ \ is no answer in the passage, say \"unknown\".\n\nPassage: {{context}}\n\n\
+ Q: {{question}} \nA: ||| {{answer[\"texts\"][0]}}"
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - Other
+ original_task: true
+ name: What is the answer
+ reference: 'Metric: F1'
+ 1484c6e6-bf42-47ca-9ea7-c3c552a24de1: !Template
+ answer_choices: null
+ id: 1484c6e6-bf42-47ca-9ea7-c3c552a24de1
+ jinja: "{{context}}\n\nQ: {{question}} \nA: ||| {{answer[\"texts\"][0]}}"
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - Other
+ original_task: true
+ name: GPT-3 Style
+ reference: 'Brown et al. NeurIPS 2020. Metric: F1'
+ 2bca0532-01a3-4a64-a228-a57ae0965719: !Template
+ answer_choices: null
+ id: 2bca0532-01a3-4a64-a228-a57ae0965719
+ jinja: "Below is a passage, followed by a series of questions and answers about\
+ \ the passage. Answer the last question based on the information contained in\
+ \ the passage. If there is no answer in the passage, say \"unknown\".\n\nPassage:\
+ \ {{context}}\n\nQ: {{question}} \nA: ||| {{answer[\"texts\"][0]}}"
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - Other
+ original_task: true
+ name: Verbose instructions
+ reference: 'Metric: F1'
+ 4abd0379-dbc0-4f71-901b-dd0af3581157: !Template
+ answer_choices: null
+ id: 4abd0379-dbc0-4f71-901b-dd0af3581157
+ jinja: "Answer the last question based on the information contained in the passage.\
+ \ If there is no answer in the passage, say \"unknown\".\n\nPassage: {{context}}\n\
+ \nQ: {{question}} \nA: ||| {{answer[\"texts\"][0]}}"
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - Other
+ original_task: true
+ name: Answer the last question
+ reference: 'Metric: F1'
+ 8ebbd098-b40c-4e69-8cbb-0ffecf0fe2a6: !Template
+ answer_choices: null
+ id: 8ebbd098-b40c-4e69-8cbb-0ffecf0fe2a6
+ jinja: "Complete the dialogue based on the information contained in the passage.\
+ \ If there is no answer in the passage, say \"unknown\".\n\nPassage: {{context}}\n\
+ \nQ: {{question}} \nA: ||| {{answer[\"texts\"][0]}}"
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - Other
+ original_task: true
+ name: Complete the dialogue
+ reference: 'Metric: F1'
+ e624695b-5d26-47cc-bdb4-ac2bee4ddaea: !Template
+ answer_choices: null
+ id: e624695b-5d26-47cc-bdb4-ac2bee4ddaea
+ jinja: "Help me complete the dialogue about this passage. If there is no answer\
+ \ in the passage, say \"unknown\".\n\nPassage: {{context}}\n\nQ: {{question}}\
+ \ \nA: ||| {{answer[\"texts\"][0]}}"
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - Other
+ original_task: true
+ name: Help me
+ reference: 'Metric: F1'
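These YAML files are normally consumed through promptsource's `DatasetTemplates` API rather than parsed by hand. A sketch, assuming the API as documented in the project README (dataset and template names are the ones defined above):

```python
# Sketch: load one of the Zaid/quac_expanded templates above and apply it to a
# dataset example. Assumes promptsource's documented API; "GPT-3 Style" is the
# template name defined in this file.
from datasets import load_dataset
from promptsource.templates import DatasetTemplates

quac_prompts = DatasetTemplates("Zaid/quac_expanded")
template = quac_prompts["GPT-3 Style"]

example = load_dataset("Zaid/quac_expanded", split="train")[0]
result = template.apply(example)  # renders the jinja field and splits on "|||"
print("INPUT:", result[0])
print("TARGET:", result[1])
```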
promptsource/templates/acronym_identification/templates.yaml ADDED
@@ -0,0 +1,248 @@
+ dataset: acronym_identification
+ templates:
+ 64f438f2-9968-459f-82d2-24bad632b358: !Template
+ answer_choices: null
+ id: 64f438f2-9968-459f-82d2-24bad632b358
+ jinja: "{% set random_abbr = '' %}\n{% set _dummy = none %}\n{% set abbr_exp_dict\
+ \ = namespace(value = {}) %}\n{% set abbr_string=namespace(value='') %}\n{%\
+ \ set exp_string=namespace(value='')%}\n \n{% for label_idx in range(labels|length)\
+ \ %}\n {% if labels[label_idx] == 0 %}{# Long Beginning #}\n {% set exp_string.value\
+ \ = tokens[label_idx] %}{# Create new long string #}\n {% elif labels[label_idx]\
+ \ == 1 %}{# Short Beginning #}\n {% if abbr_string.value!='' and abbr_string.value\
+ \ not in abbr_exp_dict.value.keys()%}{# Some string already present #}\n \
+ \ {% set _dummy = abbr_exp_dict.value.update({abbr_string.value:''}) %}{#\
+ \ Discard this string as a new short string is coming #}\n {% endif %}\n\
+ \ {% set abbr_string.value = tokens[label_idx] %}{# Create new short string\
+ \ #}\n {% elif labels[label_idx] == 2 %}{# Long Intermediate #}\n {% set\
+ \ exp_string.value = exp_string.value+' '+tokens[label_idx] %}{# Update existing\
+ \ string #}\n {% elif labels[label_idx] == 3 %}{# Short Intermediate #}\n \
+ \ {% set abbr_string.value = abbr_string.value+tokens[label_idx] %}{# Update\
+ \ existing string #}\n {% else %}{# Other #}\n {# Both non-empty, and first\
+ \ characters match #}\n {% if abbr_string.value!='' and exp_string.value!=''\
+ \ and exp_string.value.split()[0][0]|lower in abbr_string.value|lower and exp_string.value.split()[-1][0]|lower\
+ \ in abbr_string.value|lower%}\n {# Update both the dictionaries #}\n \
+ \ {% set _dummy = abbr_exp_dict.value.update({abbr_string.value:exp_string.value})\
+ \ %}\n {# Empty both the strings #}\n {% set abbr_string.value= ''\
+ \ %}\n {% set exp_string.value= '' %}\n {% endif %}\n {% endif %}\n\
+ {% endfor %}\n{# Both non-empty, and first characters match #}\n{% if abbr_string.value!=''\
+ \ and exp_string.value!='' %}\n {% if exp_string.value.split()[0][0]|lower\
+ \ in abbr_string.value|lower and exp_string.value.split()[-1][0]|lower in abbr_string.value|lower\
+ \ %}\n {# Update both the dictionaries #}\n {% set _dummy = abbr_exp_dict.value.update({abbr_string.value:exp_string.value})\
+ \ %}\n {% elif abbr_exp_dict.value.items()|length==0 %}\n {% set _dummy\
+ \ = abbr_exp_dict.value.update({abbr_string.value:exp_string.value}) %}\n {%\
+ \ endif %}\n{% else %}\n {% if abbr_string.value!=''%}\n {% if abbr_string.value\
+ \ not in abbr_exp_dict.value.keys() %}\n {% set _dummy = abbr_exp_dict.value.update({abbr_string.value:''})\
+ \ %}\n {% endif %}\n {% endif %}\n{% endif %}\n{% if abbr_exp_dict.value\
+ \ %}\n{% set random_abbr = abbr_exp_dict.value.keys()|list|choice %}\nGiven\
+ \ the tokens below, find the expansion (acronym meaning) of \"{{random_abbr}}\"\
+ . Return {{\"\\\"Unclear\\\"\"}} if the expansion can't be found.\n \nTokens:\
+ \ {{tokens|join(' ')}}\nExpansion: |||\n{% if random_abbr in abbr_exp_dict.value.keys()\
+ \ and abbr_exp_dict.value[random_abbr]!='' %}\n{{abbr_exp_dict.value[random_abbr]}}\n\
+ {% else %}\nUnclear\n{% endif %}\n{% endif %}"
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - Other
+ original_task: true
+ name: find_acronym_meaning
+ reference: 'Given the tokens, find the expansion of an abbreviation in the tokens.
+ Metrics: Precision, Recall, F1'
+ 81babc83-18cd-4eed-a343-8ede56b21df5: !Template
+ answer_choices: null
+ id: 81babc83-18cd-4eed-a343-8ede56b21df5
+ jinja: "Specification for BIO tags: \"{{\"B-short\"}}\" and \"{{\"I-short\"}}\"\
+ \ represent respectively the beginning and intermediate tokens for abbreviations\
+ \ (acronyms). \"{{\"B-long\"}}\" and \"{{\"I-long\"}}\" represent respectively\
+ \ the beginning and intermediate tokens for expansions of abbreviations (acronyms\
+ \ meaning). All other tokens are represented by \"{{\"O\"}}\". \n\nGiven the\
+ \ space-separated tokens below, write down for each token the corresponding\
+ \ BIO tag. Use a space to separate tags in the answer.\n\nTokens: {{tokens|join('\
+ \ ')}}\nBIO tags:|||{% for label in labels %}{{[\"B-long\", \"B-short\", \"\
+ I-long\", \"I-short\", \"O\"][label]}}{% if not loop.last %} {%endif %}{% endfor\
+ \ %}"
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - Other
+ original_task: true
+ name: acronyms_and_expansions_bio_encode
+ reference: 'Given the space-separated tokens, generate BIO encoding for abbreviations.
+ Metrics: Precision, Recall, F1'
+ 8832e5f7-7c45-46da-b85f-71fcb444f264: !Template
+ answer_choices: null
+ id: 8832e5f7-7c45-46da-b85f-71fcb444f264
+ jinja: 'List all the expansions (meanings) of the acronyms present in the following
+ space-separated tokens. Return {{"\"No expansions found\""}} if the expansions
+ can''t be found.
+
+
+ Tokens: {{tokens|join('' '')}}
+
+ |||
+
+ {% set abbr_string=namespace(value='''') %}
+
+ {% set answer_list=namespace(value=[]) %}
+
+ {% for label_idx in range(labels|length) %}
+
+ {% if labels[label_idx] == 0 %}
+
+ {% set abbr_string.value = tokens[label_idx] %}
+
+ {% elif abbr_string.value!='''' and labels[label_idx]==2%}
+
+ {% set abbr_string.value = abbr_string.value+'' ''+tokens[label_idx] %}
+
+ {% elif abbr_string.value!='''' and labels[label_idx]!=2%}
+
+ {% set answer_list.value = answer_list.value +[abbr_string.value] %}
+
+ {% set abbr_string.value = '''' %}
+
+ {% endif %}
+
+ {% if loop.last and abbr_string.value!='''' %}
+
+ {% set answer_list.value = answer_list.value +[abbr_string.value] %}
+
+ {% endif %}
+
+ {% endfor %}
+
+ {% if answer_list.value|length!=0 %}
+
+ {{ answer_list.value|join('', '') }}
+
+ {% else %}
+
+ No expansions found
+
+ {% endif %}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - Other
+ original_task: true
+ name: list_expansions
+ reference: 'Given the tokens, list the expansion tokens. Metrics: Precision, Recall,
+ F1'
+ cae58242-cde9-472d-ae9e-56fc7e79c0d1: !Template
+ answer_choices: null
+ id: cae58242-cde9-472d-ae9e-56fc7e79c0d1
+ jinja: "List all the acronyms in the following space-separated tokens: \n\n{{tokens|join('\
+ \ ')}}\n|||\n{% set abbr_string=namespace(value='') %}\n{% set answer_list=namespace(value=[])\
+ \ %}\n{% for label_idx in range(labels|length) %}\n{% if labels[label_idx] ==\
+ \ 1 %}\n{% set abbr_string.value = tokens[label_idx] %}\n{% elif abbr_string.value!=''\
+ \ and labels[label_idx]==3%}\n{% set abbr_string.value = abbr_string.value+tokens[label_idx]\
+ \ %}\n{% elif abbr_string.value!='' and labels[label_idx]!=3 %}\n{% set answer_list.value\
+ \ = answer_list.value +[abbr_string.value] %}\n{% set abbr_string.value = ''\
+ \ %}\n{% endif %}\n{% if loop.last and abbr_string.value!='' %}\n{% set answer_list.value\
+ \ = answer_list.value +[abbr_string.value] %}\n{% endif %}\n{% endfor %}\n{{\
+ \ answer_list.value|join(', ') }}"
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - Other
+ original_task: true
+ name: list_abbreviations
+ reference: 'Given the tokens, list the abbreviations. Metrics: Precision, Recall,
+ F1'
+ e4e42433-0e37-4aa5-bbce-7f336ecac6a3: !Template
+ answer_choices: null
+ id: e4e42433-0e37-4aa5-bbce-7f336ecac6a3
+ jinja: "{% set _dummy = none %}\n{% set abbr_exp_dict = namespace(value = {})\
+ \ %}\n{% set abbr_string=namespace(value='') %}\n{% set exp_string=namespace(value='')%}\n\
+ \ \n{% for label_idx in range(labels|length) %}\n {% if labels[label_idx] ==\
+ \ 0 %}{# Long Beginning #}\n {% set exp_string.value = tokens[label_idx]\
+ \ %}{# Create new long string #}\n {% elif labels[label_idx] == 1 %}{# Short\
+ \ Beginning #}\n {% if abbr_string.value!='' and abbr_string.value not in\
+ \ abbr_exp_dict.value.keys()%}{# Some string already present #}\n {% set\
+ \ _dummy = abbr_exp_dict.value.update({abbr_string.value:''}) %}{# Discard this\
+ \ string as a new short string is coming #}\n {% endif %}\n {% set abbr_string.value\
+ \ = tokens[label_idx] %}{# Create new short string #}\n {% elif labels[label_idx]\
+ \ == 2 %}{# Long Intermediate #}\n {% set exp_string.value = exp_string.value+'\
+ \ '+tokens[label_idx] %}{# Update existing string #}\n {% elif labels[label_idx]\
+ \ == 3 %}{# Short Intermediate #}\n {% set abbr_string.value = abbr_string.value+tokens[label_idx]\
+ \ %}{# Update existing string #}\n {% else %}{# Other #}\n {# Both non-empty,\
+ \ and first characters match #}\n {% if abbr_string.value!='' and exp_string.value!=''\
+ \ and exp_string.value.split()[0][0]|lower in abbr_string.value|lower and exp_string.value.split()[-1][0]|lower\
+ \ in abbr_string.value|lower%}\n {# Update both the dictionaries #}\n \
+ \ {% set _dummy = abbr_exp_dict.value.update({abbr_string.value:exp_string.value})\
+ \ %}\n {# Empty both the strings #}\n {% set abbr_string.value= ''\
+ \ %}\n {% set exp_string.value= '' %}\n {% endif %}\n {% endif %}\n\
+ {% endfor %}\n{# Both non-empty, and first characters match #}\n{% if abbr_string.value!=''\
+ \ and exp_string.value!='' %}\n {% if exp_string.value.split()[0][0]|lower\
+ \ in abbr_string.value|lower and exp_string.value.split()[-1][0]|lower in abbr_string.value|lower\
+ \ %}\n {# Update both the dictionaries #}\n {% set _dummy = abbr_exp_dict.value.update({abbr_string.value:exp_string.value})\
+ \ %}\n {% elif abbr_exp_dict.value.items()|length==0 %}\n {% set _dummy\
+ \ = abbr_exp_dict.value.update({abbr_string.value:exp_string.value}) %}\n {%\
+ \ endif %}\n{% else %}\n {% if abbr_string.value!=''%}\n {% if abbr_string.value\
+ \ not in abbr_exp_dict.value.keys() %}\n {% set _dummy = abbr_exp_dict.value.update({abbr_string.value:''})\
+ \ %}\n {% endif %}\n {% endif %}\n{% endif %}\n \nGiven the following tokens,\
+ \ find the abbreviations (acronyms) and their expansions (acronyms meaning).\
+ \ Return {{\"\\\"Unclear\\\"\"}} if the expansion can't be found.\n \nTokens:\
+ \ {{tokens|join(' ')}}\n|||\n{% for item, value in abbr_exp_dict.value.items()\
+ \ %}\n{{item}} : {% if value!='' %}{{value}}{% else %}Unclear{% endif %}\n{%endfor%}"
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - Other
+ original_task: true
+ name: find_acronyms_and_expansions
+ reference: 'Given the tokens, find the abbreviation mapping. Metrics: Precision,
+ Recall, F1'
+ eed32ee4-ebc3-499f-ba61-e91461f56ccb: !Template
+ answer_choices: null
+ id: eed32ee4-ebc3-499f-ba61-e91461f56ccb
+ jinja: "{% set random_exp = '' %}{% set _dummy = none %}{% set exp_abbr_dict =\
+ \ namespace(value = {}) %}{% set abbr_string=namespace(value='') %}{% set exp_string=namespace(value='')%}{%\
+ \ for label_idx in range(labels|length) %}{% if labels[label_idx] == 0 %}{#\
+ \ Long Beginning #}{% if exp_string.value!='' and exp_string.value not in exp_abbr_dict.value.keys()\
+ \ %}{# Some string already present #}{% set _dummy = exp_abbr_dict.value.update({exp_string.value:''})\
+ \ %}{# Discard this string as a new long string is coming #} {% endif %}{% set\
+ \ exp_string.value = tokens[label_idx] %}{# Create new long string #}{% elif\
+ \ labels[label_idx] == 1 %}{# Short Beginning #}{% set abbr_string.value = tokens[label_idx]\
+ \ %}{# Create new short string #}{% elif labels[label_idx] == 2 %}{# Long Intermediate\
+ \ #}{% set exp_string.value = exp_string.value+' '+tokens[label_idx] %}{# Update\
+ \ existing string #}{% elif labels[label_idx] == 3 %}{# Short Intermediate #}{%\
+ \ set abbr_string.value = abbr_string.value+tokens[label_idx] %}{# Update existing\
+ \ string #}{% else %}{# Other #}{# Both non-empty, and first characters match\
+ \ #}{% if abbr_string.value!='' and exp_string.value!='' and exp_string.value.split()[0][0]|lower\
+ \ in abbr_string.value|lower and exp_string.value.split()[-1][0]|lower in abbr_string.value|lower%}{#\
+ \ Update both the dictionaries #}{% set _dummy = exp_abbr_dict.value.update({exp_string.value:abbr_string.value})\
+ \ %}{# Empty both the strings #}{% set abbr_string.value= '' %}{% set exp_string.value=\
+ \ '' %}{% endif %}{% endif %}{% endfor %}{# Both non-empty, and first characters\
+ \ match #}{% if abbr_string.value!='' and exp_string.value!='' %}{% if exp_string.value.split()[0][0]|lower\
+ \ in abbr_string.value|lower and exp_string.value.split()[-1][0]|lower in abbr_string.value|lower\
+ \ %}{# Update the dictionary #}{% set _dummy = exp_abbr_dict.value.update({exp_string.value:abbr_string.value})\
+ \ %}{% elif exp_abbr_dict.value.items()|length==0 %}{% set _dummy = exp_abbr_dict.value.update({exp_string.value:abbr_string.value})\
+ \ %}{% endif %}{% else %}{% if exp_string.value!='' %}{% if exp_string.value\
+ \ not in exp_abbr_dict.value.keys() %}{% set _dummy = exp_abbr_dict.value.update({exp_string.value:''})\
+ \ %}{% endif %}{% endif %}{% endif %}{% if exp_abbr_dict.value.items()|length!=0\
+ \ %}{% set random_exp = exp_abbr_dict.value.keys()|list|choice %}Given the tokens\
+ \ below, find the abbreviation (acronym) for: \"{{random_exp}}\". Return {{\"\
+ \\\"Unclear\\\"\"}} if the abbreviation can't be found.\n \nTokens: {{tokens|join('\
+ \ ')}}\nAcronyms: |||{% if random_exp in exp_abbr_dict.value.keys() and exp_abbr_dict.value[random_exp]!=''\
+ \ %}{{exp_abbr_dict.value[random_exp]}}{% else %}Unclear{% endif %}{% endif\
+ \ %}"
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - Other
+ original_task: true
+ name: find_acronym
+ reference: 'Given the tokens, find the abbreviation for an expansion. Metrics:
+ Precision, Recall, F1'
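The namespace-heavy Jinja above is dense; the same BIO decoding is a few lines of ordinary Python. A sketch of the `list_abbreviations` logic, with label indices following the template's own mapping `["B-long", "B-short", "I-long", "I-short", "O"]` (one deliberate difference from the Jinja version is flagged in the comments):

```python
# Plain-Python sketch of the BIO decoding that "list_abbreviations" implements
# with Jinja namespaces. Labels: 0=B-long, 1=B-short, 2=I-long, 3=I-short, 4=O.
from typing import List

def list_abbreviations(tokens: List[str], labels: List[int]) -> List[str]:
    abbrs, current = [], ""
    for token, label in zip(tokens, labels):
        if label == 1:                     # B-short starts a new acronym
            if current:                    # (the Jinja version silently drops
                abbrs.append(current)      # the previous acronym here; we keep it)
            current = token
        elif current and label == 3:       # I-short extends it, with no space
            current += token
        elif current:                      # any other tag closes the acronym
            abbrs.append(current)
            current = ""
    if current:                            # flush a trailing acronym
        abbrs.append(current)
    return abbrs

tokens = ["The", "convolutional", "neural", "network", "(", "CNN", ")", "converged"]
labels = [4, 0, 2, 2, 4, 1, 4, 4]
print(list_abbreviations(tokens, labels))  # ['CNN']
```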
promptsource/templates/ade_corpus_v2/Ade_corpus_v2_classification/templates.yaml ADDED
@@ -0,0 +1,50 @@
+ dataset: ade_corpus_v2
+ subset: Ade_corpus_v2_classification
+ templates:
+ 56bd12a8-b8ee-464e-98cc-5f586ba9f74d: !Template
+ answer_choices: No ||| Yes
+ id: 56bd12a8-b8ee-464e-98cc-5f586ba9f74d
+ jinja: 'Please answer the below Yes / No question.
+
+
+ Is "{{text}}" related to adverse drug effect (ADE)? ||| {{answer_choices[label]}}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: true
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: true
+ name: binary-classification
+ reference: ''
+ 78c4ce65-dd66-46ed-878d-11f4eca5e544: !Template
+ answer_choices: No ||| Yes
+ id: 78c4ce65-dd66-46ed-878d-11f4eca5e544
+ jinja: "Read the below text and answer the question.\n\nText: {{text}} \n\nQuestion:\
+ \ Is the above text related to adverse drug effect (ADE)? Your answer should\
+ \ be either \"Yes\" or \"No\".\n\n|||\n{{answer_choices[label]}}"
+ metadata: !TemplateMetadata
+ choices_in_prompt: true
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: true
+ name: verbose-binary-classification
+ reference: ''
+ dabc0337-5bd3-4150-98b3-794a15ce1a3a: !Template
+ answer_choices: null
+ id: dabc0337-5bd3-4150-98b3-794a15ce1a3a
+ jinja: "{% if label==1 %}\nPlease write a short medical report that is related\
+ \ to adverse drug effect (ADE). \n{% else %}\nWrite a medical report that is\
+ \ not related to adverse drug effect (ADE). \n{% endif %}\n|||\n{{text}}"
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - BLEU
+ - ROUGE
+ original_task: false
+ name: label-to-text
+ reference: ''
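Note how the classification templates pair a static `answer_choices` string with the integer `label` field: the string is split on `|||` and indexed by the label. A minimal sketch with an illustrative record:

```python
# Minimal sketch of the answer_choices/label pairing used above:
# "No ||| Yes" splits into ["No", "Yes"], indexed by the example's label.
answer_choices = [c.strip() for c in "No ||| Yes".split("|||")]

example = {"text": "The patient developed a rash after taking drug X.", "label": 1}
print(answer_choices[example["label"]])  # "Yes"
```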
promptsource/templates/ade_corpus_v2/Ade_corpus_v2_drug_ade_relation/templates.yaml ADDED
@@ -0,0 +1,125 @@
+ dataset: ade_corpus_v2
+ subset: Ade_corpus_v2_drug_ade_relation
+ templates:
+ 0ec35408-652d-4ebc-9478-5a0d330c24c8: !Template
+ answer_choices: null
+ id: 0ec35408-652d-4ebc-9478-5a0d330c24c8
+ jinja: 'Read the below text and answer the question.
+
+
+ Text: {{text}}
+
+
+ Question: What drug has an effect of {{effect}}?
+
+ |||
+
+ {{drug}}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: true
+ name: find-drug
+ reference: ''
+ 2682a789-a435-4976-b34f-f376991c842a: !Template
+ answer_choices: null
+ id: 2682a789-a435-4976-b34f-f376991c842a
+ jinja: '{{drug}} has an effect of {{effect}}. Please write a short medical report
+ about this.
+
+ |||
+
+ {{text}}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - ROUGE
+ - BLEU
+ original_task: false
+ name: drug-and-effect-to-text
+ reference: ''
+ 61ba3622-72bc-4fd8-acfc-826bc2a93aa5: !Template
+ answer_choices: null
+ id: 61ba3622-72bc-4fd8-acfc-826bc2a93aa5
+ jinja: 'Read the below text and answer the question.
+
+
+ Text: {{text}}
+
+
+ Question: What effect does {{drug}} have?
+
+ |||
+
+ {{effect}}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: true
+ name: find-effect
+ reference: ''
+ 6acf3588-baa1-4ff6-87c4-4c2356855464: !Template
+ answer_choices: null
+ id: 6acf3588-baa1-4ff6-87c4-4c2356855464
+ jinja: 'Read the below text and answer the question.
+
+
+ Text: {{text}}
+
+
+ Question: What are the drug and its effect in the above text?
+
+
+ You should answer in the "drug" and "effect" format (e.g., alcohol and high
+ blood pressure)
+
+ |||
+
+ {{drug}} and {{effect}}.'
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: true
+ name: find-drug-and-effect
+ reference: ''
+ db68e609-ba92-40ae-b161-8b7710124142: !Template
+ answer_choices: null
+ id: db68e609-ba92-40ae-b161-8b7710124142
+ jinja: 'Read the below text and answer the two following questions.
+
+
+ Text: {{text}}
+
+
+ Question 1: What is the drug in the above text?
+
+
+ Question 2: What is the effect of it?
+
+
+ You should answer in the "drug" and "effect" format (e.g., alcohol and high
+ blood pressure)
+
+ |||
+
+ {{drug}} and {{effect}}.'
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: true
+ name: find-drug-and-effect-two-questions
+ reference: ''
promptsource/templates/ade_corpus_v2/Ade_corpus_v2_drug_dosage_relation/templates.yaml ADDED
@@ -0,0 +1,114 @@
+ dataset: ade_corpus_v2
+ subset: Ade_corpus_v2_drug_dosage_relation
+ templates:
+ 1de6d411-ed0a-4d48-806e-cad009f07a65: !Template
+ answer_choices: null
+ id: 1de6d411-ed0a-4d48-806e-cad009f07a65
+ jinja: 'Read the below text and answer the question.
+
+
+ Text: {{text}}
+
+
+ Question: What drug has a dosage of {{dosage}}?
+
+ |||
+
+ {{drug}}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: true
+ name: find-drug
+ reference: ''
+ 1e719388-59c9-4b0a-9ed9-dd02b6ddd0a6: !Template
+ answer_choices: null
+ id: 1e719388-59c9-4b0a-9ed9-dd02b6ddd0a6
+ jinja: '{{dosage}} of {{drug}} was given to a patient. Please write a short medical
+ report about this.
+
+ |||
+
+ {{text}}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - BLEU
+ - ROUGE
+ original_task: false
+ name: drug-and-dosage-to-text
+ reference: ''
+ 2bed0f04-8249-4248-86ea-e3a1971b2e1b: !Template
+ answer_choices: null
+ id: 2bed0f04-8249-4248-86ea-e3a1971b2e1b
+ jinja: 'Read the below text and answer the two following questions.
+
+
+ Text: {{text}}
+
+
+
+ Question 1: What is the drug in the above text?
+
+
+ Question 2: What is the dosage of it?
+
+
+ You should answer in the "drug" and "dosage" format (e.g., Aspirin and 500mg)
+
+ |||
+
+ {{drug}} and {{dosage}}.'
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: true
+ name: find-drug-and-dosage-two-questions
+ reference: ''
+ ca175bed-d046-40e7-9dbb-1e50fde7e603: !Template
+ answer_choices: null
+ id: ca175bed-d046-40e7-9dbb-1e50fde7e603
+ jinja: 'Read the below text and answer the question.
+
+
+ Text: {{text}}
+
+
+ Question: What is the dosage of {{drug}}?
+
+ |||
+
+ {{dosage}}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: true
+ name: find-dosage
+ reference: ''
+ ce5208ac-6b4c-4a35-8738-e20232df1917: !Template
+ answer_choices: null
+ id: ce5208ac-6b4c-4a35-8738-e20232df1917
+ jinja: "Read the below text and answer the question.\n\nText: {{text}}\n\nQuestion:\
+ \ What are the drug and its dosage in the above text? \n\nYou should answer\
+ \ in the \"drug\" and \"dosage\" format (e.g., Aspirin and 500mg)\n|||\n{{drug}}\
+ \ and {{dosage}}."
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: true
+ name: find-drug-and-dosage
+ reference: ''
promptsource/templates/adversarial_qa/adversarialQA/templates.yaml ADDED
@@ -0,0 +1,120 @@
+ dataset: adversarial_qa
+ subset: adversarialQA
+ templates:
+ 00755780-f3c0-44b4-b159-8f3873cdb16c: !Template
+ answer_choices: null
+ id: 00755780-f3c0-44b4-b159-8f3873cdb16c
+ jinja: 'I want to test the ability of students to read a passage and answer questions
+ about it. Could you please come up with a good question for the passage "{{context}}"?
+ |||
+
+ {{question}}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - BLEU
+ - ROUGE
+ original_task: false
+ name: generate_question
+ reference: 'Input: Context, Output: Question (generate a question)'
+ 3b2459cc-6600-443c-abf8-8f60c34cd998: !Template
+ answer_choices: null
+ id: 3b2459cc-6600-443c-abf8-8f60c34cd998
+ jinja: '{% if metadata.split != "test" %}
+
+ I know that the answer to the question "{{question}}" is in "{{context}}". Can
+ you tell me what it is? |||
+
+
+ {{answers.text | choice}}
+
+ {% endif %}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - Squad
+ original_task: true
+ name: tell_what_it_is
+ reference: 'Input: QC, Output: A (rephrase)'
+ 5bdb1815-5c6f-49a3-ad1d-367344420701: !Template
+ answer_choices: null
+ id: 5bdb1815-5c6f-49a3-ad1d-367344420701
+ jinja: '{% if metadata.split != "test" %}
+
+ Question: "{{question}}"
+
+
+ Context: "{{context}}"
+
+
+ Answer:
+
+ |||
+
+ {{answers.text | choice}}
+
+ {% endif %}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - Squad
+ original_task: true
+ name: question_context_answer
+ reference: 'Input: QC, Output: Answer (short form)'
+ a0872cde-2f19-4ae6-919a-868da47bfbcb: !Template
+ answer_choices: null
+ id: a0872cde-2f19-4ae6-919a-868da47bfbcb
+ jinja: '{% if metadata.split != "test" %}
+
+ Extract the answer to the question from the following context.
+
+ Question: {{question}}
+
+ Context: {{context}}|||
+
+ {{answers.text | choice}}
+
+ {% endif %}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - Squad
+ original_task: true
+ name: based_on
+ reference: ''
+ a64d5a15-68e2-4d1c-b30a-ca8250c860f9: !Template
+ answer_choices: null
+ id: a64d5a15-68e2-4d1c-b30a-ca8250c860f9
+ jinja: '{% if metadata.split != "test" %}
+
+ Given the following passage
+
+
+ "{{context}}",
+
+
+ answer the following question. Note that the answer is present within the text.
+
+
+ Question: {{question}} |||
+
+ {{answers.text | choice}}
+
+ {% endif %}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - Squad
+ original_task: true
+ name: answer_the_following_q
+ reference: 'Input: QC, Output: Answer'
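Two conventions recur across the four adversarial_qa files: the `{% if metadata.split != "test" %}` guard skips rendering on the test split, whose gold answers are hidden, and `answers.text | choice` is not core Jinja but a filter promptsource registers on its environment (essentially `random.choice`). A sketch reproducing that filter with plain Jinja2 (the example record is illustrative):

```python
# Sketch: emulate promptsource's custom "choice" filter, which picks one gold
# answer at random when several are annotated.
import random
from jinja2 import Environment, BaseLoader

env = Environment(loader=BaseLoader())
env.filters["choice"] = random.choice  # assumed to mirror promptsource's filter

template = env.from_string(
    "Question: {{question}}\nContext: {{context}}|||{{answers.text | choice}}"
)
example = {
    "question": "Who wrote the novel?",
    "context": "The novel was written by B. Traven.",
    "answers": {"text": ["B. Traven"]},  # SQuAD-style answers field
}
prompt, target = template.render(**example).split("|||")
print(target)  # one gold answer, sampled at random when several exist
```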
promptsource/templates/adversarial_qa/dbert/templates.yaml ADDED
@@ -0,0 +1,120 @@
+ dataset: adversarial_qa
+ subset: dbert
+ templates:
+ 00755780-f3c0-44b4-b159-8f3873cdb16a: !Template
+ answer_choices: null
+ id: 00755780-f3c0-44b4-b159-8f3873cdb16a
+ jinja: 'I want to test the ability of students to read a passage and answer questions
+ about it. Could you please come up with a good question for the passage "{{context}}"?
+ |||
+
+ {{question}}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - BLEU
+ - ROUGE
+ original_task: false
+ name: generate_question
+ reference: 'Input: Context, Output: Question (generate a question)'
+ 3b2459cc-6600-443c-abf8-8f60c34cd99a: !Template
+ answer_choices: null
+ id: 3b2459cc-6600-443c-abf8-8f60c34cd99a
+ jinja: '{% if metadata.split != "test" %}
+
+ I know that the answer to the question "{{question}}" is in "{{context}}". Can
+ you tell me what it is? |||
+
+
+ {{answers.text | choice}}
+
+ {% endif %}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - Squad
+ original_task: true
+ name: tell_what_it_is
+ reference: 'Input: QC, Output: A (rephrase)'
+ 5bdb1815-5c6f-49a3-ad1d-36734442070a: !Template
+ answer_choices: null
+ id: 5bdb1815-5c6f-49a3-ad1d-36734442070a
+ jinja: '{% if metadata.split != "test" %}
+
+ Question: "{{question}}"
+
+
+ Context: "{{context}}"
+
+
+ Answer:
+
+ |||
+
+ {{answers.text | choice}}
+
+ {% endif %}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - Squad
+ original_task: true
+ name: question_context_answer
+ reference: 'Input: QC, Output: Answer (short form)'
+ a0872cde-2f19-4ae6-919a-868da47bfbca: !Template
+ answer_choices: null
+ id: a0872cde-2f19-4ae6-919a-868da47bfbca
+ jinja: '{% if metadata.split != "test" %}
+
+ Extract the answer to the question from the following context.
+
+ Question: {{question}}
+
+ Context: {{context}}|||
+
+ {{answers.text | choice}}
+
+ {% endif %}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - Squad
+ original_task: true
+ name: based_on
+ reference: ''
+ a64d5a15-68e2-4d1c-b30a-ca8250c860fa: !Template
+ answer_choices: null
+ id: a64d5a15-68e2-4d1c-b30a-ca8250c860fa
+ jinja: '{% if metadata.split != "test" %}
+
+ Given the following passage
+
+
+ "{{context}}",
+
+
+ answer the following question. Note that the answer is present within the text.
+
+
+ Question: {{question}} |||
+
+ {{answers.text | choice}}
+
+ {% endif %}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - Squad
+ original_task: true
+ name: answer_the_following_q
+ reference: 'Input: QC, Output: Answer'
promptsource/templates/adversarial_qa/dbidaf/templates.yaml ADDED
@@ -0,0 +1,120 @@
+ dataset: adversarial_qa
+ subset: dbidaf
+ templates:
+ 41f28b31-d0fc-4f20-a0a2-ff21813e298e: !Template
+ answer_choices: null
+ id: 41f28b31-d0fc-4f20-a0a2-ff21813e298e
+ jinja: '{% if metadata.split != "test" %}
+
+ Extract the answer to the question from the following context.
+
+ Question: {{question}}
+
+ Context: {{context}}|||
+
+ {{answers.text | choice}}
+
+ {% endif %}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - Squad
+ original_task: true
+ name: based_on
+ reference: ''
+ a64d5a15-68e2-4d1c-b30a-ca8250c860d9: !Template
+ answer_choices: null
+ id: a64d5a15-68e2-4d1c-b30a-ca8250c860d9
+ jinja: '{% if metadata.split != "test" %}
+
+ Given the following passage
+
+
+ "{{context}}",
+
+
+ answer the following question. Note that the answer is present within the text.
+
+
+ Question: {{question}} |||
+
+ {{answers.text | choice}}
+
+ {% endif %}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - Squad
+ original_task: true
+ name: answer_the_following_q
+ reference: 'Input: QC, Output: Answer'
+ c7a80603-d610-4999-98a7-815b2f84592d: !Template
+ answer_choices: null
+ id: c7a80603-d610-4999-98a7-815b2f84592d
+ jinja: 'I want to test the ability of students to read a passage and answer questions
+ about it. Could you please come up with a good question for the passage "{{context}}"?
+ |||
+
+ {{question}}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - BLEU
+ - ROUGE
+ original_task: false
+ name: generate_question
+ reference: 'Input: Context, Output: Question (generate a question)'
+ ce9bc00a-567b-4c4e-aad7-df6f5d5d57bb: !Template
+ answer_choices: null
+ id: ce9bc00a-567b-4c4e-aad7-df6f5d5d57bb
+ jinja: '{% if metadata.split != "test" %}
+
+ I know that the answer to the question "{{question}}" is in "{{context}}". Can
+ you tell me what it is? |||
+
+
+ {{answers.text | choice}}
+
+ {% endif %}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - Squad
+ original_task: true
+ name: tell_what_it_is
+ reference: 'Input: QC, Output: A (rephrase)'
+ fa185424-6ebe-49b8-b4ed-7632ca33c361: !Template
+ answer_choices: null
+ id: fa185424-6ebe-49b8-b4ed-7632ca33c361
+ jinja: '{% if metadata.split != "test" %}
+
+ Question: "{{question}}"
+
+
+ Context: "{{context}}"
+
+
+ Answer:
+
+ |||
+
+ {{answers.text | choice}}
+
+ {% endif %}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - Squad
+ original_task: true
+ name: question_context_answer
+ reference: 'Input: QC, Output: Answer (short form)'
promptsource/templates/adversarial_qa/droberta/templates.yaml ADDED
@@ -0,0 +1,120 @@
+ dataset: adversarial_qa
+ subset: droberta
+ templates:
+ 00755780-f3c0-44b4-b159-8f3873cdb163: !Template
+ answer_choices: null
+ id: 00755780-f3c0-44b4-b159-8f3873cdb163
+ jinja: 'I want to test the ability of students to read a passage and answer questions
+ about it. Could you please come up with a good question for the passage "{{context}}"?
+ |||
+
+ {{question}}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - BLEU
+ - ROUGE
+ original_task: false
+ name: generate_question
+ reference: 'Input: Context, Output: Question (generate a question)'
+ 3b2459cc-6600-443c-abf8-8f60c34cd993: !Template
+ answer_choices: null
+ id: 3b2459cc-6600-443c-abf8-8f60c34cd993
+ jinja: '{% if metadata.split != "test" %}
+
+ I know that the answer to the question "{{question}}" is in "{{context}}". Can
+ you tell me what it is? |||
+
+
+ {{answers.text | choice}}
+
+ {% endif %}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - Squad
+ original_task: true
+ name: tell_what_it_is
+ reference: 'Input: QC, Output: A (rephrase)'
+ 5bdb1815-5c6f-49a3-ad1d-367344420703: !Template
+ answer_choices: null
+ id: 5bdb1815-5c6f-49a3-ad1d-367344420703
+ jinja: '{% if metadata.split != "test" %}
+
+ Question: "{{question}}"
+
+
+ Context: "{{context}}"
+
+
+ Answer:
+
+ |||
+
+ {{answers.text | choice}}
+
+ {% endif %}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - Squad
+ original_task: true
+ name: question_context_answer
+ reference: 'Input: QC, Output: Answer (short form)'
+ a0872cde-2f19-4ae6-919a-868da47bfbc3: !Template
+ answer_choices: null
+ id: a0872cde-2f19-4ae6-919a-868da47bfbc3
+ jinja: '{% if metadata.split != "test" %}
+
+ Extract the answer to the question from the following context.
+
+ Question: {{question}}
+
+ Context: {{context}}|||
+
+ {{answers.text | choice}}
+
+ {% endif %}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - Squad
+ original_task: true
+ name: based_on
+ reference: ''
+ a64d5a15-68e2-4d1c-b30a-ca8250c860f3: !Template
+ answer_choices: null
+ id: a64d5a15-68e2-4d1c-b30a-ca8250c860f3
+ jinja: '{% if metadata.split != "test" %}
+
+ Given the following passage
+
+
+ "{{context}}",
+
+
+ answer the following question. Note that the answer is present within the text.
+
+
+ Question: {{question}} |||
+
+ {{answers.text | choice}}
+
+ {% endif %}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - Squad
+ original_task: true
+ name: answer_the_following_q
+ reference: 'Input: QC, Output: Answer'
promptsource/templates/aeslc/templates.yaml ADDED
@@ -0,0 +1,163 @@
+ dataset: aeslc
+ templates:
+ 0bef38b8-6d0b-440b-8a3d-db034aaf5a15: !Template
+ answer_choices: null
+ id: 0bef38b8-6d0b-440b-8a3d-db034aaf5a15
+ jinja: '{{ email_body }}
+
+
+ What is this email about? |||
+
+
+ {{ subject_line }}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - ROUGE
+ - Other
+ original_task: true
+ name: what_is_this_email_about
+ reference: Ask a question from a context
+ 11de8b2c-8016-4b98-b5f2-c1a7e5c0e433: !Template
+ answer_choices: null
+ id: 11de8b2c-8016-4b98-b5f2-c1a7e5c0e433
+ jinja: 'What is the subject of this email:
+
+
+ {{ email_body }} |||
+
+
+ {{ subject_line }}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - ROUGE
+ - Other
+ original_task: true
+ name: what_is_the_subject_of_this_email
+ reference: Ask a question from a context
+ 12616e45-1d61-4924-8ce4-fe3efd061e7a: !Template
+ answer_choices: null
+ id: 12616e45-1d61-4924-8ce4-fe3efd061e7a
+ jinja: 'The text below is the content of an email. What is the topic of this email?
+
+
+ {{ email_body }} |||
+
+
+ {{ subject_line }}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - ROUGE
+ - Other
+ original_task: true
+ name: the_text_below
+ reference: ''
+ 25179c66-5638-4de5-bdce-d6dccec64c65: !Template
+ answer_choices: null
+ id: 25179c66-5638-4de5-bdce-d6dccec64c65
+ jinja: 'Generate a subject line for the email body below:
+
+
+ {{ email_body }} |||
+
+
+ {{ subject_line }}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - ROUGE
+ - Other
+ original_task: true
+ name: generate_subject_line
+ reference: Instruct to generate
+ 8917d7f0-5f72-418f-a2d9-98d4a8da13b0: !Template
+ answer_choices: null
+ id: 8917d7f0-5f72-418f-a2d9-98d4a8da13b0
+ jinja: 'What is this email about:
+
+
+ {{ email_body }} |||
+
+
+ {{ subject_line }}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - ROUGE
+ - Other
+ original_task: true
+ name: what_about
+ reference: Ask a question from a context
+ d1c5da3f-f1e4-4891-abcb-79463b30a616: !Template
+ answer_choices: null
+ id: d1c5da3f-f1e4-4891-abcb-79463b30a616
+ jinja: '{{ email_body }}
+
+
+ What is the subject of this email? |||
+
+
+ {{ subject_line }}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - ROUGE
+ - Other
+ original_task: true
+ name: what_subject_of_email
+ reference: Ask a question from a context
+ d9dd8e72-acb4-4aad-aeb7-a877bacbb402: !Template
+ answer_choices: null
+ id: d9dd8e72-acb4-4aad-aeb7-a877bacbb402
+ jinja: '{{ email_body }}
+
+
+ Generate a subject line for the email body above. |||
+
+
+ {{ subject_line }}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - ROUGE
+ - Other
+ original_task: true
+ name: generate_subject
+ reference: Instruct to generate
+ dca29ebb-2372-423f-b93c-21d99eddf455: !Template
+ answer_choices: null
+ id: dca29ebb-2372-423f-b93c-21d99eddf455
+ jinja: '{{ email_body }}
+
+
+ The above text is the content of an email. What is the topic of this email?
+ |||
+
+
+ {{ subject_line }} '
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - ROUGE
+ - Other
+ original_task: true
+ name: what_topic
+ reference: ''
promptsource/templates/ag_news/templates.yaml ADDED
@@ -0,0 +1,108 @@
+ dataset: ag_news
+ templates:
+ 24e44a81-a18a-42dd-a71c-5b31b2d2cb39: !Template
+ answer_choices: World politics ||| Sports ||| Business ||| Science and technology
+ id: 24e44a81-a18a-42dd-a71c-5b31b2d2cb39
+ jinja: "What label best describes this news article?\n{{text}} ||| \n{{answer_choices[label]\
+ \ }}"
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: true
+ name: classify_question_first
+ reference: ''
+ 8fdc1056-1029-41a1-9c67-354fc2b8ceaf: !Template
+ answer_choices: World politics ||| Sports ||| Business ||| Science and technology
+ id: 8fdc1056-1029-41a1-9c67-354fc2b8ceaf
+ jinja: "Is this a piece of news regarding {{\"world politics, sports, business,\
+ \ or science and technology\"}}?\n{{text}} \n||| \n{{answer_choices[label] }}"
+ metadata: !TemplateMetadata
+ choices_in_prompt: true
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: true
+ name: classify_with_choices_question_first
+ reference: ''
+ 918267e0-af68-4117-892d-2dbe66a58ce9: !Template
+ answer_choices: Politician ||| Athlete ||| Business executive ||| Scientist
+ id: 918267e0-af68-4117-892d-2dbe66a58ce9
+ jinja: 'Would you recommend the following article to a {{"politician"}}, an {{"athlete"}},
+ a {{"business executive"}}, or a {{"scientist"}}?
+
+
+ {{ text }}
+
+ |||
+
+ {{answer_choices[label]}}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: true
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: true
+ name: recommend
+ reference: ''
+ 9345df33-4f23-4944-a33c-eef94e626862: !Template
+ answer_choices: World News ||| Sports ||| Business ||| Science and Technology
+ id: 9345df33-4f23-4944-a33c-eef94e626862
+ jinja: "{{text}} \n\nWhich of the following sections of a newspaper would this\
+ \ article likely appear in? {{\"World News\"}}, {{\"Sports\"}}, {{\"Business\"\
+ }}, or {{\"Science and Technology\"}}? ||| \n{{answer_choices[label] }}"
+ metadata: !TemplateMetadata
+ choices_in_prompt: true
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: true
+ name: which_section_choices
+ reference: ''
+ 98534347-fff7-4c39-a795-4e69a44791f7: !Template
+ answer_choices: World News ||| Sports ||| Business ||| Science and Technology
+ id: 98534347-fff7-4c39-a795-4e69a44791f7
+ jinja: "{{text}} \n\nWhich section of a newspaper would this article likely appear\
+ \ in? ||| \n{{answer_choices[label] }}"
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: true
+ name: which_section
+ reference: ''
+ b401b0ee-6ffe-4a91-8e15-77ee073cd858: !Template
+ answer_choices: World politics ||| Sports ||| Business ||| Science and technology
+ id: b401b0ee-6ffe-4a91-8e15-77ee073cd858
+ jinja: "{{text}} \nIs this a piece of news regarding {{\"world politics, sports,\
+ \ business, or science and technology\"}}? ||| \n{{answer_choices[label] }}"
+ metadata: !TemplateMetadata
+ choices_in_prompt: true
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: true
+ name: classify_with_choices
+ reference: ''
+ cb355f33-7e8c-4455-a72b-48d315bd4f60: !Template
+ answer_choices: World politics ||| Sports ||| Business ||| Science and technology
+ id: cb355f33-7e8c-4455-a72b-48d315bd4f60
+ jinja: "{{text}} \nWhat label best describes this news article? ||| \n{{answer_choices[label]\
+ \ }}"
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: true
+ name: classify
+ reference: ''
promptsource/templates/ai2_arc/ARC-Challenge/templates.yaml ADDED
@@ -0,0 +1,142 @@
+ dataset: ai2_arc
+ subset: ARC-Challenge
+ templates:
+ 32f7eb4d-dd38-4503-b67d-a8a96ab40449: !Template
+ answer_choices: null
+ id: 32f7eb4d-dd38-4503-b67d-a8a96ab40449
+ jinja: 'Pick and copy all the incorrect options for the following question:
+
+
+ {{question}}
+
+
+ Options:
+
+ - {{choices["text"] | join("\n- ")}}|||
+
+ {% for i in range(choices["label"]|length) %}
+
+ {% if i != choices["label"].index(answerKey) %}
+
+ - {{choices["text"][i]}}
+
+ {% endif %}
+
+ {% endfor %}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: true
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ - Other
+ original_task: false
+ name: pick_false_options
+ reference: ''
+ 540ebc31-2ea6-4feb-a6fd-67b6e71cf20a: !Template
+ answer_choices: '{{choices.label | join("|||")}}'
+ id: 540ebc31-2ea6-4feb-a6fd-67b6e71cf20a
+ jinja: "Here's a problem to solve: {{question}}\n\nAmong the 4 following options,\
+ \ which is the correct answer?\n{% for letter, t in zip(answer_choices, choices.text)\
+ \ %}\n- {{letter}}: {{t}}\n {% endfor %}|||{{answerKey}}"
+ metadata: !TemplateMetadata
+ choices_in_prompt: true
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: true
+ name: heres_a_problem
+ reference: ''
+ 5ec2b8ca-e4c0-444e-b097-89ccce811550: !Template
+ answer_choices: '{{choices.text | join("|||")}}'
+ id: 5ec2b8ca-e4c0-444e-b097-89ccce811550
+ jinja: '{{question}}
+
+
+ Options:
+
+ - {{answer_choices | join("\n- ")}}|||
+
+ {{answer_choices[choices["label"].index(answerKey)]}}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: true
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: true
+ name: qa_options
+ reference: ''
+ 5ff84886-9d5f-40d1-80d7-2a39b7c16ec6: !Template
+ answer_choices: '{{choices.text | join("|||")}}'
+ id: 5ff84886-9d5f-40d1-80d7-2a39b7c16ec6
+ jinja: 'I am hesitating between 4 options to answer the following question, which
+ option should I choose?
+
+ Question: {{question}}
+
+ Possibilities:
+
+ - {{answer_choices | join("\n- ")}}|||
+
+ {{answer_choices[choices["label"].index(answerKey)]}}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: true
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: true
+ name: i_am_hesitating
+ reference: ''
+ ced2b33b-b590-4522-b041-51d7dd669561: !Template
+ answer_choices: '{{choices.text | join("|||")}}'
+ id: ced2b33b-b590-4522-b041-51d7dd669561
+ jinja: 'I gave my students this multiple choice question: {{question}}
+
+
+ Only one answer is correct among these 4 choices:
+
+ - {{answer_choices | join("\n- ")}}
+
+
+ Could you tell me which one is correct?|||
+
+ {{answer_choices[choices["label"].index(answerKey)]}}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: true
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: true
+ name: multiple_choice
+ reference: ''
+ e371fc1a-8edb-477b-b345-9d73e97ffade: !Template
+ answer_choices: '{{choices.label | join("|||")}}'
+ id: e371fc1a-8edb-477b-b345-9d73e97ffade
+ jinja: 'Pick the most correct option to answer the following question.
+
+
+ {{question}}
+
+
+ Options:
+
+ {% for letter, t in zip(answer_choices, choices.text) %}
+
+ - {{letter}}: {{t}}
+
+ {% endfor %} |||
+
+ {{answerKey}}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: true
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: true
+ name: pick_the_most_correct_option
+ reference: ''
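The ARC targets all rely on one piece of index arithmetic: the gold answer text is `choices.text` at the position of `answerKey` within `choices.label`. A minimal sketch with an illustrative record in the ai2_arc schema:

```python
# Minimal sketch of the target expression
# {{answer_choices[choices["label"].index(answerKey)]}} used above.
example = {
    "question": "Which unit measures force?",
    "choices": {"text": ["newton", "joule", "watt", "pascal"],
                "label": ["A", "B", "C", "D"]},
    "answerKey": "A",
}
choices = example["choices"]
print(choices["text"][choices["label"].index(example["answerKey"])])  # "newton"
```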
promptsource/templates/ai2_arc/ARC-Easy/templates.yaml ADDED
@@ -0,0 +1,142 @@
+ dataset: ai2_arc
+ subset: ARC-Easy
+ templates:
+ 033498ca-3d9a-47e3-b631-d881ab53b5ad: !Template
+ answer_choices: '{{choices.label | join("|||")}}'
+ id: 033498ca-3d9a-47e3-b631-d881ab53b5ad
+ jinja: 'Pick the most correct option to answer the following question.
+
+
+ {{question}}
+
+
+ Options:
+
+ {% for letter, t in zip(answer_choices, choices.text) %}
+
+ - {{letter}}: {{t}}
+
+ {% endfor %} |||
+
+ {{answerKey}}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: true
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: true
+ name: pick_the_most_correct_option
+ reference: ''
+ 252aa566-9482-4e81-aad9-664a9bebd8e8: !Template
+ answer_choices: '{{choices.text | join("|||")}}'
+ id: 252aa566-9482-4e81-aad9-664a9bebd8e8
+ jinja: '{{question}}
+
+
+ Options:
+
+ - {{answer_choices | join("\n- ")}}|||
+
+ {{answer_choices[choices["label"].index(answerKey)]}}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: true
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: true
+ name: qa_options
+ reference: ''
+ 4fb13ac1-f770-45ea-b5d5-91ac50b0d609: !Template
+ answer_choices: '{{choices.text | join("|||")}}'
+ id: 4fb13ac1-f770-45ea-b5d5-91ac50b0d609
+ jinja: 'I am hesitating between 4 options to answer the following question, which
+ option should I choose?
+
+ Question: {{question}}
+
+ Possibilities:
+
+ - {{answer_choices | join("\n- ")}}|||
+
+ {{answer_choices[choices["label"].index(answerKey)]}}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: true
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: true
+ name: i_am_hesitating
+ reference: ''
+ 8c689423-880d-402b-8c7d-a1a98c7589e8: !Template
+ answer_choices: '{{choices.text | join("|||")}}'
+ id: 8c689423-880d-402b-8c7d-a1a98c7589e8
+ jinja: 'I gave my students this multiple choice question: {{question}}
+
+
+ Only one answer is correct among these 4 choices:
+
+ - {{answer_choices | join("\n- ")}}
+
+
+ Could you tell me which one is correct?|||
+
+ {{answer_choices[choices["label"].index(answerKey)]}}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: true
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: true
+ name: multiple_choice
+ reference: ''
+ c988ee30-a523-457b-af21-87353349b543: !Template
+ answer_choices: null
+ id: c988ee30-a523-457b-af21-87353349b543
+ jinja: 'Pick and copy all the incorrect options for the following question:
+
+
+ {{question}}
+
+
+ Options:
+
+ - {{choices["text"] | join("\n- ")}}|||
+
+ {% for i in range(choices["label"]|length) %}
+
+ {% if i != choices["label"].index(answerKey) %}
+
+ - {{choices["text"][i]}}
+
+ {% endif %}
+
+ {% endfor %}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: true
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ - Other
+ original_task: false
+ name: pick_false_options
+ reference: ''
+ d90da519-0e2c-4f9b-a546-7cba82824eb2: !Template
+ answer_choices: '{{choices.label | join("|||")}}'
+ id: d90da519-0e2c-4f9b-a546-7cba82824eb2
+ jinja: "Here's a problem to solve: {{question}}\n\nAmong the 4 following options,\
+ \ which is the correct answer?\n{% for letter, t in zip(answer_choices, choices.text)\
+ \ %}\n- {{letter}}: {{t}}\n {% endfor %}|||{{answerKey}}"
+ metadata: !TemplateMetadata
+ choices_in_prompt: true
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: true
+ name: heres_a_problem
+ reference: ''
promptsource/templates/amazon_polarity/templates.yaml ADDED
@@ -0,0 +1,192 @@
+ dataset: amazon_polarity
+ templates:
+ 1e90a24a-1182-43dd-9445-22f2e56e5761: !Template
+ answer_choices: Negative ||| Positive
+ id: 1e90a24a-1182-43dd-9445-22f2e56e5761
+ jinja: 'Title: {{title}}
+
+ Review: {{content}}
+
+ Is the review positive or negative? |||
+
+ {{answer_choices[label]}}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: true
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: true
+ name: Is_this_review
+ reference: ''
+ 3a48f287-6a4b-4df0-ab2d-2eaf6cb8e53d: !Template
+ answer_choices: No ||| Yes
+ id: 3a48f287-6a4b-4df0-ab2d-2eaf6cb8e53d
+ jinja: 'Based on this review, would the user recommend this product?
+
+ ===
+
+ Review: {{content}}
+
+ Answer: |||
+
+ {{answer_choices[label]}}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: true
+ name: User_recommend_this_product
+ reference: 'Reformulation equivalent to sentiment analysis: would the user recommend
+ this product?'
+ 592caf8f-f8ff-426a-a61b-b7e95ed510b6: !Template
+ answer_choices: No ||| Yes
+ id: 592caf8f-f8ff-426a-a61b-b7e95ed510b6
+ jinja: 'Is this product review positive?
+
+ Title: {{title}}
+
+ Review: {{content}}
+
+ Answer: |||
+
+ {{answer_choices[label]}}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: true
+ name: Is_this_product_review_positive
+ reference: ''
+ 745b9c05-10df-4a7e-81ad-1b88cefcb166: !Template
+ answer_choices: Yes ||| No
+ id: 745b9c05-10df-4a7e-81ad-1b88cefcb166
+ jinja: 'Title: {{title}}
+
+ Review: {{content}}
+
+ Is this product review negative?|||
+
+ {{answer_choices[label]}}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: true
+ name: Is_this_review_negative
+ reference: ''
+ 8abb5377-5dd3-4402-92a5-0d81adb6a325: !Template
+ answer_choices: Negative ||| Positive
+ id: 8abb5377-5dd3-4402-92a5-0d81adb6a325
+ jinja: 'Title: {{title}}
+
+ Review: {{content}}
+
+ Does this product review convey a negative or positive sentiment?|||
+
+ {{answer_choices[label]}}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: true
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: true
+ name: convey_negative_or_positive_sentiment
+ reference: ''
+ 9df70cdf-f8ed-4e79-8e2f-b4668058d637: !Template
+ answer_choices: Negative ||| Positive
+ id: 9df70cdf-f8ed-4e79-8e2f-b4668058d637
+ jinja: 'Is there a negative or positive tone to this product review?
+
+ ===
+
+ Title: {{title}}
+
+ Review: {{content}}
+
+ Answer: |||
+
+ {{answer_choices[label]}}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: true
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: true
+ name: negative_or_positive_tone
+ reference: ''
+ b13369e8-0500-4e93-90d4-8e6814bfb97b: !Template
+ answer_choices: dissatisfied ||| satisfied
+ id: b13369e8-0500-4e93-90d4-8e6814bfb97b
+ jinja: 'Here is a review left by a customer on a product. Would you say he was
+ {{answer_choices[1]}} or {{answer_choices[0]}}?
+
+ Title: {{title}}
+
+ Review: {{content}}
+
+ |||
+
+ {{answer_choices[label]}} '
+ metadata: !TemplateMetadata
+ choices_in_prompt: true
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: true
+ name: user_satisfied
+ reference: ''
+ b13369e8-0500-4e93-90d4-8e6814bfb98b: !Template
+ answer_choices: decrease ||| increase
+ id: b13369e8-0500-4e93-90d4-8e6814bfb98b
+ jinja: 'You are considering whether to buy a product. You look at the reviews.
+ Would the following review {{answer_choices[0]}} or {{answer_choices[1]}} the
+ chances of you buying the product?
+
+ Review title: {{title}}
+
+ Product review: {{content}}
+
+ |||
+
+ {{answer_choices[label]}} '
+ metadata: !TemplateMetadata
+ choices_in_prompt: true
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: true
+ name: would_you_buy
+ reference: ''
+ b13369e8-0500-4e93-90d4-8e6814bfb99b: !Template
+ answer_choices: unflattering ||| flattering
+ id: b13369e8-0500-4e93-90d4-8e6814bfb99b
+ jinja: 'Title: {{title}}
+
+ Product review: {{content}}
+
+ Would you say this review depicts the product in a {{answer_choices[1]}} or
+ {{answer_choices[0]}} light?
+
+ |||
+
+ {{answer_choices[label]}} '
+ metadata: !TemplateMetadata
+ choices_in_prompt: true
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: true
+ name: flattering_or_not
+ reference: ''
promptsource/templates/amazon_reviews_multi/en/templates.yaml ADDED
@@ -0,0 +1,147 @@
+ dataset: amazon_reviews_multi
+ subset: en
+ templates:
+ 073dfd34-5aef-461a-81d9-bdb8e00f12c9: !Template
+ answer_choices: null
+ id: 073dfd34-5aef-461a-81d9-bdb8e00f12c9
+ jinja: 'Write a title for the review below:
+
+ ===
+
+ {{review_body}} |||
+
+ {{review_title}}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - BLEU
+ - ROUGE
+ original_task: false
+ name: generate_title
+ reference: Review Title based on Review body
+ 0f5b005b-c6bc-4fe0-bde4-0917cdba39e8: !Template
+ answer_choices: 1|||2|||3|||4|||5
+ id: 0f5b005b-c6bc-4fe0-bde4-0917cdba39e8
+ jinja: 'Rate the product by the number of stars based on the review title below:
+ (1 being the lowest and 5 the highest)
+
+ ===
+
+ {{review_title}} |||
+
+ {{answer_choices[stars-1]}}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ - Other
+ original_task: false
+ name: prompt_title_to_star
+ reference: Rating based on review title
+ 199ad6de-5bcc-421e-90e2-4b6edada6a01: !Template
+ answer_choices: 1|||2|||3|||4|||5
+ id: 199ad6de-5bcc-421e-90e2-4b6edada6a01
+ jinja: 'Rate the product by the number of stars based on the review body below:
+ (1 being the lowest and 5 the highest)
+
+ ===
+
+ {{review_body}} |||
+
+ {{answer_choices[stars-1]}}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ - Other
+ original_task: true
+ name: prompt_review_to_star
+ reference: Rating based on review body
+ 37806754-58f7-4383-961a-fe2c88109fcd: !Template
+ answer_choices: 1|||2|||3|||4|||5
+ id: 37806754-58f7-4383-961a-fe2c88109fcd
+ jinja: 'Rate the product by the number of stars based on the review below: (1
+ being the lowest and 5 the highest)
+
+ ===
+
+ {{review_title}}. {{review_body}} |||
+
+ {{answer_choices[stars-1]}}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ - Other
+ original_task: true
+ name: prompt_body_title_to_star
+ reference: Rating based on review body, title
+ 7ecaf718-c85d-47f4-83cb-f14c58f2911f: !Template
+ answer_choices: null
+ id: 7ecaf718-c85d-47f4-83cb-f14c58f2911f
+ jinja: 'Guess the product category from the following review:
+
+ ===
+
+ {{review_body}} |||
+
+ {{product_category}}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ - Other
+ original_task: false
+ name: prompt_review_to_category
+ reference: Product category based on review body
+ 8e8973f6-431f-4e78-b83a-a86c04655882: !Template
+ answer_choices: 1|||2|||3|||4|||5
+ id: 8e8973f6-431f-4e78-b83a-a86c04655882
+ jinja: 'Rate the product by the number of stars based on the review below: (1
+ being the lowest and 5 the highest)
+
+ ===
+
+ {{review_title}}. {{review_body}} Product category: {{product_category}}|||
+
+ {{answer_choices[stars-1]}}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ - Other
+ original_task: true
+ name: prompt_body_title_category_to_star
+ reference: Rating based on review body, title, category
+ c4717e75-4d3e-4b79-9737-167155f51513: !Template
+ answer_choices: null
+ id: c4717e75-4d3e-4b79-9737-167155f51513
+ jinja: 'Guess the product category from the review title below:
+
+ ===
+
+ {{review_title}} |||
+
+ {{product_category}}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ - Other
+ original_task: false
+ name: prompt_title_to_product_category
+ reference: Product category from review title
promptsource/templates/amazon_us_reviews/Wireless_v1_00/templates.yaml ADDED
@@ -0,0 +1,79 @@
+ dataset: amazon_us_reviews
+ subset: Wireless_v1_00
+ templates:
+ 5feaa0d7-e4e0-46cc-8517-e00bfa7fd00e: !Template
+ answer_choices: null
+ id: 5feaa0d7-e4e0-46cc-8517-e00bfa7fd00e
+ jinja: "Give a short sentence describing the following product review:\n{{review_body}}\
+ \ \n|||\n{{review_headline}}"
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - ROUGE
+ - BLEU
+ original_task: false
+ name: Generate review headline based on review body
+ reference: Generate review headline based on review body
+ 9588a967-d698-4a33-9b96-a5254df9d260: !Template
+ answer_choices: null
+ id: 9588a967-d698-4a33-9b96-a5254df9d260
+ jinja: Generate a {{star_rating}}-star review (1 being lowest and 5 being highest)
+ about this product {{product_title}}. ||| {{review_body}}
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - BLEU
+ - ROUGE
+ original_task: false
+ name: Generate review based on rating and category
+ reference: Generate review based on rating and category
+ 9a8b953d-2c68-4046-a7b7-8fd5f7469d10: !Template
+ answer_choices: '1 ||| 2 ||| 3 ||| 4 ||| 5 '
+ id: 9a8b953d-2c68-4046-a7b7-8fd5f7469d10
+ jinja: "Given the following review headline \n{{review_headline}}\npredict\
+ \ the associated rating from the following choices\n- {{ answer_choices | join('\\\
+ n- ') }} \n(1 being lowest and 5 being highest)\n|||\n{{answer_choices[star_rating-1]}}"
+ metadata: !TemplateMetadata
+ choices_in_prompt: true
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: true
+ name: Given the review headline return a categorical rating
+ reference: 'Given the review headline, return a categorical rating. '
+ e40e4a53-ca5d-4fc8-a7c3-be9adfe0dbec: !Template
+ answer_choices: null
+ id: e40e4a53-ca5d-4fc8-a7c3-be9adfe0dbec
+ jinja: "Generate a {{star_rating}}-star review headline (1 being lowest and 5\
+ \ being highest) about this product: \n{{product_title}} \n||| \
+ \ \n{{review_headline}}"
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - BLEU
+ - ROUGE
+ original_task: false
+ name: Generate review headline based on rating
+ reference: 'Generate review headline based on rating. '
+ e6a1bbde-715d-4dad-9178-e2bcfaf5c646: !Template
+ answer_choices: 1 ||| 2 ||| 3 ||| 4 ||| 5
+ id: e6a1bbde-715d-4dad-9178-e2bcfaf5c646
+ jinja: "Given the following review:\n{{review_body}}\npredict the associated rating\
+ \ from the following choices (1 being lowest and 5 being highest)\n- {{ answer_choices\
+ \ | join('\\n- ') }} \n|||\n{{answer_choices[star_rating-1]}}"
+ metadata: !TemplateMetadata
+ choices_in_prompt: true
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: true
+ name: Given the review body return a categorical rating
+ reference: 'Given the review body, return a categorical rating. '
promptsource/templates/ambig_qa/light/templates.yaml ADDED
@@ -0,0 +1,128 @@
+ dataset: ambig_qa
+ subset: light
+ templates:
+ 050b1534-b53f-4341-b42c-6e689ef8911b: !Template
+ answer_choices: null
+ id: 050b1534-b53f-4341-b42c-6e689ef8911b
+ jinja: "{# Assignment in if clause breaks test, we need to declare variables\
+ \ in global scope first: https://github.com/pallets/jinja/issues/1314 #}\n{%\
+ \ set selected_question = \"\" %}\n{% set selected_answer = \"\" %}\n{% set\
+ \ random_question_id = -1 %}\n{% if annotations.type[0] == \"multipleQAs\" %}\n\
+ \ {% set random_question_id = range(0, annotations.qaPairs[0].question | length)\
+ \ | choice%}\n {% set selected_question = annotations.qaPairs[0].question[random_question_id]%}\n\
+ \ {% set selected_answer = annotations.qaPairs[0].answer[random_question_id]\
+ \ | choice%}\n{% else %}\n {% set selected_question = question %}\n {% set\
+ \ selected_answer = annotations.answer[0] | choice %}\n{% endif %}\n\nHere's\
+ \ a question-answer pair: {{question}} {{selected_answer}}.\nIs the question\
+ \ ambiguous? If so, generate a better question suitable for the answer. Otherwise,\
+ \ output the same question.\n|||\n{{selected_question}}"
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - BLEU
+ - Edit Distance
+ original_task: false
+ name: is_question_ambiguous
+ reference: ''
+ 09880e1a-0fcc-49dc-8462-b6603e15d691: !Template
+ answer_choices: null
+ id: 09880e1a-0fcc-49dc-8462-b6603e15d691
+ jinja: "What are the possible answers to the question \"{{question}}\"? Use semi-colons\
+ \ to separate your answers if you have multiple answers.\n\n|||\n\n{# Assignment\
+ \ in if clause breaks test, we need to declare variables in global scope first:\
+ \ https://github.com/pallets/jinja/issues/1314 #}\n{% set random_answer = \"\
+ \" %}\n{% set random_answer_form = \"\" %}\n{% if annotations.type[0] == \"\
+ singleAnswer\" %}\n {% set random_answer_form = [] %}\n {% for possible_answer\
+ \ in annotations.answer[0] %}\n {{ random_answer_form.append(possible_answer\
+ \ ) or \"\"}}\n {% endfor %}\n{% else %}\n {% set random_answer_form =\
+ \ [] %}\n {% for possible_answers in annotations.qaPairs[0].answer %}\n \
+ \ {% for possible_answer in possible_answers %}\n {{ random_answer_form.append(possible_answer\
+ \ ) or \"\"}}\n {% endfor %}\n {% endfor %}\n{% endif %}\n\n{{random_answer_form\
+ \ | join(\"; \")}}"
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - Other
+ original_task: true
+ name: answer_prediction_all_answers_interrogative
+ reference: ''
+ 45b20de4-a3c1-4e76-ad79-06d7c8c66009: !Template
+ answer_choices: null
+ id: 45b20de4-a3c1-4e76-ad79-06d7c8c66009
+ jinja: "Given the question \"{{question}}\", generate all the possible answers,\
+ \ separated by semi-colons.\n\n|||\n\n{# Assignment in if clause breaks test,\
+ \ we need to declare variables in global scope first: https://github.com/pallets/jinja/issues/1314\
+ \ #}\n{% set random_answer = \"\" %}\n{% set random_answer_form = \"\" %}\n\
+ {% if annotations.type[0] == \"singleAnswer\" %}\n {% set random_answer_form\
+ \ = [] %}\n {% for possible_answer in annotations.answer[0] %}\n {{ random_answer_form.append(possible_answer\
+ \ ) or \"\"}}\n {% endfor %}\n{% else %}\n {% set random_answer_form =\
+ \ [] %}\n {% for possible_answers in annotations.qaPairs[0].answer %}\n \
+ \ {% for possible_answer in possible_answers %}\n {{ random_answer_form.append(possible_answer\
+ \ ) or \"\"}}\n {% endfor %}\n {% endfor %}\n{% endif %}\n\n{{random_answer_form\
+ \ | join(\"; \")}}"
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - Other
+ original_task: true
+ name: answer_prediction_all_answers_affirmative
+ reference: ''
+ 72bf511b-44ce-4b9f-a2d0-5ed6334f0e07: !Template
+ answer_choices: Yes ||| No
+ id: 72bf511b-44ce-4b9f-a2d0-5ed6334f0e07
+ jinja: "{# Assignment in if clause breaks test, we need to declare variables\
+ \ in global scope first: https://github.com/pallets/jinja/issues/1314 #}\n{%\
+ \ set random_question_id = -1 %}\n{% set random_answer_id = -1 %}\n{% set selected_question\
+ \ = \"\" %}\n{% set selected_answer = \"\" %}\n{% if annotations.type[0] ==\
+ \ \"multipleQAs\" %}\n {% set random_question_id = range(0, annotations.qaPairs[0].question\
+ \ | length) | choice%}\n {% set random_answer_id = range(0, annotations.qaPairs[0].answer\
+ \ | length) | choice%}\n {% set selected_question = annotations.qaPairs[0].question[random_question_id]\
+ \ %}\n {% set selected_answer = annotations.qaPairs[0].answer[random_answer_id]\
+ \ | choice%}\n{% else %}\n {% set random_question_id = 0 %}\n {% set random_answer_id\
+ \ = 0 %}\n {% set selected_question = question %}\n {% set selected_answer\
+ \ = annotations.answer[0] | choice %}\n{% endif %}\n\nIs \"{{selected_answer}}\"\
+ \ an acceptable answer to \"{{selected_question}}\"? {{answer_choices[0]}} or\
+ \ {{answer_choices[1].lower()}}?\n\n|||\n\n{% if random_answer_id == random_question_id\
+ \ %} {{answer_choices[0]}} {% else %} {{answer_choices[1]}} {% endif %}"
+ metadata: !TemplateMetadata
+ choices_in_prompt: true
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: false
+ name: answer_prediction_yes_or_no
+ reference: Classify if the given answer is correct compared to the chosen question
+ bb089312-23cb-475d-93b5-952781bc6be4: !Template
+ answer_choices: null
+ id: bb089312-23cb-475d-93b5-952781bc6be4
+ jinja: "{# Assignment in if clause breaks test, we need to declare variables\
+ \ in global scope first: https://github.com/pallets/jinja/issues/1314 #}\n{%\
+ \ set selected_question = \"\" %}\n{% set selected_answer = \"\" %}\n{% set\
+ \ random_question_id = -1 %}\n{% if annotations.type[0] == \"multipleQAs\" %}\n\
+ \ {% set random_question_id = range(0, annotations.qaPairs[0].question | length)\
+ \ | choice%}\n {% set selected_question = annotations.qaPairs[0].question[random_question_id]%}\n\
+ \ {% set selected_answer = annotations.qaPairs[0].answer[random_question_id]\
+ \ | choice%}\n{% else %}\n {% set selected_question = question %}\n {% set\
+ \ selected_answer = annotations.answer[0] | choice %}\n{% endif %}\n\nQuestion:\
+ \ {{question}}\nAnswer: {{selected_answer}}\n\nKnowing that the question can\
+ \ be ambiguous, can you perform question disambiguation by generating a question\
+ \ such that \"{{selected_answer}}\" is a more suitable answer? If you deem that\
+ \ the question is not ambiguous, generate the same question given above.\n|||\n\
+ {{selected_question}}"
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - BLEU
+ - ROUGE
+ original_task: false
+ name: perform_question_disambiguation
+ reference: ''
promptsource/templates/anli/templates.yaml ADDED
@@ -0,0 +1,221 @@
+ dataset: anli
+ templates:
+ 0cc3ae39-3997-4686-8c93-5d51457efa1f: !Template
+ answer_choices: Correct ||| Inconclusive ||| Incorrect
+ id: 0cc3ae39-3997-4686-8c93-5d51457efa1f
+ jinja: '{{premise}} Using only the above description and what you know about the
+ world, "{{hypothesis}}" is definitely correct, incorrect, or inconclusive? |||
+ {{ answer_choices[label] }}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: true
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: true
+ name: MNLI crowdsource
+ reference: Adapted from Williams et al. 2018's instructions to crowdsourcing workers.
+ 179eb863-3ece-4e6f-af0f-fcb46d997306: !Template
+ answer_choices: Yes ||| Maybe ||| No
+ id: 179eb863-3ece-4e6f-af0f-fcb46d997306
+ jinja: 'Given {{premise}} Should we assume that "{{hypothesis}}" is true? Yes,
+ no, or maybe? ||| {{ answer_choices[label] }} '
+ metadata: !TemplateMetadata
+ choices_in_prompt: true
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: true
+ name: should assume
+ reference: Webson & Pavlick 2021
+ 5459237b-97de-4340-bf7b-2939c3f7ca19: !Template
+ answer_choices: Yes ||| Maybe ||| No
+ id: 5459237b-97de-4340-bf7b-2939c3f7ca19
+ jinja: Given that {{premise}} Does it follow that {{hypothesis}} Yes, no, or maybe?
+ ||| {{ answer_choices[label] }}
+ metadata: !TemplateMetadata
+ choices_in_prompt: true
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: true
+ name: does it follow that
+ reference: Sanh et al. 2021
+ 620aa3fc-d5eb-46f5-a1ee-4c754527aa97: !Template
+ answer_choices: True ||| Neither ||| False
+ id: 620aa3fc-d5eb-46f5-a1ee-4c754527aa97
+ jinja: '{{premise}}
+
+ Question: {{hypothesis}} True, False, or Neither? ||| {{ answer_choices[label]
+ }}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: true
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: true
+ name: GPT-3 style
+ reference: 'Same as reported in Figure G7 of the GPT-3 paper, except that there
+ is no task identifying tokens like "anli R1: ".'
+ 9b613182-c6ab-4427-9221-3d68f6d62765: !Template
+ answer_choices: Yes ||| Maybe ||| No
+ id: 9b613182-c6ab-4427-9221-3d68f6d62765
+ jinja: '{{premise}} Based on the previous passage, is it true that "{{hypothesis}}"?
+ Yes, no, or maybe? ||| {{ answer_choices[label] }}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: true
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: true
+ name: based on the previous passage
+ reference: "Adapted from the BoolQ prompts in Schick & Sch\xFCtze 2021."
+ a850110d-f1a3-49b4-949a-d3bfe9f81344: !Template
+ answer_choices: Yes ||| Maybe ||| No
+ id: a850110d-f1a3-49b4-949a-d3bfe9f81344
+ jinja: '{{premise}} Are we justified in saying that "{{hypothesis}}"? Yes, no,
+ or maybe? ||| {{ answer_choices[label] }} '
+ metadata: !TemplateMetadata
+ choices_in_prompt: true
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: true
+ name: justified in saying
+ reference: Webson & Pavlick 2021
+ bab86d5a-4f9c-40db-b619-a7b7d5cae681: !Template
+ answer_choices: True ||| Inconclusive ||| False
+ id: bab86d5a-4f9c-40db-b619-a7b7d5cae681
+ jinja: 'Take the following as truth: {{premise}}
+
+ Then the following statement: "{{hypothesis}}" is {{"true"}}, {{"false"}}, or
+ {{"inconclusive"}}? ||| {{ answer_choices[label] }}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: true
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: true
+ name: take the following as truth
+ reference: Sanh et al. 2021
+ bcd90047-3a2b-426b-b065-8a418f1317b8: !Template
+ answer_choices: Yes ||| Maybe ||| No
+ id: bcd90047-3a2b-426b-b065-8a418f1317b8
+ jinja: 'Given that {{premise}} Therefore, it must be true that "{{hypothesis}}"?
+ Yes, no, or maybe? ||| {{ answer_choices[label] }} '
+ metadata: !TemplateMetadata
+ choices_in_prompt: true
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: true
+ name: must be true
+ reference: Sanh et al. 2021
+ c4ed37ae-d7d7-4197-a725-ef2152fa3b1f: !Template
+ answer_choices: Yes ||| Maybe ||| No
+ id: c4ed37ae-d7d7-4197-a725-ef2152fa3b1f
+ jinja: 'Suppose {{premise}} Can we infer that "{{hypothesis}}"? Yes, no, or maybe?
+ ||| {{ answer_choices[label] }} '
+ metadata: !TemplateMetadata
+ choices_in_prompt: true
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: true
+ name: can we infer
+ reference: Webson & Pavlick 2021
+ ca24b93a-6265-462f-b140-e329c03d94fa: !Template
+ answer_choices: Guaranteed ||| Possible ||| Impossible
+ id: ca24b93a-6265-462f-b140-e329c03d94fa
+ jinja: "Assume it is true that {{premise}} \n\nTherefore, \"{{hypothesis}}\" is\
+ \ {{\"guaranteed\"}}, {{\"possible\"}}, or {{\"impossible\"}}? ||| {{ answer_choices[label]\
+ \ }}"
+ metadata: !TemplateMetadata
+ choices_in_prompt: true
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: true
+ name: guaranteed/possible/impossible
+ reference: Sanh et al. 2021
+ dbc68425-5c42-43ae-9748-70ce8c5a167e: !Template
+ answer_choices: Always ||| Sometimes ||| Never
+ id: dbc68425-5c42-43ae-9748-70ce8c5a167e
+ jinja: Suppose it's true that {{premise}} Then, is "{{hypothesis}}" {{"always"}},
+ {{"sometimes"}}, or {{"never"}} true? ||| {{ answer_choices[label] }}
+ metadata: !TemplateMetadata
+ choices_in_prompt: true
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: true
+ name: always/sometimes/never
+ reference: Sanh et al. 2021
+ e5b7fdd7-fdff-4630-889b-3c7a052e5da0: !Template
+ answer_choices: Yes ||| Maybe ||| No
+ id: e5b7fdd7-fdff-4630-889b-3c7a052e5da0
+ jinja: "{{premise}} \n\nQuestion: Does this imply that \"{{hypothesis}}\"? Yes,\
+ \ no, or maybe? ||| {{answer_choices[label]}}"
+ metadata: !TemplateMetadata
+ choices_in_prompt: true
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: true
+ name: does this imply
+ reference: Sanh et al. 2021
+ e6f32b9c-7e0b-474a-a0d2-e84d20c22aba: !Template
+ answer_choices: Always ||| Sometimes ||| Never
+ id: e6f32b9c-7e0b-474a-a0d2-e84d20c22aba
+ jinja: "{{premise}} \n\nKeeping in mind the above text, consider: {{hypothesis}}\
+ \ Is this {{\"always\"}}, {{\"sometimes\"}}, or {{\"never\"}} correct? ||| {{\
+ \ answer_choices[label] }}"
+ metadata: !TemplateMetadata
+ choices_in_prompt: true
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: true
+ name: consider always/sometimes/never
+ reference: Sanh et al. 2021
+ ec249357-e672-4e7d-b8b6-d97ed7d090c5: !Template
+ answer_choices: True ||| Inconclusive ||| False
+ id: ec249357-e672-4e7d-b8b6-d97ed7d090c5
+ jinja: '{{premise}} Based on that information, is the claim: "{{hypothesis}}"
+ {{"true"}}, {{"false"}}, or {{"inconclusive"}}? ||| {{ answer_choices[label]
+ }}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: true
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: true
+ name: claim true/false/inconclusive
+ reference: Sanh et al. 2021
+ ffa0a6f0-7186-4ccb-bb35-8b1affb747a0: !Template
+ answer_choices: Yes ||| Maybe ||| No
+ id: ffa0a6f0-7186-4ccb-bb35-8b1affb747a0
+ jinja: 'Given {{premise}} Is it guaranteed true that "{{hypothesis}}"? Yes, no,
+ or maybe? ||| {{ answer_choices[label] }} '
+ metadata: !TemplateMetadata
+ choices_in_prompt: true
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: true
+ name: guaranteed true
+ reference: Webson & Pavlick 2021
promptsource/templates/app_reviews/templates.yaml ADDED
@@ -0,0 +1,78 @@
+ dataset: app_reviews
+ templates:
+ 2da8f134-58db-4f9d-b3b0-8c6b50693ab5: !Template
+ answer_choices: Not at all ||| No ||| Maybe ||| Yes ||| Definitely
+ id: 2da8f134-58db-4f9d-b3b0-8c6b50693ab5
+ jinja: 'Given this review: "{{review}}"
+
+ Would you recommend this app to a friend? {{answer_choices[0]}}, {{answer_choices[1]}},
+ {{answer_choices[2]}}, {{answer_choices[3]}}, or {{answer_choices[4]}}?
+
+ |||
+
+ {{answer_choices[star-1]}}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: true
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ - Spearman Correlation
+ original_task: false
+ name: categorize_rating_using_review
+ reference: Given the review, return a categorical answer.
+ 8086b434-a75e-45a4-87fb-4364601e2e05: !Template
+ answer_choices: null
+ id: 8086b434-a75e-45a4-87fb-4364601e2e05
+ jinja: 'Generate a {{star}}-star review (1 being lowest and 5 being highest) about
+ an app with package {{package_name}}.
+
+ |||
+
+ {{review}}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: null
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ - Spearman Correlation
+ original_task: false
+ name: generate_review
+ reference: Generate a review from the rating.
+ 9746ce4b-ac58-4dfb-9783-d77c95cb62cf: !Template
+ answer_choices: "\u2605 ||| \u2605\u2605 ||| \u2605\u2605\u2605 ||| \u2605\u2605\
+ \u2605\u2605 ||| \u2605\u2605\u2605\u2605\u2605"
+ id: 9746ce4b-ac58-4dfb-9783-d77c95cb62cf
+ jinja: "What would be the \u2605-rating of this review (\u2605 being the lowest\
+ \ and \u2605\u2605\u2605\u2605\u2605 being the highest)? \"{{review}}\"\n|||\n\
+ {{answer_choices[star-1]}}"
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ - Spearman Correlation
+ original_task: false
+ name: convert_to_star_rating
+ reference: Given the review, generate a star rating.
+ d34e1413-2699-4701-baa2-05d931d012ba: !Template
+ answer_choices: null
+ id: d34e1413-2699-4701-baa2-05d931d012ba
+ jinja: 'On a scale of 1-5 (with 1 being least favorable and 5 being most favorable),
+ how would you rate this review? "{{review}}"
+
+ |||
+
+ {{star}}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ - Spearman Correlation
+ original_task: false
+ name: convert_to_rating
+ reference: Convert review to rating
promptsource/templates/aqua_rat/raw/templates.yaml ADDED
@@ -0,0 +1,131 @@
+ dataset: aqua_rat
+ subset: raw
+ templates:
+ 13bd5099-33fa-4383-a441-33a7d2e1746f: !Template
+ answer_choices: A ||| B ||| C ||| D ||| E
+ id: 13bd5099-33fa-4383-a441-33a7d2e1746f
+ jinja: "Given the problem:\n{{question}}\n\nand the options:\n{% for i in range(options|length)\
+ \ %}\n{{options[i].replace(')', ') ')}}\n{% endfor %}\n\nThe correct answer\
+ \ is\n |||\n{{correct}}"
+ metadata: !TemplateMetadata
+ choices_in_prompt: true
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: false
+ name: select_the_best_option
+ reference: ''
+ 58a6aa2b-ca26-473d-9bf8-385dd1a743cd: !Template
+ answer_choices: A ||| B ||| C ||| D ||| E
+ id: 58a6aa2b-ca26-473d-9bf8-385dd1a743cd
+ jinja: 'You will now be given a question and a set of options. Choose the correct
+ option and provide a rationale for the same.
+
+
+ Question:
+
+ {{question}}
+
+
+ Options:
+
+ {% for i in range(options|length) %}
+
+ - {{options[i].replace('')'', '') '')}}
+
+ {% endfor %}
+
+
+ |||
+
+ {{correct}}
+
+
+ {{rationale}}
+
+ '
+ metadata: !TemplateMetadata
+ choices_in_prompt: true
+ languages:
+ - en
+ metrics:
+ - Other
+ original_task: true
+ name: generate_rational_and_correct_choice
+ reference: ''
+ 5acfaa48-e1b6-44df-8e92-c58b94bff595: !Template
+ answer_choices: null
+ id: 5acfaa48-e1b6-44df-8e92-c58b94bff595
+ jinja: "Answer the given question by providing the correct rationale:\n\n{{question}}\n\
+ {% for i in range(options|length) %}\n {{options[i].replace(')', ') ')}}\n\
+ {%endfor%}\n|||\n{{rationale}}"
+ metadata: !TemplateMetadata
+ choices_in_prompt: true
+ languages:
+ - en
+ metrics:
+ - BLEU
+ - ROUGE
+ original_task: false
+ name: generate_rationale
+ reference: ''
+ 815acaf5-2e59-4f81-8190-ae75dc237cf1: !Template
+ answer_choices: A ||| B ||| C ||| D ||| E
+ id: 815acaf5-2e59-4f81-8190-ae75dc237cf1
+ jinja: '{{question}}
+
+
+ The above question was asked in a Math test. Given the following options, can
+ you choose the correct one?
+
+
+ {% for i in range(options|length) %}
+
+ - {{options[i].replace('')'', '') '')}}
+
+ {% endfor %}
+
+ |||
+
+ {{correct}}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: true
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: false
+ name: answer_quiz
+ reference: ''
+ c0403841-68b0-4c08-8c3b-a00a81272d05: !Template
+ answer_choices: A ||| B ||| C ||| D ||| E
+ id: c0403841-68b0-4c08-8c3b-a00a81272d05
+ jinja: "Solve the following question and choose the correct option.\n\n{{question}}\
+ \ \n{% for i in range(options|length) %}\n- {{options[i].replace(')', ') ')}}\n\
+ {%endfor%}\n||| \n{{correct}}\n\n"
+ metadata: !TemplateMetadata
+ choices_in_prompt: true
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: false
+ name: Answer questions from options
+ reference: ''
+ c9352c6c-074b-4beb-8489-c151adeeedcb: !Template
+ answer_choices: null
+ id: c9352c6c-074b-4beb-8489-c151adeeedcb
+ jinja: "Question: \n{{question}}\n\nOptions: \n{% for i in range(options|length)\
+ \ %}\n- {{options[i].replace(')', ') ')}}\n{% endfor %}\n\nThis is how I solved\
+ \ the above question:\n|||\n{{rationale}}\n"
+ metadata: !TemplateMetadata
+ choices_in_prompt: true
+ languages:
+ - en
+ metrics:
+ - BLEU
+ - ROUGE
+ original_task: false
+ name: answer_question_with_rationale
+ reference: ''
promptsource/templates/art/templates.yaml ADDED
@@ -0,0 +1,133 @@
+ dataset: art
+ templates:
+ 151d0e97-d7d2-47f2-86b4-6777587b16f2: !Template
+ answer_choices: '{{hypothesis_1 | trim(''.?!'') }} ||| {{hypothesis_2 | trim(''.?!'')
+ }}'
+ id: 151d0e97-d7d2-47f2-86b4-6777587b16f2
+ jinja: "We know that:\n\n{{ observation_1 }},\n\nand:\n\n{{ observation_2 }} \n\
+ \nWhich one is more likely?\n\nThe first option: \n\n{{ answer_choices[0] }},\
+ \ \n\nor the second option:\n\n{{ answer_choices[1] }}?\n|||\n{{ answer_choices[label-1]\
+ \ }}"
+ metadata: !TemplateMetadata
+ choices_in_prompt: true
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: true
+ name: choose_hypothesis_options
+ reference: ''
+ a090e019-1b98-4863-ab5d-ff9772f682d6: !Template
+ answer_choices: '{{hypothesis_1| trim(''.?!'') }} ||| {{hypothesis_2| trim(''.?!'')
+ }}'
+ id: a090e019-1b98-4863-ab5d-ff9772f682d6
+ jinja: 'You know the following:
+
+
+ {{ observation_1 }} {{ observation_2 }}
+
+
+ Which one is more believable?
+
+
+ - {{ answer_choices[0] }}
+
+ - {{ answer_choices[1] }}
+
+
+ |||
+
+
+ {{ answer_choices[label-1] }}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: true
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: true
+ name: choose_hypothesis_believable
+ reference: ''
+ bf8a5b8a-70cb-4b27-82db-8ca4fbd2318d: !Template
+ answer_choices: '{{hypothesis_1| trim(''.?!'') }} ||| {{hypothesis_2| trim(''.?!'')
+ }}'
+ id: bf8a5b8a-70cb-4b27-82db-8ca4fbd2318d
+ jinja: '{{ observation_1 }} {{ observation_2 }}
+
+
+ Would you rather believe that:
+
+
+ {{ answer_choices[0] }},
+
+
+ or:
+
+
+ {{ answer_choices[1] }}?
+
+ |||
+
+ {{ answer_choices[label-1] }}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: true
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: true
+ name: choose_hypothesis
+ reference: ''
+ d418b574-9d0a-4d29-a518-7d9a5f5a4a3d: !Template
+ answer_choices: '{{hypothesis_1| trim(''.?!'') }} ||| {{hypothesis_2| trim(''.?!'')
+ }}'
+ id: d418b574-9d0a-4d29-a518-7d9a5f5a4a3d
+ jinja: "Which of the following better fits the description?\n\nIs it that: \n\n\
+ {{ answer_choices[0] }},\n\nor rather: \n\n{{ answer_choices[1] }}?\n\nDescription:\
+ \ \n\n{{ observation_1 }} {{ observation_2 }}\n|||\n{{ answer_choices[label-1]\
+ \ }}"
+ metadata: !TemplateMetadata
+ choices_in_prompt: true
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: true
+ name: choose_hypothesis_desc
+ reference: ''
+ eb0baa43-3c79-4d1d-973a-37e0055bbfec: !Template
+ answer_choices: '{{hypothesis_1| trim(''.?!'') }} ||| {{hypothesis_2| trim(''.?!'')
+ }}'
+ id: eb0baa43-3c79-4d1d-973a-37e0055bbfec
+ jinja: 'Which version is more likely?
+
+
+ The first one:
+
+
+ {{ answer_choices[0] }},
+
+
+ or the second one:
+
+
+ {{ answer_choices[1] }}?
+
+
+ Assuming that:
+
+
+ {{ observation_1 }} {{ observation_2 }}
+
+ |||
+
+ {{ answer_choices[label-1] }}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: true
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: true
+ name: choose_hypothesis_likely
+ reference: ''
promptsource/templates/asnq/templates.yaml ADDED
@@ -0,0 +1,211 @@
+ dataset: asnq
+ templates:
+ 0e06d340-6d2c-44f7-b977-604925773f0b: !Template
+ answer_choices: No ||| Yes
+ id: 0e06d340-6d2c-44f7-b977-604925773f0b
+ jinja: "Question: {{question}} \nSentence: {{sentence}} \nAre the question and\
+ \ the sentence positive pairs, where a positive pair means that the sentence answers\
+ \ the question? ||| {{answer_choices[label]}}"
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: true
+ name: positive_pairs
+ reference: ''
+ 55f386ba-9a86-405e-a805-152e254a4205: !Template
+ answer_choices: null
+ id: 55f386ba-9a86-405e-a805-152e254a4205
+ jinja: "{% if label == 1 %}\n\nWhat is a question that someone might ask that\
+ \ the following sentence can answer?\n\n {{sentence}}\n\n|||\n\n{{question}}\n\
+ {% endif %}\n"
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - BLEU
+ - ROUGE
+ original_task: false
+ name: question_from_sentence
+ reference: ''
+ 5b6abb0a-1b4f-4338-aab6-430465669164: !Template
+ answer_choices: null
+ id: 5b6abb0a-1b4f-4338-aab6-430465669164
+ jinja: '{% if label == 1 %}
+
+
+ Write a question based on this sentence: {{sentence}}
+
+
+ |||
+
+
+ {{question}}
+
+ {% endif %}
+
+ '
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - BLEU
+ - ROUGE
+ original_task: false
+ name: write_question
+ reference: ''
+ 684aea91-34c4-47de-a61f-7cc9a182b657: !Template
+ answer_choices: No ||| Yes
+ id: 684aea91-34c4-47de-a61f-7cc9a182b657
+ jinja: Can the answer "{{sentence}}" be inferred from the question "{{question}}"?
+ ||| {{answer_choices[label]}}
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: true
+ name: answer_infer_question
+ reference: ''
+ 719306b9-5dc8-46c7-b693-9b2edc2e09f2: !Template
+ answer_choices: No ||| Yes
+ id: 719306b9-5dc8-46c7-b693-9b2edc2e09f2
+ jinja: Does this sentence "{{sentence}}" answer this question "{{question}}"?
+ ||| {{answer_choices[label]}}
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: true
+ name: Does_sentence_answer_question
+ reference: ''
+ 859ec580-957b-42da-be1b-c3ccb8b52d24: !Template
+ answer_choices: null
+ id: 859ec580-957b-42da-be1b-c3ccb8b52d24
+ jinja: '{% if label == 1 %}
+
+
+ Generate a one-sentence answer to the following question: {{question}}?
+
+
+ |||
+
+
+ {{sentence}}
+
+ {% endif %}
+
+ '
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - BLEU
+ - ROUGE
+ original_task: false
+ name: answer question with a sentence
+ reference: ''
+ 85da6666-9e50-4122-84c8-d00b90967475: !Template
+ answer_choices: null
+ id: 85da6666-9e50-4122-84c8-d00b90967475
+ jinja: '{% if label == 1 %}
+
+
+ Given the following question: {{question}}? Can you give me a full sentence
+ answer?
+
+
+ |||
+
+
+ {{sentence}}
+
+ {% endif %}
+
+ '
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - BLEU
+ - ROUGE
+ original_task: false
+ name: give me a full sentence answer
+ reference: ''
+ 85fe8aaa-83c5-41ec-ada5-0e6d60bab1f9: !Template
+ answer_choices: null
+ id: 85fe8aaa-83c5-41ec-ada5-0e6d60bab1f9
+ jinja: '{% if label == 1 %}
+
+ Answer this question as a full sentence: {{question}}?
+
+ |||
+
+ {{sentence}}
+
+ {% endif %}
+
+ '
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - BLEU
+ - ROUGE
+ original_task: false
+ name: answer question as a sentence
+ reference: ''
+ 95e39e1d-a830-4b6c-bd2a-10fe51552427: !Template
+ answer_choices: No ||| Yes
+ id: 95e39e1d-a830-4b6c-bd2a-10fe51552427
+ jinja: 'Can this question: "{{question}}" be answered as follows: "{{sentence}}"?
+ Please answer yes or no. ||| {{answer_choices[label]}}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: true
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: true
+ name: yes_vs_no
+ reference: ''
+ a36d6152-72c4-4278-8266-d27b28667f61: !Template
+ answer_choices: null
+ id: a36d6152-72c4-4278-8266-d27b28667f61
+ jinja: "{% if label == 1 %}\n\nHere is a sentence:\n\n {{sentence}}\n\nWrite a\
+ \ question to which this sentence is an answer.\n\n|||\n\n{{question}}\n{% endif\
+ \ %}\n"
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - BLEU
+ - ROUGE
+ original_task: false
+ name: write_a_question
+ reference: ''
+ a7927e90-1a9b-49e2-a2f8-5ac9e6d286cb: !Template
+ answer_choices: No ||| Yes
+ id: a7927e90-1a9b-49e2-a2f8-5ac9e6d286cb
+ jinja: 'Does the following sentence "{{sentence}}" seem like a right answer for
+ the following question: {{question}} ||| {{answer_choices[label]}}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: true
+ name: right_answer
+ reference: ''
promptsource/templates/asset/ratings/templates.yaml ADDED
@@ -0,0 +1,119 @@
+ dataset: asset
+ subset: ratings
+ templates:
+ 09b2a13b-cba6-4473-8a46-3fa24be71ce2: !Template
+ answer_choices: No ||| Yes
+ id: 09b2a13b-cba6-4473-8a46-3fa24be71ce2
+ jinja: "{% set label = None %}\n{% set questions = None %}\n{% if rating > 50\
+ \ %}\n{% set label = 1 %}\n{% else %}\n{% set label = 0 %}\n{% endif %}\n{%\
+ \ set questions= [ \"Does the second sentence better convey the information?\"\
+ , \"Is the second sentence more fluent?\", \"Is the second sentence simpler?\"\
+ ] %}\n\nFirst sentence: {{original}}\n\nSecond sentence: {{simplification}}\n\
+ \n{{questions[aspect]}}. Please answer Yes or No. \n|||\n{{answer_choices[label]}}\n"
+ metadata: !TemplateMetadata
+ choices_in_prompt: true
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: true
+ name: rate-binary
+ reference: Taking questions from the original paper, we use rating to establish
+ a binary classification problem
+ 47142040-4121-4144-98b9-61cb5cbb1313: !Template
+ answer_choices: null
+ id: 47142040-4121-4144-98b9-61cb5cbb1313
+ jinja: 'First sentence: {{original}}
+
+ Second sentence: {{simplification}}
+
+ I am scoring these simplification exercises. How much easier to read is the second
+ sentence, on a scale from 0 (harder to read) to 100 (easier to read)?
+
+ |||
+
+ {{rating}}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - Other
+ original_task: true
+ name: rate-regression-simplicity
+ reference: Prompt model to rate how simplified the sentence is in the general
+ sense, instead of a particular aspect. This is a regression task whose range
+ is from 0 to 100.
+ 7dd6e8b6-eae0-40c5-aa5e-1cc24357d85d: !Template
+ answer_choices: null
+ id: 7dd6e8b6-eae0-40c5-aa5e-1cc24357d85d
+ jinja: '{% set label = None %}
+
+ {% set questions = None %}
+
+ {% if rating > 50 %}
+
+ {% set label = 1 %}
+
+ {% else %}
+
+ {% set label = 0 %}
+
+ {% endif %}
+
+ {% if label == 1 %}
+
+ {% set questions= [ "Rewrite the following sentence so that it conveys the information
+ better.", "Rewrite the following sentence so that it is more fluent.", "Rewrite
+ the following sentence so that it is simpler."] %}
+
+ {% else %}
+
+ {% set questions= [ "Rewrite the following sentence so that it conveys the information
+ more poorly.", "Rewrite the following sentence so that it is less fluent.",
+ "Rewrite the following sentence so that it is more complicated."] %}
+
+ {% endif %}
+
+ {{questions[aspect]}}
+
+
+ {{original}}
+
+ |||
+
+ {{simplification}}
+
+ '
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - BLEU
+ - ROUGE
+ original_task: false
+ name: generate-text-based-on-rating
+ reference: ''
+ d2bed959-29ab-4962-a106-dc91c00f3f03: !Template
+ answer_choices: null
+ id: d2bed959-29ab-4962-a106-dc91c00f3f03
+ jinja: "{% set statements= [ \"the second sentence expresses the underlying meaning\
+ \ the best.\", \"the second sentence is more fluent.\", \"the second sentence\
+ \ is simpler.\"] %}\n\nFirst sentence: {{original}}\n\nSecond sentence: {{simplification}}\n\
+ \nRate the following statement from 0 (strongly disagree) to 100 (strongly agree):\
+ \ {{statements[aspect]}} \n\n|||\n{{rating}}"
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - Other
+ original_task: true
+ name: rate-regression
+ reference: Require the model to output the rating. This is a regression task whose
+ range is from 0 to 100.
promptsource/templates/asset/simplification/templates.yaml ADDED
@@ -0,0 +1,168 @@
+ dataset: asset
+ subset: simplification
+ templates:
+ 0f0e55f9-28b4-4844-b65d-b9544a0918eb: !Template
+ answer_choices: null
+ id: 0f0e55f9-28b4-4844-b65d-b9544a0918eb
+ jinja: "{% set real_simplifications = [] %}{% for text in simplifications %}{%\
+ \ if text|length < original|length %}{{real_simplifications.append(text) | default(\"\
+ \", True)}}{% endif %}{% endfor %}\n{% if real_simplifications %}\nText: {{original}}\n\
+ \nHow would I simplify this? \n\n|||\n\n{{real_simplifications | choice}}\n\
+ {% endif %}"
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - BLEU
+ - ROUGE
+ original_task: true
+ name: verbose-to-simplification
+ reference: Rewrite text using one random simplification
+ 3cbfbc1c-6876-4dd7-b7db-45fb3233a667: !Template
+ answer_choices: null
+ id: 3cbfbc1c-6876-4dd7-b7db-45fb3233a667
+ jinja: '{% set real_simplifications = [] %}{% for text in simplifications %}{%
+ if text|length < original|length %}{{real_simplifications.append(text) | default("",
+ True)}}{% endif %}{% endfor %}
+
+ {% if real_simplifications %}
+
+ Make the below sentence more verbose:
+
+
+ {{real_simplifications | choice}}
+
+
+ |||
+
+
+ {{original}}
+
+ {% endif %}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - BLEU
+ - ROUGE
+ original_task: false
+ name: simplification-to-verbose
+ reference: Make the simplified text more verbose
+ 41d32553-433c-44fb-9eda-0fce51bf9e14: !Template
+ answer_choices: A ||| B
+ id: 41d32553-433c-44fb-9eda-0fce51bf9e14
+ jinja: '{% set rand_num = range(0,2) | choice %}
+
+ {% set real_simplifications = [] %}{% for text in simplifications %}{% if text|length
+ < original|length %}{{real_simplifications.append(text) | default("", True)}}{%
+ endif %}{% endfor %}
+
+ {% if real_simplifications %}
+
+ One of the following two sentences is more verbose than the other. Which one
+ is it?
+
+ {% if rand_num %}
+
+ A: {{real_simplifications | choice}}
+
+
+ B: {{original}}
+
+ {% else %}
+
+ A: {{original}}
+
+
+ B: {{real_simplifications | choice}}
+
+ {% endif %}
+
+ |||
+
+ {{ answer_choices[rand_num] }}
+
+ {% endif %}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: true
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: false
+ name: choose-verbose
+ reference: ''
+ 5c2f56b9-5bd4-4455-9d68-0729bfdb9c84: !Template
+ answer_choices: A ||| B
+ id: 5c2f56b9-5bd4-4455-9d68-0729bfdb9c84
+ jinja: '{% set rand_num = range(0,2) | choice %}
+
+ {% set real_simplifications = [] %}{% for text in simplifications %}{% if text|length
+ < original|length %}{{real_simplifications.append(text) | default("", True)}}{%
+ endif %}{% endfor %}
+
+ {% if real_simplifications %}
+
+ One of the following two sentences is simpler than the other. Which one
+ is it?
+
+ {% if rand_num %}
+
+ A: {{real_simplifications | choice}}
+
+
+ B: {{original}}
+
+ {% else %}
+
+ A: {{original}}
+
+
+ B: {{real_simplifications | choice}}
+
+ {% endif %}
+
+ |||
+
+ {{ answer_choices[1-rand_num] }}
+
+ {% endif %}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: true
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: true
+ name: choose-simplification
+ reference: ''
+ d528d74b-bbc2-4888-ae21-db0ab37304df: !Template
+ answer_choices: null
+ id: d528d74b-bbc2-4888-ae21-db0ab37304df
+ jinja: '{% set real_simplifications = [] %}{% for text in simplifications %}{%
+ if text|length < original|length %}{{real_simplifications.append(text) | default("",
+ True)}}{% endif %}{% endfor %}
+
+ {% if real_simplifications %}
+
+ I''d like to explain to my child "{{original}}". How would I do so?
+
+
+ |||
+
+
+ {{real_simplifications | choice}}
+
+ {% endif %}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - BLEU
+ - ROUGE
+ original_task: true
+ name: verbose-to-simplification-implicit
+ reference: Implicit simplification request
promptsource/templates/banking77/templates.yaml ADDED
@@ -0,0 +1,288 @@
+ dataset: banking77
+ templates:
+ 0dba8abc-248a-44db-bb86-20492ffc17f6: !Template
+ answer_choices: activate my card|||age limit|||apple pay or google pay|||atm support|||automatic
+ top up|||balance not updated after bank transfer|||balance not updated after
+ cheque or cash deposit|||beneficiary not allowed|||cancel transfer|||card about
+ to expire|||card acceptance|||card arrival|||card delivery estimate|||card linking|||card
+ not working|||card payment fee charged|||card payment not recognised|||card
+ payment wrong exchange rate|||card swallowed|||cash withdrawal charge|||cash
+ withdrawal not recognised|||change pin|||compromised card|||contactless not
+ working|||country support|||declined card payment|||declined cash withdrawal|||declined
+ transfer|||direct debit payment not recognised|||disposable card limits|||edit
+ personal details|||exchange charge|||exchange rate|||exchange via app|||extra
+ charge on statement|||failed transfer|||fiat currency support|||get disposable
+ virtual card|||get physical card|||getting spare card|||getting virtual card|||lost
+ or stolen card|||lost or stolen phone|||order physical card|||passcode forgotten|||pending
+ card payment|||pending cash withdrawal|||pending top up|||pending transfer|||pin
+ blocked|||receiving money|||Refund not showing up|||request refund|||reverted
+ card payment?|||supported cards and currencies|||terminate account|||top up
+ by bank transfer charge|||top up by card charge|||top up by cash or cheque|||top
+ up failed|||top up limits|||top up reverted|||topping up by card|||transaction
+ charged twice|||transfer fee charged|||transfer into account|||transfer not
+ received by recipient|||transfer timing|||unable to verify identity|||verify
+ my identity|||verify source of funds|||verify top up|||virtual card not working|||visa
+ or mastercard|||why verify identity|||wrong amount of cash received|||wrong
+ exchange rate for cash withdrawal
+ id: 0dba8abc-248a-44db-bb86-20492ffc17f6
+ jinja: 'Which help page can provide information regarding this
+ query?
+
+ Query: {{text}} |||
+
+ {{answer_choices[label]}}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: true
+ name: help_page_topic
+ reference: ''
+ 2520f6d0-fcdf-44b6-abb3-a76e44948047: !Template
+ answer_choices: activate my card|||age limit|||apple pay or google pay|||atm support|||automatic
+ top up|||balance not updated after bank transfer|||balance not updated after
+ cheque or cash deposit|||beneficiary not allowed|||cancel transfer|||card about
+ to expire|||card acceptance|||card arrival|||card delivery estimate|||card linking|||card
+ not working|||card payment fee charged|||card payment not recognised|||card
+ payment wrong exchange rate|||card swallowed|||cash withdrawal charge|||cash
+ withdrawal not recognised|||change pin|||compromised card|||contactless not
+ working|||country support|||declined card payment|||declined cash withdrawal|||declined
+ transfer|||direct debit payment not recognised|||disposable card limits|||edit
+ personal details|||exchange charge|||exchange rate|||exchange via app|||extra
+ charge on statement|||failed transfer|||fiat currency support|||get disposable
+ virtual card|||get physical card|||getting spare card|||getting virtual card|||lost
+ or stolen card|||lost or stolen phone|||order physical card|||passcode forgotten|||pending
+ card payment|||pending cash withdrawal|||pending top up|||pending transfer|||pin
+ blocked|||receiving money|||Refund not showing up|||request refund|||reverted
+ card payment?|||supported cards and currencies|||terminate account|||top up
+ by bank transfer charge|||top up by card charge|||top up by cash or cheque|||top
+ up failed|||top up limits|||top up reverted|||topping up by card|||transaction
+ charged twice|||transfer fee charged|||transfer into account|||transfer not
+ received by recipient|||transfer timing|||unable to verify identity|||verify
+ my identity|||verify source of funds|||verify top up|||virtual card not working|||visa
+ or mastercard|||why verify identity|||wrong amount of cash received|||wrong
+ exchange rate for cash withdrawal
+ id: 2520f6d0-fcdf-44b6-abb3-a76e44948047
+ jinja: 'To which department in the bank can this query be directed?
+
+ Query: {{text}}
+
+ ||| {{answer_choices[label]}}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: true
+ name: direct_to_which_department
+ reference: ''
+ 9482bce0-f201-451b-9384-af588d707629: !Template
+ answer_choices: activate my card|||age limit|||apple pay or google pay|||atm support|||automatic
+ top up|||balance not updated after bank transfer|||balance not updated after
+ cheque or cash deposit|||beneficiary not allowed|||cancel transfer|||card about
+ to expire|||card acceptance|||card arrival|||card delivery estimate|||card linking|||card
+ not working|||card payment fee charged|||card payment not recognised|||card
+ payment wrong exchange rate|||card swallowed|||cash withdrawal charge|||cash
+ withdrawal not recognised|||change pin|||compromised card|||contactless not
+ working|||country support|||declined card payment|||declined cash withdrawal|||declined
+ transfer|||direct debit payment not recognised|||disposable card limits|||edit
+ personal details|||exchange charge|||exchange rate|||exchange via app|||extra
+ charge on statement|||failed transfer|||fiat currency support|||get disposable
+ virtual card|||get physical card|||getting spare card|||getting virtual card|||lost
+ or stolen card|||lost or stolen phone|||order physical card|||passcode forgotten|||pending
+ card payment|||pending cash withdrawal|||pending top up|||pending transfer|||pin
+ blocked|||receiving money|||Refund not showing up|||request refund|||reverted
+ card payment?|||supported cards and currencies|||terminate account|||top up
+ by bank transfer charge|||top up by card charge|||top up by cash or cheque|||top
+ up failed|||top up limits|||top up reverted|||topping up by card|||transaction
+ charged twice|||transfer fee charged|||transfer into account|||transfer not
+ received by recipient|||transfer timing|||unable to verify identity|||verify
+ my identity|||verify source of funds|||verify top up|||virtual card not working|||visa
+ or mastercard|||why verify identity|||wrong amount of cash received|||wrong
+ exchange rate for cash withdrawal
+ id: 9482bce0-f201-451b-9384-af588d707629
+ jinja: 'To which of the following departments in the bank can the given query
+ be directed?
+
+ Query: {{text}} Departments:
+
+ {% for intent in answer_choices %}
+
+ - {{intent}} {% endfor %}
+
+ |||
+
+ {{answer_choices[label]}}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: true
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: true
+ name: choose the correct department
+ reference: ''
+ e629d77c-46f9-4e00-b23a-c522d07a9943: !Template
+ answer_choices: activate my card|||age limit|||apple pay or google pay|||atm support|||automatic
+ top up|||balance not updated after bank transfer|||balance not updated after
+ cheque or cash deposit|||beneficiary not allowed|||cancel transfer|||card about
+ to expire|||card acceptance|||card arrival|||card delivery estimate|||card linking|||card
+ not working|||card payment fee charged|||card payment not recognised|||card
+ payment wrong exchange rate|||card swallowed|||cash withdrawal charge|||cash
+ withdrawal not recognised|||change pin|||compromised card|||contactless not
+ working|||country support|||declined card payment|||declined cash withdrawal|||declined
+ transfer|||direct debit payment not recognised|||disposable card limits|||edit
+ personal details|||exchange charge|||exchange rate|||exchange via app|||extra
+ charge on statement|||failed transfer|||fiat currency support|||get disposable
+ virtual card|||get physical card|||getting spare card|||getting virtual card|||lost
+ or stolen card|||lost or stolen phone|||order physical card|||passcode forgotten|||pending
+ card payment|||pending cash withdrawal|||pending top up|||pending transfer|||pin
+ blocked|||receiving money|||Refund not showing up|||request refund|||reverted
+ card payment?|||supported cards and currencies|||terminate account|||top up
+ by bank transfer charge|||top up by card charge|||top up by cash or cheque|||top
+ up failed|||top up limits|||top up reverted|||topping up by card|||transaction
+ charged twice|||transfer fee charged|||transfer into account|||transfer not
+ received by recipient|||transfer timing|||unable to verify identity|||verify
+ my identity|||verify source of funds|||verify top up|||virtual card not working|||visa
+ or mastercard|||why verify identity|||wrong amount of cash received|||wrong
+ exchange rate for cash withdrawal
+ id: e629d77c-46f9-4e00-b23a-c522d07a9943
+ jinja: "Summarise the following query in the form of key banking terms: \n{{text}}\n\
+ |||\n{{answer_choices[label]}}"
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: true
+ name: rephrase_as_banking_term
+ reference: ''
+ edd67883-0386-4496-af7f-37a44c41293f: !Template
+ answer_choices: activate my card|||age limit|||apple pay or google pay|||atm support|||automatic
+ top up|||balance not updated after bank transfer|||balance not updated after
+ cheque or cash deposit|||beneficiary not allowed|||cancel transfer|||card about
+ to expire|||card acceptance|||card arrival|||card delivery estimate|||card linking|||card
+ not working|||card payment fee charged|||card payment not recognised|||card
+ payment wrong exchange rate|||card swallowed|||cash withdrawal charge|||cash
+ withdrawal not recognised|||change pin|||compromised card|||contactless not
+ working|||country support|||declined card payment|||declined cash withdrawal|||declined
+ transfer|||direct debit payment not recognised|||disposable card limits|||edit
+ personal details|||exchange charge|||exchange rate|||exchange via app|||extra
+ charge on statement|||failed transfer|||fiat currency support|||get disposable
+ virtual card|||get physical card|||getting spare card|||getting virtual card|||lost
+ or stolen card|||lost or stolen phone|||order physical card|||passcode forgotten|||pending
+ card payment|||pending cash withdrawal|||pending top up|||pending transfer|||pin
+ blocked|||receiving money|||Refund not showing up|||request refund|||reverted
+ card payment?|||supported cards and currencies|||terminate account|||top up
+ by bank transfer charge|||top up by card charge|||top up by cash or cheque|||top
+ up failed|||top up limits|||top up reverted|||topping up by card|||transaction
+ charged twice|||transfer fee charged|||transfer into account|||transfer not
+ received by recipient|||transfer timing|||unable to verify identity|||verify
+ my identity|||verify source of funds|||verify top up|||virtual card not working|||visa
+ or mastercard|||why verify identity|||wrong amount of cash received|||wrong
+ exchange rate for cash withdrawal
+ id: edd67883-0386-4496-af7f-37a44c41293f
+ jinja: 'Which of the following intents best represents this banking query?
+
+ Text: {{text}}
+
+ Intents:
+
+ {% for intent in answer_choices %}
+
+ - {{intent}} {% endfor %}
+
+ |||
+
+ {{answer_choices[label]}}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: true
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: true
+ name: choose_the_correct_intent
+ reference: ''
+ eee2366a-8f0c-4ac3-b9cc-aa038e40f8cb: !Template
+ answer_choices: activate my card|||age limit|||apple pay or google pay|||atm support|||automatic
+ top up|||balance not updated after bank transfer|||balance not updated after
+ cheque or cash deposit|||beneficiary not allowed|||cancel transfer|||card about
+ to expire|||card acceptance|||card arrival|||card delivery estimate|||card linking|||card
+ not working|||card payment fee charged|||card payment not recognised|||card
+ payment wrong exchange rate|||card swallowed|||cash withdrawal charge|||cash
+ withdrawal not recognised|||change pin|||compromised card|||contactless not
+ working|||country support|||declined card payment|||declined cash withdrawal|||declined
+ transfer|||direct debit payment not recognised|||disposable card limits|||edit
+ personal details|||exchange charge|||exchange rate|||exchange via app|||extra
+ charge on statement|||failed transfer|||fiat currency support|||get disposable
+ virtual card|||get physical card|||getting spare card|||getting virtual card|||lost
+ or stolen card|||lost or stolen phone|||order physical card|||passcode forgotten|||pending
+ card payment|||pending cash withdrawal|||pending top up|||pending transfer|||pin
+ blocked|||receiving money|||Refund not showing up|||request refund|||reverted
+ card payment?|||supported cards and currencies|||terminate account|||top up
+ by bank transfer charge|||top up by card charge|||top up by cash or cheque|||top
+ up failed|||top up limits|||top up reverted|||topping up by card|||transaction
+ charged twice|||transfer fee charged|||transfer into account|||transfer not
+ received by recipient|||transfer timing|||unable to verify identity|||verify
+ my identity|||verify source of funds|||verify top up|||virtual card not working|||visa
+ or mastercard|||why verify identity|||wrong amount of cash received|||wrong
+ exchange rate for cash withdrawal
+ id: eee2366a-8f0c-4ac3-b9cc-aa038e40f8cb
+ jinja: 'What is the intent of this banking query?
+
+ {{text}} |||
+
+ {{answer_choices[label]}}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: true
+ name: what_is_intent
+ reference: ''
+ f4e80455-1523-4b91-aacc-249d8c6f0f2a: !Template
+ answer_choices: activate my card|||age limit|||apple pay or google pay|||atm support|||automatic
+ top up|||balance not updated after bank transfer|||balance not updated after
+ cheque or cash deposit|||beneficiary not allowed|||cancel transfer|||card about
+ to expire|||card acceptance|||card arrival|||card delivery estimate|||card linking|||card
+ not working|||card payment fee charged|||card payment not recognised|||card
+ payment wrong exchange rate|||card swallowed|||cash withdrawal charge|||cash
+ withdrawal not recognised|||change pin|||compromised card|||contactless not
+ working|||country support|||declined card payment|||declined cash withdrawal|||declined
+ transfer|||direct debit payment not recognised|||disposable card limits|||edit
+ personal details|||exchange charge|||exchange rate|||exchange via app|||extra
+ charge on statement|||failed transfer|||fiat currency support|||get disposable
+ virtual card|||get physical card|||getting spare card|||getting virtual card|||lost
+ or stolen card|||lost or stolen phone|||order physical card|||passcode forgotten|||pending
+ card payment|||pending cash withdrawal|||pending top up|||pending transfer|||pin
+ blocked|||receiving money|||Refund not showing up|||request refund|||reverted
+ card payment?|||supported cards and currencies|||terminate account|||top up
+ by bank transfer charge|||top up by card charge|||top up by cash or cheque|||top
+ up failed|||top up limits|||top up reverted|||topping up by card|||transaction
+ charged twice|||transfer fee charged|||transfer into account|||transfer not
+ received by recipient|||transfer timing|||unable to verify identity|||verify
+ my identity|||verify source of funds|||verify top up|||virtual card not working|||visa
+ or mastercard|||why verify identity|||wrong amount of cash received|||wrong
+ exchange rate for cash withdrawal
+ id: f4e80455-1523-4b91-aacc-249d8c6f0f2a
+ jinja: 'Generate the subject for an email containing the following text:
+
+ {{text}} |||
+
+ {{answer_choices[label]}}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: true
+ name: generate_subject_for_text
+ reference: ''
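
A note on how these YAML files are consumed: promptsource/templates.py (added earlier in this commit) exposes a DatasetTemplates collection, and each template's jinja string is rendered against a dataset example, with "|||" splitting the rendered text into input and target. A minimal sketch using the banking77 templates above, following the usage documented in README_promptsource.md (the example index is an arbitrary illustration):

```python
from datasets import load_dataset
from promptsource.templates import DatasetTemplates

# Load one banking77 example; index 0 is an arbitrary, illustrative choice.
example = load_dataset("banking77", split="train")[0]

# Fetch the prompt collection for banking77 and pick a template by name.
banking_prompts = DatasetTemplates("banking77")
template = banking_prompts["help_page_topic"]

# apply() renders the jinja string against the example and splits on "|||",
# yielding the input text and the target (here, answer_choices[label]).
input_text, target = template.apply(example)
print(input_text)
print(target)
```
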
promptsource/templates/billsum/templates.yaml ADDED
@@ -0,0 +1,153 @@
+ dataset: billsum
+ templates:
+ 0938c6e4-dbaf-43d8-8d8f-4bc62489ae74: !Template
+ answer_choices: null
+ id: 0938c6e4-dbaf-43d8-8d8f-4bc62489ae74
+ jinja: 'Given the title: "{{title}}" and the summary of a bill: {{summary}}.
+
+ Write this bill based on the title and summary.
+
+ |||
+
+ {{text}}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - BLEU
+ - ROUGE
+ original_task: false
+ name: 'Write a bill: (title, summary->text)'
+ reference: ''
+ 3c790ac3-0557-47a9-9b71-1cb435f15629: !Template
+ answer_choices: null
+ id: 3c790ac3-0557-47a9-9b71-1cb435f15629
+ jinja: "Given a state bill: {{text}}. \nPlease write the title of this bill in\
+ \ one sentence.\n|||\n{{title}}"
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - BLEU
+ - ROUGE
+ original_task: false
+ name: 'Summarize this bill in one sentence: (text-> title)'
+ reference: ''
+ 438192e5-d67a-4098-9d82-a9fe892f6be2: !Template
+ answer_choices: null
+ id: 438192e5-d67a-4098-9d82-a9fe892f6be2
+ jinja: 'Given a summary of a bill: {{summary}}.
+
+ Write this bill.
+
+ |||
+
+ {{text}}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - BLEU
+ - ROUGE
+ original_task: false
+ name: 'Write a bill: (summary-> text)'
+ reference: ''
+ 4891a8e7-258c-41e2-80d3-0c1a054acb07: !Template
+ answer_choices: null
+ id: 4891a8e7-258c-41e2-80d3-0c1a054acb07
+ jinja: 'Given a title: "{{title}}" of a bill.
+
+ Write this bill based on this title.
+
+ |||
+
+ {{text}}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - BLEU
+ - ROUGE
+ original_task: false
+ name: 'Write a bill: (title-> text)'
+ reference: ''
+ 550fa161-af4e-4430-9844-ce7dad587733: !Template
+ answer_choices: null
+ id: 550fa161-af4e-4430-9844-ce7dad587733
+ jinja: 'Given this bill: {{text}}.
+
+ Write a summary of this bill.
+
+ |||
+
+ {{summary}}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - BLEU
+ - ROUGE
+ original_task: true
+ name: 'Summarize this bill: (text-> summary)'
+ reference: ''
+ 5d2404b9-63ff-406e-977d-eda6afb5c689: !Template
+ answer_choices: null
+ id: 5d2404b9-63ff-406e-977d-eda6afb5c689
+ jinja: 'Given a summary: {{summary}}, we want to generate a title from this summary.
+
+ |||
+
+ {{title}}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - BLEU
+ - ROUGE
+ original_task: false
+ name: Generate title from summary
+ reference: ''
+ 6a439a80-4924-49e9-b5ae-f661683b399f: !Template
+ answer_choices: null
+ id: 6a439a80-4924-49e9-b5ae-f661683b399f
+ jinja: 'Summarize this US bill: {{text}}
+
+ |||
+
+ {{summary}}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - BLEU
+ - ROUGE
+ original_task: true
+ name: 'Summarize: (text -> summary)'
+ reference: ''
+ ea9f0376-6cec-450c-b258-89f479cb9f6d: !Template
+ answer_choices: null
+ id: ea9f0376-6cec-450c-b258-89f479cb9f6d
+ jinja: 'Given a summary of a bill: {{summary}}.
+
+ Please write the title of this bill.
+
+ |||
+
+ {{title}}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - BLEU
+ - ROUGE
+ original_task: false
+ name: 'Summarize: (summary -> title)'
+ reference: ''
promptsource/templates/bing_coronavirus_query_set/templates.yaml ADDED
@@ -0,0 +1,77 @@
+ dataset: bing_coronavirus_query_set
+ templates:
+ 43332782-9e92-4bb2-94bf-28759f3fe181: !Template
+ answer_choices: null
+ id: 43332782-9e92-4bb2-94bf-28759f3fe181
+ jinja: "This search query talks about the coronavirus and was published on {{Date}}.\
+ \ In what country was it issued? \n{{Query}}\n|||\n{{Country}}"
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: false
+ name: what_country
+ reference: ''
+ 68f9c063-1907-4866-ab1b-756cc57e5695: !Template
+ answer_choices: implicit ||| explicit
+ id: 68f9c063-1907-4866-ab1b-756cc57e5695
+ jinja: "The user is searching for coronavirus results on Bing.com. Is the intent\
+ \ implicit or explicit? \n{{Query}}\n|||\n{% if IsImplicitIntent == \"True\"\
+ \ %}\n{{answer_choices[0] }}\n{% else %}\n{{answer_choices[1] }}\n{% endif %}"
+ metadata: !TemplateMetadata
+ choices_in_prompt: true
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: false
+ name: is_implicit_or_explicit
+ reference: ''
+ 992d541f-9e0c-466d-b4c4-92e9e236f863: !Template
+ answer_choices: implicit ||| explicit
+ id: 992d541f-9e0c-466d-b4c4-92e9e236f863
+ jinja: "This search query about coronavirus was issued in {{Country}} on {{Date}}.\
+ \ Is the intent implicit or explicit? \n{{Query}}\n|||\n{% if IsImplicitIntent\
+ \ == \"True\" %}\n{{answer_choices[0] }}\n{% else %}\n{{answer_choices[1] }}\n\
+ {% endif %}"
+ metadata: !TemplateMetadata
+ choices_in_prompt: true
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: false
+ name: is_explicit_country_date
+ reference: ''
+ df53652c-36dc-45fe-a015-d0781e32cd33: !Template
+ answer_choices: Yes ||| No
+ id: df53652c-36dc-45fe-a015-d0781e32cd33
+ jinja: "Does this search engine query have an indirect relation to Covid-19? \n\
+ {{Query}}\n|||\n{% if IsImplicitIntent == \"True\" %}\n{{answer_choices[0] }}\n\
+ {% else %}\n{{answer_choices[1] }}\n{% endif %}"
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: false
+ name: is_implicit_query
+ reference: ''
+ df7bc2ee-686c-4826-ad84-3a056a2da4d4: !Template
+ answer_choices: No ||| Yes
+ id: df7bc2ee-686c-4826-ad84-3a056a2da4d4
+ jinja: "Does this search query on Bing.com talk about the coronavirus explicitly?\
+ \ \n{{Query}}\n|||\n{% if IsImplicitIntent == \"True\" %}\n{{answer_choices[0]\
+ \ }}\n{% else %}\n{{answer_choices[1] }}\n{% endif %}"
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: false
+ name: is_explicit_query
+ reference: ''
promptsource/templates/biosses/templates.yaml ADDED
@@ -0,0 +1,186 @@
+ dataset: biosses
+ templates:
+ 084e20ea-689d-4813-9db0-04735016aa0b: !Template
+ answer_choices: null
+ id: 084e20ea-689d-4813-9db0-04735016aa0b
+ jinja: 'How similar are the following two sentences? {{sentence1}} {{sentence2}}
+
+
+ Give the answer on a scale from 0 - 4, where 0 is "not similar at all" and 4
+ is "means the same thing". |||
+
+
+ {{(((5*score)|round)/5)}}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - Pearson Correlation
+ original_task: true
+ name: similarity with question first
+ reference: stsb template from FLAN
+ 2aa62df9-5905-4f50-baff-c11986670122: !Template
+ answer_choices: null
+ id: 2aa62df9-5905-4f50-baff-c11986670122
+ jinja: On a scale from 0 to 4, where 0 is "not similar" and 4 is "very similar",
+ how similar is the sentence "{{sentence1}}" to the sentence "{{sentence2}}"?
+ ||| {{(((5*score)|round)/5)}}
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - Pearson Correlation
+ original_task: true
+ name: compare one sentence to another
+ reference: stsb template from FLAN
+ 2ec48b7b-c2c8-4253-9c0f-b57814ba0027: !Template
+ answer_choices: null
+ id: 2ec48b7b-c2c8-4253-9c0f-b57814ba0027
+ jinja: "Sentence 1: {{sentence1}} \nSentence 2: {{sentence2}}\n\nFrom 0 to 4 (0\
+ \ = \"no meaning overlap\" and 4 = \"means the same thing\"), how similar are\
+ \ the two sentences? |||\n\n{{(((5*score)|round)/5)}}"
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - Pearson Correlation
+ original_task: true
+ name: similarity with sentences first
+ reference: stsb template from FLAN
+ 400dcb4c-8654-44aa-acec-4dbe108e34a6: !Template
+ answer_choices: null
+ id: 400dcb4c-8654-44aa-acec-4dbe108e34a6
+ jinja: '{{sentence1}} {{sentence2}}
+
+
+ On a scale from 0 to 4, where 0 is "no meaning overlap" and 4 is "means the
+ same thing", how closely does the first sentence resemble the second one? |||
+
+
+ {{(((5*score)|round)/5)}}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - Pearson Correlation
+ original_task: true
+ name: resemblance
+ reference: stsb template from FLAN
+ 5a6bc1a2-8d73-4c57-baa1-cc4b5c4dfacc: !Template
+ answer_choices: null
+ id: 5a6bc1a2-8d73-4c57-baa1-cc4b5c4dfacc
+ jinja: 'Do the following sentences say the same thing? {{sentence1}} {{sentence2}}
+
+
+ Return your answer on a scale from 0 to 4, where 0 is "not similar" and 4 is
+ "very similar". |||
+
+
+ {{(((5*score)|round)/5)}}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - Pearson Correlation
+ original_task: true
+ name: same thing scoring
+ reference: stsb template from FLAN
+ 5c53ce9b-45f6-41ab-9da7-9c24f0f6f56d: !Template
+ answer_choices: no ||| yes
+ id: 5c53ce9b-45f6-41ab-9da7-9c24f0f6f56d
+ jinja: "(1) {{sentence1}} \n(2) {{sentence2}}\n\nDo these two sentences convey\
+ \ the same information? |||\n\n{{answer_choices[0 if score < 2.5 else 1]}}"
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: false
+ name: same info binary
+ reference: paws_wiki from FLAN
+ c1b48040-b083-4501-a7ef-a21b65800eb6: !Template
+ answer_choices: null
+ id: c1b48040-b083-4501-a7ef-a21b65800eb6
+ jinja: '{{sentence1}} {{sentence2}}
+
+
+ Rate the textual similarity of these two sentences between {{"0.0"}} and
+ {{"4.0"}}, where 0 is "no relation" and 4 is "equivalent". |||
+
+
+ {{(((5*score)|round)/5)}}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - Pearson Correlation
+ original_task: true
+ name: rate with sentences first
+ reference: stsb template from FLAN
+ d52895b8-71bb-4b87-a20f-e8eae53ede92: !Template
+ answer_choices: no ||| yes
+ id: d52895b8-71bb-4b87-a20f-e8eae53ede92
+ jinja: Please check if these have the same meaning. Answer "yes" if they do, otherwise
+ "no". {{sentence1}} {{sentence2}} ||| {{answer_choices[0 if score < 2.5 else
+ 1]}}
+ metadata: !TemplateMetadata
+ choices_in_prompt: true
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: false
+ name: same meaning binary
+ reference: paws_wiki from FLAN
+ e22d8c63-3184-40df-84c2-6800960496a7: !Template
+ answer_choices: no ||| yes
+ id: e22d8c63-3184-40df-84c2-6800960496a7
+ jinja: Do "{{sentence1}}" and "{{sentence2}}" seem similar to you? ||| {{answer_choices[0
+ if score < 2.5 else 1]}}
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: false
+ name: similarity binary
+ reference: stsb_multi_mt
+ f2b20779-4ac9-41d9-9660-b9c5223fe9c1: !Template
+ answer_choices: null
+ id: f2b20779-4ac9-41d9-9660-b9c5223fe9c1
+ jinja: 'Rate the similarity of these two sentences: ({{"0.0"}} being the lowest
+ and {{"4.0"}} the highest) "{{sentence1}}" and "{{sentence2}}" |||
+
+
+ {{(((5*score)|round)/5)}}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - Pearson Correlation
+ original_task: true
+ name: rate with question first
+ reference: stsb_multi_mt
+ fc22748c-72c0-4727-bc4e-53aae4449bef: !Template
+ answer_choices: no ||| yes
+ id: fc22748c-72c0-4727-bc4e-53aae4449bef
+ jinja: Do you think "{{sentence1}}" and "{{sentence2}}" express the same thing?
+ ||| {{answer_choices[0 if score < 2.5 else 1]}}
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: false
+ name: same thing binary
+ reference: stsb_multi_mt
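
The target expression {{(((5*score)|round)/5)}} that recurs in the biosses templates above snaps the continuous 0-4 similarity score to the nearest multiple of 0.2, keeping target strings short and consistent. The equivalent arithmetic in plain Python (the score value is illustrative; note that Jinja's round filter rounds halves up, while Python's built-in round rounds halves to even, which only differs at exact .5 ties):

```python
score = 2.37  # illustrative value from the 0-4 biosses similarity range

# Same arithmetic as the Jinja expression (((5*score)|round)/5):
# scale by 5, round to the nearest integer, then scale back.
target = round(5 * score) / 5
print(target)  # 2.4
```
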
promptsource/templates/blbooksgenre/title_genre_classifiction/templates.yaml ADDED
@@ -0,0 +1,63 @@
+ dataset: blbooksgenre
+ subset: title_genre_classifiction
+ templates:
+ 0c3e83f4-7f4d-4eca-8f80-6b6bdd8eeedd: !Template
+ answer_choices: Fiction ||| Non-fiction
+ id: 0c3e83f4-7f4d-4eca-8f80-6b6bdd8eeedd
+ jinja: "Given the title: {{title}}, which of the following genres is the book?\n\
+ (a) {{ answer_choices[0] }}\n(b) {{ answer_choices[1] }}\n|||\n {{ answer_choices[label]\
+ \ }}"
+ metadata: !TemplateMetadata
+ choices_in_prompt: true
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ - AUC
+ original_task: true
+ name: multi-choice
+ reference: ''
+ 5564acb9-c911-4d71-ba4d-add444aaf1e3: !Template
+ answer_choices: True ||| False
+ id: 5564acb9-c911-4d71-ba4d-add444aaf1e3
+ jinja: "{{title}} is the title of a fictional book, True or False?\nAnswer: \n\
+ |||\n{{ answer_choices[label] }}"
+ metadata: !TemplateMetadata
+ choices_in_prompt: true
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ - AUC
+ original_task: true
+ name: premise_context_first
+ reference: ''
+ afc18daa-999d-495f-908a-d99477f6f5ac: !Template
+ answer_choices: True ||| False
+ id: afc18daa-999d-495f-908a-d99477f6f5ac
+ jinja: "The following is the title of a fictional book, True or False?\n{{title}}\n\
+ Answer: \n|||\n{{ answer_choices[label] }}"
+ metadata: !TemplateMetadata
+ choices_in_prompt: true
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ - AUC
+ original_task: true
+ name: premise
+ reference: ''
+ cf4b6ce0-ff87-4c7a-9b9e-ec7c4cf741d8: !Template
+ answer_choices: Fiction ||| Non-fiction
+ id: cf4b6ce0-ff87-4c7a-9b9e-ec7c4cf741d8
+ jinja: The genre of the book "{{title}}" is ||| {{ answer_choices[label] }}
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ - AUC
+ original_task: true
+ name: classify
+ reference: ''
promptsource/templates/blended_skill_talk/templates.yaml ADDED
@@ -0,0 +1,57 @@
+ dataset: blended_skill_talk
+ templates:
+ 54f785e9-453a-4ffe-8181-28095e3f2b80: !Template
+ answer_choices: null
+ id: 54f785e9-453a-4ffe-8181-28095e3f2b80
+ jinja: "Given the below conversation between two people, what would the listener\
+ \ say?\n\nA: {{previous_utterance[0]}}\n\nB: {{previous_utterance[1]}}\n{% for\
+ \ message_f, message_g in zip(free_messages[:-1], guided_messages[:-1]) %}\n\
+ A: {{message_f}}\n\nB: {{message_g}}\n{% endfor %} \nA: {{free_messages[-1]}}\n\
+ \nB: \n|||\n{{guided_messages[-1]}}"
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - BLEU
+ - ROUGE
+ original_task: false
+ name: guess-last-utterance
+ reference: ''
+ 58f4e068-26fa-4843-a1d6-54bde324e780: !Template
+ answer_choices: Yes ||| No
+ id: 58f4e068-26fa-4843-a1d6-54bde324e780
+ jinja: "Two people are having a conversation. Are the utterances in the correct\
+ \ order? \n\nYour answer should be either \"Yes\" or \"No\".\n{% if range(0,\
+ \ 2) | choice %}\nA: {{previous_utterance[0]}}\n\nB: {{previous_utterance[1]}}\n\
+ {% for message_f, message_g in zip(free_messages, guided_messages) %}\nA: {{message_f}}\n\
+ \nB: {{message_g}}\n{% endfor %} \n\n|||\nYes.\n{% else %}\nA: {{previous_utterance[1]}}\n\
+ \nB: {{previous_utterance[0]}}\n{% for message_f, message_g in zip(guided_messages,\
+ \ free_messages) %}\nA: {{message_f}}\n\nB: {{message_g}}\n{% endfor %} \n\n\
+ |||\nNo.\n{% endif %}"
+ metadata: !TemplateMetadata
+ choices_in_prompt: true
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: false
+ name: guess-correct-order
+ reference: ''
+ 8792b63e-7217-40fe-8130-7392baca3519: !Template
+ answer_choices: null
+ id: 8792b63e-7217-40fe-8130-7392baca3519
+ jinja: "Two people are talking to each other. What do you think Person A said\
+ \ in the beginning?\n\nPerson B: {{previous_utterance[1]}}\n{% for message_f,\
+ \ message_g in zip(free_messages, guided_messages) %}\nPerson A: {{message_f}}\n\
+ \nPerson B: {{message_g}}\n{% endfor %} \n|||\n{{previous_utterance[0]}}\n"
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - BLEU
+ - ROUGE
+ original_task: false
+ name: guess-first-utterance
+ reference: ''
promptsource/templates/cbt/CN/templates.yaml ADDED
@@ -0,0 +1,147 @@
+ dataset: cbt
+ subset: CN
+ templates:
+ 08820238-5bb3-4c7c-98bb-ec3d81e432e7: !Template
+ answer_choices: null
+ id: 08820238-5bb3-4c7c-98bb-ec3d81e432e7
+ jinja: '{{sentences | join('' '')}}
+
+
+ Write the next sentence of this story.
+
+ |||
+
+ {{ question.replace("XXXXX", answer) }}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - BLEU
+ - ROUGE
+ original_task: false
+ name: Generate Next Sentence
+ reference: Generate the next sentence given the story.
+ 1f8cad96-4c0f-435a-9a6f-653fcf158dd0: !Template
+ answer_choices: '{{options|join("|||")}}'
+ id: 1f8cad96-4c0f-435a-9a6f-653fcf158dd0
+ jinja: '{{sentences | join ('' '')}} {{question}}
+
+
+ Replace {{"XXXXX"}} with the correct option from:
+
+ {{answer_choices|join(", ")}}
+
+ |||
+
+ {{ answer }}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: true
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: true
+ name: Fill Blank with Options - Replace
+ reference: Fill the blank given the options.
+ 556ee207-18c9-4c6c-860a-8ea09b93505c: !Template
+ answer_choices: '{{options|join(''|||'')}}'
+ id: 556ee207-18c9-4c6c-860a-8ea09b93505c
+ jinja: "{{sentences | join (' ')}}\n\nIn this following sentence: \n\"{{question}}\"\
+ , \naptly substitute the {{\"XXXXX\"}} with one of the following options:\n\
+ {{answer_choices|join(\", \")}}\n|||\n{{ answer }}"
+ metadata: !TemplateMetadata
+ choices_in_prompt: true
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: true
+ name: Fill Blank with Options - In the following
+ reference: Fill in the blanks given options.
+ 63bfa7b6-b566-4693-848c-e05cd7a12a03: !Template
+ answer_choices: '{{options|join("|||")}}'
+ id: 63bfa7b6-b566-4693-848c-e05cd7a12a03
+ jinja: '{{ sentences | join('' '') }} {{question}}
+
+
+ Fill in the {{"XXXXX"}}.
+
+ |||
+
+ {{ answer }}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: true
+ name: Fill Blank without Options
+ reference: Fill in the blank without options.
+ a2e38459-90d9-4292-9d96-491ad7d4e3db: !Template
+ answer_choices: '{{options|join("|||")}}'
+ id: a2e38459-90d9-4292-9d96-491ad7d4e3db
+ jinja: '{{sentences | join ('' '')}} {{question}}
+
+
+ Which of the following options replaces {{"XXXXX"}} the best in the above story?
+
+ {{answer_choices|join(", ")}}
+
+ |||
+
+ {{ answer }}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: true
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: true
+ name: Fill Blanks with Options - above story
+ reference: Given the sentences, fill the blanks using the options.
+ a6fa37d5-899c-4ad0-b888-fab04cc8e423: !Template
+ answer_choices: '{{options|join("|||")}}'
+ id: a6fa37d5-899c-4ad0-b888-fab04cc8e423
+ jinja: '{{sentences | join ('' '')}}
+
+
+ Which of the following options replaces {{"XXXXX"}} in "{{question}}"?
+
+ {{answer_choices|join(", ")}}
+
+ |||
+
+ {{ answer }}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: true
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: true
+ name: Fill Blank with Options - Which
+ reference: Fill Blank given options.
+ a8b67815-1927-4ef3-8d04-8d3f95525ef5: !Template
+ answer_choices: '{{options|join("|||")}}'
+ id: a8b67815-1927-4ef3-8d04-8d3f95525ef5
+ jinja: '{{sentences | join ('' '')}} {{question}}
+
+
+ Fill in the {{"XXXXX"}} from the following choices:
+
+ {{answer_choices|join(", ")}}
+
+ |||
+
+ {{ answer }}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: true
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: true
+ name: Fill Blank with Options - Fill in
+ reference: Fill in the blank given options
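
Unlike banking77, where answer_choices is a fixed label list, the cbt templates above build the choices per example: answer_choices is itself a jinja expression, {{options|join("|||")}}, evaluated against the example's options field before being split on "|||". A minimal sketch of that mechanism with hand-written stand-in data (not a real cbt example):

```python
from jinja2 import Template

# Hand-written stand-in for a cbt example; field names match the YAML above.
example = {
    "question": "The XXXXX picked up the bone .",
    "options": ["dog", "bone", "tree"],
    "answer": "dog",
}

# answer_choices is rendered first, then split on the ||| separator.
rendered = Template('{{options|join("|||")}}').render(**example)
choices = rendered.split("|||")
print(choices)  # ['dog', 'bone', 'tree']
```
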
promptsource/templates/cbt/NE/templates.yaml ADDED
@@ -0,0 +1,147 @@
+ dataset: cbt
+ subset: NE
+ templates:
+ 08820238-5bb3-4c7c-98bb-ec3d81e432bb: !Template
+ answer_choices: null
+ id: 08820238-5bb3-4c7c-98bb-ec3d81e432bb
+ jinja: '{{sentences | join('' '')}}
+
+
+ Write the next sentence of this story.
+
+ |||
+
+ {{ question.replace("XXXXX", answer) }}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - BLEU
+ - ROUGE
+ original_task: false
+ name: Generate Next Sentence
+ reference: Generate the next sentence given the story.
+ 1f8cad96-4c0f-435a-9a6f-653fcf158dbb: !Template
+ answer_choices: '{{options|join("|||")}}'
+ id: 1f8cad96-4c0f-435a-9a6f-653fcf158dbb
+ jinja: '{{sentences | join ('' '')}} {{question}}
+
+
+ Replace {{"XXXXX"}} with the correct option from:
+
+ {{answer_choices|join(", ")}}
+
+ |||
+
+ {{ answer }}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: true
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: true
+ name: Fill Blank with Options - Replace
+ reference: Fill the blank given the options.
+ 556ee207-18c9-4c6c-860a-8ea09b9350bb: !Template
+ answer_choices: '{{options|join(''|||'')}}'
+ id: 556ee207-18c9-4c6c-860a-8ea09b9350bb
+ jinja: "{{sentences | join (' ')}}\n\nIn this following sentence: \n\"{{question}}\"\
+ , \naptly substitute the {{\"XXXXX\"}} with one of the following options:\n\
+ {{answer_choices|join(\", \")}}\n|||\n{{ answer }}"
+ metadata: !TemplateMetadata
+ choices_in_prompt: true
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: true
+ name: Fill Blank with Options - In the following
+ reference: Fill in the blanks given options.
+ 63bfa7b6-b566-4693-848c-e05cd7a12abb: !Template
+ answer_choices: '{{options|join("|||")}}'
+ id: 63bfa7b6-b566-4693-848c-e05cd7a12abb
+ jinja: '{{ sentences | join('' '') }} {{question}}
+
+
+ Fill in the {{"XXXXX"}}.
+
+ |||
+
+ {{ answer }}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: true
+ name: Fill Blank without Options
+ reference: Fill in the blank without options.
+ a2e38459-90d9-4292-9d96-491ad7d4e3bb: !Template
+ answer_choices: '{{options|join("|||")}}'
+ id: a2e38459-90d9-4292-9d96-491ad7d4e3bb
+ jinja: '{{sentences | join ('' '')}} {{question}}
+
+
+ Which of the following options replaces {{"XXXXX"}} the best in the above story?
+
+ {{answer_choices|join(", ")}}
+
+ |||
+
+ {{ answer }}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: true
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: true
+ name: Fill Blanks with Options - above story
+ reference: Given the sentences, fill the blanks using the options.
+ a6fa37d5-899c-4ad0-b888-fab04cc8e4bb: !Template
+ answer_choices: '{{options|join("|||")}}'
+ id: a6fa37d5-899c-4ad0-b888-fab04cc8e4bb
+ jinja: '{{sentences | join ('' '')}}
+
+
+ Which of the following options replaces {{"XXXXX"}} in "{{question}}"?
+
+ {{answer_choices|join(", ")}}
+
+ |||
+
+ {{ answer }}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: true
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: true
+ name: Fill Blank with Options - Which
+ reference: Fill Blank given options.
+ a8b67815-1927-4ef3-8d04-8d3f95525ebb: !Template
+ answer_choices: '{{options|join("|||")}}'
+ id: a8b67815-1927-4ef3-8d04-8d3f95525ebb
+ jinja: '{{sentences | join ('' '')}} {{question}}
+
+
+ Fill in the {{"XXXXX"}} from the following choices:
+
+ {{answer_choices|join(", ")}}
+
+ |||
+
+ {{ answer }}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: true
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: true
+ name: Fill Blank with Options - Fill in
+ reference: Fill in the blank given options
promptsource/templates/cbt/P/templates.yaml ADDED
@@ -0,0 +1,147 @@
+ dataset: cbt
+ subset: P
+ templates:
+ 08820238-5bb3-4c7c-98bb-ec3d81e432ea: !Template
+ answer_choices: null
+ id: 08820238-5bb3-4c7c-98bb-ec3d81e432ea
+ jinja: '{{sentences | join('' '')}}
+
+
+ Write the next sentence of this story.
+
+ |||
+
+ {{ question.replace("XXXXX", answer) }}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - BLEU
+ - ROUGE
+ original_task: false
+ name: Generate Next Sentence
+ reference: Generate the next sentence given the story.
+ 1f8cad96-4c0f-435a-9a6f-653fcf158dda: !Template
+ answer_choices: '{{options|join("|||")}}'
+ id: 1f8cad96-4c0f-435a-9a6f-653fcf158dda
+ jinja: '{{sentences | join ('' '')}} {{question}}
+
+
+ Replace {{"XXXXX"}} with the correct option from:
+
+ {{answer_choices|join(", ")}}
+
+ |||
+
+ {{ answer }}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: true
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: true
+ name: Fill Blank with Options - Replace
+ reference: Fill the blank given the options.
+ 556ee207-18c9-4c6c-860a-8ea09b93505a: !Template
+ answer_choices: '{{options|join(''|||'')}}'
+ id: 556ee207-18c9-4c6c-860a-8ea09b93505a
+ jinja: "{{sentences | join (' ')}}\n\nIn this following sentence: \n\"{{question}}\"\
+ , \naptly substitute the {{\"XXXXX\"}} with one of the following options:\n\
+ {{answer_choices|join(\", \")}}\n|||\n{{ answer }}"
+ metadata: !TemplateMetadata
+ choices_in_prompt: true
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: true
+ name: Fill Blank with Options - In the following
+ reference: Fill in the blanks given options.
+ 63bfa7b6-b566-4693-848c-e05cd7a12a0a: !Template
+ answer_choices: '{{options|join("|||")}}'
+ id: 63bfa7b6-b566-4693-848c-e05cd7a12a0a
+ jinja: '{{ sentences | join('' '') }} {{question}}
+
+
+ Fill in the {{"XXXXX"}}.
+
+ |||
+
+ {{ answer }}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: true
+ name: Fill Blank without Options
+ reference: Fill in the blank without options.
+ a2e38459-90d9-4292-9d96-491ad7d4e3da: !Template
+ answer_choices: '{{options|join("|||")}}'
+ id: a2e38459-90d9-4292-9d96-491ad7d4e3da
+ jinja: '{{sentences | join ('' '')}} {{question}}
+
+
+ Which of the following options replaces {{"XXXXX"}} the best in the above story?
+
+ {{answer_choices|join(", ")}}
+
+ |||
+
+ {{ answer }}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: true
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: true
+ name: Fill Blanks with Options - above story
+ reference: Given the sentences, fill the blanks using the options.
+ a6fa37d5-899c-4ad0-b888-fab04cc8e42a: !Template
+ answer_choices: '{{options|join("|||")}}'
+ id: a6fa37d5-899c-4ad0-b888-fab04cc8e42a
+ jinja: '{{sentences | join ('' '')}}
+
+
+ Which of the following options replaces {{"XXXXX"}} in "{{question}}"?
+
+ {{answer_choices|join(", ")}}
+
+ |||
+
+ {{ answer }}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: true
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: true
+ name: Fill Blank with Options - Which
+ reference: Fill Blank given options.
+ a8b67815-1927-4ef3-8d04-8d3f95525efa: !Template
+ answer_choices: '{{options|join("|||")}}'
+ id: a8b67815-1927-4ef3-8d04-8d3f95525efa
+ jinja: '{{sentences | join ('' '')}} {{question}}
+
+
+ Fill in the {{"XXXXX"}} from the following choices:
+
+ {{answer_choices|join(", ")}}
+
+ |||
+
+ {{ answer }}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: true
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: true
+ name: Fill Blank with Options - Fill in
+ reference: Fill in the blank given options
promptsource/templates/cbt/V/templates.yaml ADDED
@@ -0,0 +1,147 @@
+ dataset: cbt
+ subset: V
+ templates:
+ 08820238-5bb3-4c7c-98bb-ec3d81e432cc: !Template
+ answer_choices: null
+ id: 08820238-5bb3-4c7c-98bb-ec3d81e432cc
+ jinja: '{{sentences | join('' '')}}
+
+
+ Write the next sentence of this story.
+
+ |||
+
+ {{ question.replace("XXXXX", answer) }}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - BLEU
+ - ROUGE
+ original_task: false
+ name: Generate Next Sentence
+ reference: Generate the next sentence given the story.
+ 1f8cad96-4c0f-435a-9a6f-653fcf158dcc: !Template
+ answer_choices: '{{options|join("|||")}}'
+ id: 1f8cad96-4c0f-435a-9a6f-653fcf158dcc
+ jinja: '{{sentences | join ('' '')}} {{question}}
+
+
+ Replace {{"XXXXX"}} with the correct option from:
+
+ {{answer_choices|join(", ")}}
+
+ |||
+
+ {{ answer }}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: true
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: true
+ name: Fill Blank with Options - Replace
+ reference: Fill the blank given the options.
+ 556ee207-18c9-4c6c-860a-8ea09b9350cc: !Template
+ answer_choices: '{{options|join(''|||'')}}'
+ id: 556ee207-18c9-4c6c-860a-8ea09b9350cc
+ jinja: "{{sentences | join (' ')}}\n\nIn this following sentence: \n\"{{question}}\"\
+ , \naptly substitute the {{\"XXXXX\"}} with one of the following options:\n\
+ {{answer_choices|join(\", \")}}\n|||\n{{ answer }}"
+ metadata: !TemplateMetadata
+ choices_in_prompt: true
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: true
+ name: Fill Blank with Options - In the following
+ reference: Fill in the blanks given options.
+ 63bfa7b6-b566-4693-848c-e05cd7a12acc: !Template
+ answer_choices: '{{options|join("|||")}}'
+ id: 63bfa7b6-b566-4693-848c-e05cd7a12acc
+ jinja: '{{ sentences | join('' '') }} {{question}}
+
+
+ Fill in the {{"XXXXX"}}.
+
+ |||
+
+ {{ answer }}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: false
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: true
+ name: Fill Blank without Options
+ reference: Fill in the blank without options.
+ a2e38459-90d9-4292-9d96-491ad7d4e3cc: !Template
+ answer_choices: '{{options|join("|||")}}'
+ id: a2e38459-90d9-4292-9d96-491ad7d4e3cc
+ jinja: '{{sentences | join ('' '')}} {{question}}
+
+
+ Which of the following options replaces {{"XXXXX"}} the best in the above story?
+
+ {{answer_choices|join(", ")}}
+
+ |||
+
+ {{ answer }}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: true
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: true
+ name: Fill Blanks with Options - above story
+ reference: Given the sentences, fill the blanks using the options.
+ a6fa37d5-899c-4ad0-b888-fab04cc8e4cc: !Template
+ answer_choices: '{{options|join("|||")}}'
+ id: a6fa37d5-899c-4ad0-b888-fab04cc8e4cc
+ jinja: '{{sentences | join ('' '')}}
+
+
+ Which of the following options replaces {{"XXXXX"}} in "{{question}}"?
+
+ {{answer_choices|join(", ")}}
+
+ |||
+
+ {{ answer }}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: true
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: true
+ name: Fill Blank with Options - Which
+ reference: Fill Blank given options.
+ a8b67815-1927-4ef3-8d04-8d3f95525ecc: !Template
+ answer_choices: '{{options|join("|||")}}'
+ id: a8b67815-1927-4ef3-8d04-8d3f95525ecc
+ jinja: '{{sentences | join ('' '')}} {{question}}
+
+
+ Fill in the {{"XXXXX"}} from the following choices:
+
+ {{answer_choices|join(", ")}}
+
+ |||
+
+ {{ answer }}'
+ metadata: !TemplateMetadata
+ choices_in_prompt: true
+ languages:
+ - en
+ metrics:
+ - Accuracy
+ original_task: true
+ name: Fill Blank with Options - Fill in
+ reference: Fill in the blank given options