Commit e37c8bf by VictorSanh (1 parent: ab2ca50)

rename data loading script

Files changed (1)
p3.py → P3.py (renamed, +7 -1)
```diff
@@ -27,7 +27,11 @@ _CITATION = """\
 TODO"""
 
 _DESCRIPTION = """\
-TODO
+P3 is a collection of prompted English datasets covering a diverse set of NLP tasks. A prompt is the combination of an input template and a target template. The templates are functions mapping a data example into natural language for the input and target sequences. For example, in the case of an NLI dataset, the data example would include fields for *Premise, Hypothesis, Label*. An input template would be *If {Premise} is true, is it also true that {Hypothesis}?*, whereas a target template can be defined with the label choices *Choices[label]*. Here *Choices* is prompt-specific metadata that consists of the options *yes, maybe, no* corresponding to *label* being entailment (0), neutral (1) or contradiction (2).
+
+Prompts are collected using [Promptsource](https://github.com/bigscience-workshop/promptsource), an interface to interactively write prompts on datasets and to collect prompt-specific metadata such as evaluation metrics. As of October 13th, there are 2,000 prompts collected for 270+ data(sub)sets. The collection of prompts is publicly available on [Promptsource](https://github.com/bigscience-workshop/promptsource).
+
+To train [T0*](https://huggingface.co/bigscience/T0pp), we used a subset of the prompts available in Promptsource (see details [here](https://huggingface.co/bigscience/T0pp#training-data)). However, some of the prompts use `random.choice`, a method that selects uniformly at random an option in a list of valid possibilities. For reproducibility purposes, we release the collection of prompted examples used to train T0*. **The data available here is the materialized version of the prompted datasets used in [Multi-task enables task zero-shot generalization](TODO), which represents only the subset of datasets for which there is at least one prompt on Promptsource.**
 """
 
 _LICENSE = "Apache License 2.0"
```
 
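To make the template mechanics described in the new `_DESCRIPTION` concrete, here is a minimal sketch of how a prompt materializes one NLI example. The record, the plain f-string rendering, and all variable names are illustrative assumptions, not Promptsource's actual API:

```python
# Hypothetical NLI record; field and variable names are assumptions
# for illustration only.
example = {
    "premise": "The cat sat on the mat.",
    "hypothesis": "An animal is on the mat.",
    "label": 0,  # 0 = entailment, 1 = neutral, 2 = contradiction
}

# Prompt-specific metadata ("Choices" in the description above).
choices = ["yes", "maybe", "no"]

# Input template: "If {Premise} is true, is it also true that {Hypothesis}?"
inputs = (
    f"If {example['premise']} is true, "
    f"is it also true that {example['hypothesis']}?"
)
# Target template: Choices[label]
targets = choices[example["label"]]

print(inputs)   # If The cat sat on the mat. is true, is it also true that An animal is on the mat.?
print(targets)  # yes
```

The `random.choice` caveat in the description refers to templates that draw one of several valid verbalizations at random, so two prompting runs need not produce identical targets; the materialized release freezes one such run.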
 
```diff
@@ -79,6 +83,8 @@ def find_task_splits_and_features():
     """Find the available tasks under ./data and their available splits and features."""
     task_and_their_splits = defaultdict(dict)
     for stats in glob.glob(f"{_DATA_PATH}/*/stats.*.json"):
+        if "anli" not in stats:
+            continue
         folder_path = os.path.dirname(stats)
         task_name = folder_path.split("/")[-1]
         split_name = os.path.basename(stats).split(".")[1]
```
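For context on the loop the second hunk filters, here is a self-contained sketch of the directory walk, assuming the layout `data/<task>/stats.<split>.json` that the glob pattern implies; the example paths and the final dict assignment are assumptions for illustration:

```python
import glob
import os
from collections import defaultdict

_DATA_PATH = "./data"  # assumed value; the script defines its own

task_and_their_splits = defaultdict(dict)
for stats in glob.glob(f"{_DATA_PATH}/*/stats.*.json"):
    # The new guard keeps only ANLI tasks while the loader is under development.
    if "anli" not in stats:
        continue
    folder_path = os.path.dirname(stats)  # e.g. ./data/anli_r1 (hypothetical)
    task_name = folder_path.split("/")[-1]  # -> "anli_r1"
    split_name = os.path.basename(stats).split(".")[1]  # "stats.train.json" -> "train"
    task_and_their_splits[task_name][split_name] = stats

print(dict(task_and_their_splits))
```

Note that the filter matches on the full path, so any task whose folder name contains the substring "anli" survives the guard.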