MicPie committed
Commit 7bf4b56
1 Parent(s): ea1408e

Update README.md

Files changed (1): README.md +4 -8
README.md CHANGED
@@ -190,16 +190,13 @@ The AdapTable datasets do not come with additional data splits.
 
 ### Curation Rationale
 
- How do we convert tables to few-shot tasks?
- Unlike unstructured text, structured data in the form of tables lends itself easily to the few-shot task format. Given a table where each row is an instance of a similar class and the columns describe the attributes of each instance, we can turn each row into a task example to predict one attribute given the others. When the table has more than one row, we instantly have multiple examples of this task by using each row as a single example, and thus each table becomes a few-shot dataset for a particular task.
-
- The few-shot setting in this setup is significant: Tables often do not come with clear instructions for each field, so tasks may be underspecified if prompted in a zero-shot manner, but the intended task becomes clearer when examples are provided. This makes a good two-way match: The few-shot format is a perfect setup for table learning, and tables provide a natural dataset for few-shot training.
+ Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,350 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.
 
 ### Source Data
 
 #### Initial Data Collection and Normalization
 
- The data processing pipeline is explained in detail in our publication.
+ We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (http://webdatacommons.org/webtables/2015/EnglishStatistics.html). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.
 
 #### Who are the source language producers?
 
@@ -209,12 +206,11 @@ The dataset is extracted from [WDC Web Table Corpora](http://webdatacommons.org/
 
 #### Annotation process
 
- No manual annotation process used.
- Only for the [AdapTable-rated-low](https://huggingface.co/datasets/MicPie/adaptable_rated-low), [AdapTable-rated-medium](https://huggingface.co/datasets/MicPie/adaptable_rated-medium), and [AdapTable-rated-high](https://huggingface.co/datasets/MicPie/adaptable_rated-high) manual annotations were carried out.
+ Manual annotation was only carried out for the [AdapTable-rated-low](https://huggingface.co/datasets/MicPie/adaptable_rated-low), [AdapTable-rated-medium](https://huggingface.co/datasets/MicPie/adaptable_rated-medium), and [AdapTable-rated-high](https://huggingface.co/datasets/MicPie/adaptable_rated-high) data subsets to rate task quality. Detailed annotation instructions can be found in our publication.
 
 #### Who are the annotators?
 
- People involved in the publication.
+ Annotations were carried out by a lab assistant.
 
 ### Personal and Sensitive Information
 
 
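The dataset card text in this diff (both the removed and the added paragraphs) describes turning each row of a relational web table into one example of a few-shot task, with one column serving as the prediction target. Below is a minimal illustrative sketch of that idea, not the authors' actual pipeline: the `table_to_fewshot_task` helper, the choice of the last column as the target, and the "column: value" prompt format are assumptions made for this example.

```python
# Sketch (not the authors' exact pipeline): each table row becomes one example,
# and one column (assumed here to be the last) is treated as the target.

from typing import Dict, List


def table_to_fewshot_task(header: List[str], rows: List[List[str]]) -> List[Dict[str, str]]:
    """Turn each table row into an (input, target) example.

    The last-column target and the "col: value" prompt layout are
    illustrative assumptions, not the dataset's actual serialization.
    """
    examples = []
    for row in rows:
        inputs = [f"{col}: {val}" for col, val in zip(header[:-1], row[:-1])]
        examples.append({
            "input": " | ".join(inputs) + f" | {header[-1]}:",
            "target": row[-1],
        })
    return examples


# Toy table: every row is an instance, columns are its attributes.
header = ["country", "capital", "continent"]
rows = [
    ["France", "Paris", "Europe"],
    ["Japan", "Tokyo", "Asia"],
    ["Brazil", "Brasilia", "South America"],
]

for ex in table_to_fewshot_task(header, rows):
    print(ex["input"], "->", ex["target"])
```

Because every row yields one example, a table with several rows immediately provides the multiple demonstrations needed for a few-shot task.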