lhoestq (HF staff) committed
Commit 0246001
1 Parent(s): ba19f49
1 Parent(s): ba19f49

Update datasets task tags to align tags with models (#4067)

* update tasks list

* update tags in dataset cards

* more cards updates

* update dataset tags parser

* fix multi-choice-qa

* style

* small improvements in some dataset cards

* allow certain tag fields to be empty

* update vision datasets tags

* use multi-class-image-classification and remove other tags

Commit from https://github.com/huggingface/datasets/commit/edb4411d4e884690b8b328dba4360dbda6b3cbc8
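The "update dataset tags parser" and "allow certain tag fields to be empty" steps above can be sketched as a minimal validator. This is a hypothetical reconstruction: the function name, the allow-list, and the optional-field set are illustrative and are not the actual parser from the huggingface/datasets repository.

```python
# Minimal sketch of a dataset-tag validator. The allow-list and field names
# below are illustrative assumptions, not the real parser's configuration.
KNOWN_TASK_CATEGORIES = {"question-answering", "text2text-generation"}
OPTIONAL_FIELDS = {"paperswithcode_id"}  # fields allowed to be empty

def validate_tags(tags: dict) -> list:
    """Return a list of human-readable problems found in a tag mapping."""
    errors = []
    for field, values in tags.items():
        if not values:
            # Empty fields are only tolerated if explicitly marked optional.
            if field not in OPTIONAL_FIELDS:
                errors.append(f"field '{field}' must not be empty")
            continue
        if field == "task_categories":
            for v in values:
                if v not in KNOWN_TASK_CATEGORIES:
                    errors.append(f"unknown task category: {v}")
    return errors

# The updated ELI5 category passes; a misspelled category is reported.
assert validate_tags({"task_categories": ["text2text-generation"]}) == []
assert validate_tags({"task_categories": ["txt2txt"]}) == ["unknown task category: txt2txt"]
```

A tolerant validator like this lets cards omit optional metadata (e.g. a missing `paperswithcode_id`) without failing CI, while still catching tag typos.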

Files changed (1):
  README.md +3 -3
README.md CHANGED
@@ -14,10 +14,10 @@ size_categories:
 source_datasets:
 - original
 task_categories:
-- question-answering
+- text2text-generation
 task_ids:
 - abstractive-qa
-- open-domain-qa
+- open-domain-abstractive-qa
 paperswithcode_id: eli5
 pretty_name: ELI5
 ---
@@ -61,7 +61,7 @@ The ELI5 dataset is an English-language dataset of questions and answers gathered
 
 ### Supported Tasks and Leaderboards
 
-- `abstractive-qa`, `open-domain-qa`: The dataset can be used to train a model for Open Domain Long Form Question Answering. An LFQA model is presented with a non-factoid question and asked to retrieve relevant information from a knowledge source (such as [Wikipedia](https://www.wikipedia.org/)), then use it to generate a multi-sentence answer. The model performance is measured by how high its [ROUGE](https://huggingface.co/metrics/rouge) score to the reference is. A [BART-based model](https://huggingface.co/yjernite/bart_eli5) with a [dense retriever](https://huggingface.co/yjernite/retribert-base-uncased) trained to draw information from [Wikipedia passages](https://huggingface.co/datasets/wiki_snippets) achieves a [ROUGE-L of 0.149](https://yjernite.github.io/lfqa.html#generation).
+- `abstractive-qa`, `open-domain-abstractive-qa`: The dataset can be used to train a model for Open Domain Long Form Question Answering. An LFQA model is presented with a non-factoid question and asked to retrieve relevant information from a knowledge source (such as [Wikipedia](https://www.wikipedia.org/)), then use it to generate a multi-sentence answer. The model performance is measured by how high its [ROUGE](https://huggingface.co/metrics/rouge) score to the reference is. A [BART-based model](https://huggingface.co/yjernite/bart_eli5) with a [dense retriever](https://huggingface.co/yjernite/retribert-base-uncased) trained to draw information from [Wikipedia passages](https://huggingface.co/datasets/wiki_snippets) achieves a [ROUGE-L of 0.149](https://yjernite.github.io/lfqa.html#generation).
 
 ### Languages
 
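The tags this commit changes live in the YAML front matter at the top of README.md. As a minimal pure-stdlib sketch, the metadata can be read back out like this — it handles only the flat `key:` / `- item` shape shown in the diff, not general YAML; a real pipeline would use a proper YAML parser such as PyYAML:

```python
def parse_front_matter(card_text: str) -> dict:
    """Extract the simple key/list YAML front matter between '---' fences.

    Not a general YAML parser: handles only flat scalar and list fields,
    which is all the tag block in the diff above uses.
    """
    lines = card_text.splitlines()
    assert lines[0].strip() == "---", "card must start with a front-matter fence"
    meta, key = {}, None
    for line in lines[1:]:
        if line.strip() == "---":          # closing fence ends the metadata
            break
        if line.startswith("- ") and key:  # list item under the current key
            meta[key].append(line[2:].strip())
        elif ":" in line:
            key, _, value = line.partition(":")
            key = key.strip()
            meta[key] = [value.strip()] if value.strip() else []
    return meta

card = """---
task_categories:
- text2text-generation
task_ids:
- abstractive-qa
- open-domain-abstractive-qa
pretty_name: ELI5
---
# Dataset Card for ELI5
"""
meta = parse_front_matter(card)
assert meta["task_categories"] == ["text2text-generation"]
assert meta["task_ids"] == ["abstractive-qa", "open-domain-abstractive-qa"]
assert meta["pretty_name"] == ["ELI5"]
```

Keeping these fields machine-readable is the point of the commit: aligning dataset `task_categories` with the model task taxonomy lets the Hub filter datasets and models by the same tags.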