lhoestq committed
Commit df1a940
1 Parent(s): 8b0ffe4

Update datasets task tags to align tags with models (#4067)


* update tasks list

* update tags in dataset cards

* more cards updates

* update dataset tags parser

* fix multi-choice-qa

* style

* small improvements in some dataset cards

* allow certain tag fields to be empty (see the sketch after this list)

* update vision datasets tags

* use multi-class-image-classification and remove other tags

Commit from https://github.com/huggingface/datasets/commit/edb4411d4e884690b8b328dba4360dbda6b3cbc8
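The "allow certain tag fields to be empty" change is what lets the card below declare `task_ids: []`. As a rough illustration only (not the actual `datasets` tag parser), a card validator that tolerates empty lists for whitelisted fields might look like the following; the `ALLOWED_EMPTY` set and the `validate_tags` helper are hypothetical names:

```python
# Illustrative sketch of a card-tag validator that permits empty lists
# for certain fields, per this commit. NOT the real `datasets` parser.
import yaml  # pip install pyyaml

CARD_HEADER = """\
task_categories:
- translation
task_ids: []
"""

# Hypothetical whitelist of tag fields that may be empty lists:
ALLOWED_EMPTY = {"task_ids"}

def validate_tags(header: str) -> None:
    meta = yaml.safe_load(header)
    for field, values in meta.items():
        if not values and field not in ALLOWED_EMPTY:
            raise ValueError(f"tag field '{field}' must be non-empty")

validate_tags(CARD_HEADER)  # passes: task_ids is allowed to be []
```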

Files changed (1)
README.md  +3  -4
README.md CHANGED
@@ -39,9 +39,8 @@ size_categories:
 source_datasets:
 - original
 task_categories:
-- conditional-text-generation
-task_ids:
-- machine-translation
+- translation
+task_ids: []
 paperswithcode_id: null
 pretty_name: Europa Education and Culture Translation Memory (EAC-TM)
 ---
@@ -88,7 +87,7 @@ To load a language pair that is not part of the config, just specify the languag
 
 ### Supported Tasks and Leaderboards
 
-- `conditional-text-generation`: the dataset can be used to train a model for `machine-translation`. Machine translation models are usually evaluated using metrics such as [BLEU](https://huggingface.co/metrics/bleu), [ROUGE](https://huggingface.co/metrics/rouge) or [SacreBLEU](https://huggingface.co/metrics/sacrebleu). You can use the [mBART](https://huggingface.co/facebook/mbart-large-cc25) model for this task. This task has active leaderboards which can be found at [https://paperswithcode.com/task/machine-translation](https://paperswithcode.com/task/machine-translation), which usually rank models based on [BLEU score](https://huggingface.co/metrics/bleu).
+- `text2text-generation`: the dataset can be used to train a model for `machine-translation`. Machine translation models are usually evaluated using metrics such as [BLEU](https://huggingface.co/metrics/bleu), [ROUGE](https://huggingface.co/metrics/rouge) or [SacreBLEU](https://huggingface.co/metrics/sacrebleu). You can use the [mBART](https://huggingface.co/facebook/mbart-large-cc25) model for this task. This task has active leaderboards which can be found at [https://paperswithcode.com/task/machine-translation](https://paperswithcode.com/task/machine-translation), which usually rank models based on [BLEU score](https://huggingface.co/metrics/bleu).
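The updated "Supported Tasks" entry describes training a translation model on this data and scoring it with SacreBLEU. A minimal sketch of that workflow, assuming the Hub id `europa_eac_tm`, an `en2fr` config, and a `translation` column (check the dataset card for the exact names):

```python
# Sketch of the workflow the card describes: load EAC-TM translation
# pairs and score a candidate translation with SacreBLEU.
# `europa_eac_tm`, `en2fr`, and the `translation` column are assumptions.
from datasets import load_dataset
import sacrebleu  # pip install sacrebleu

ds = load_dataset("europa_eac_tm", "en2fr", split="train")
pair = ds[0]["translation"]  # e.g. {"en": "...", "fr": "..."}

# Stand-in for model output: echo the reference just to show the API.
candidate = pair["fr"]
score = sacrebleu.corpus_bleu([candidate], [[pair["fr"]]])
print(f"SacreBLEU: {score.score:.1f}")  # 100.0 for an exact match
```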