lhoestq (HF staff) committed on
Commit
8f02fe2
1 Parent(s): bada9ae

Update datasets task tags to align tags with models (#4067)


* update tasks list

* update tags in dataset cards

* more cards updates

* update dataset tags parser

* fix multi-choice-qa

* style

* small improvements in some dataset cards

* allow certain tag fields to be empty

* update vision datasets tags

* use multi-class-image-classification and remove other tags

Commit from https://github.com/huggingface/datasets/commit/edb4411d4e884690b8b328dba4360dbda6b3cbc8
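For reference, after this commit the card's YAML front matter (only the fields touched by the diff are shown) reads:

```yaml
# Dataset card metadata after the tag alignment:
# `speech-processing` is dropped, `automatic-speech-recognition` is promoted
# to a task category, and speaker identification moves under task_ids.
source_datasets:
- original
task_categories:
- automatic-speech-recognition
- audio-classification
task_ids:
- audio-speaker-identification
```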

Files changed (1)
  1. README.md +4 -3
README.md CHANGED

@@ -23,9 +23,10 @@ size_categories:
 source_datasets:
 - original
 task_categories:
-- speech-processing
-task_ids:
 - automatic-speech-recognition
+- audio-classification
+task_ids:
+- audio-speaker-identification
 ---

 # Dataset Card for MultiLingual LibriSpeech
@@ -67,7 +68,7 @@ Multilingual LibriSpeech (MLS) dataset is a large multilingual corpus suitable f

 ### Supported Tasks and Leaderboards

-- `automatic-speech-recognition`, `speaker-identification`: The dataset can be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER). The task has an active leaderboard which can be found at https://paperswithcode.com/dataset/multilingual-librispeech and ranks models based on their WER.
+- `automatic-speech-recognition`, `audio-speaker-identification`: The dataset can be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER). The task has an active leaderboard which can be found at https://paperswithcode.com/dataset/multilingual-librispeech and ranks models based on their WER.

 ### Languages
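Tag changes like this one matter because tooling reads the card's YAML front matter programmatically. A minimal sketch of how those tag lists can be pulled out of a card, assuming the flat `key:` / `- item` shape used above (the `front_matter_lists` helper is hypothetical, not part of any Hugging Face library):

```python
# Example dataset card text, matching the post-commit front matter from the diff.
CARD = """---
source_datasets:
- original
task_categories:
- automatic-speech-recognition
- audio-classification
task_ids:
- audio-speaker-identification
---

# Dataset Card for MultiLingual LibriSpeech
"""

def front_matter_lists(card_text):
    """Parse flat 'key:' / '- item' pairs from the YAML block between --- markers.

    This is a deliberately tiny parser for flat string lists only; real cards
    should be parsed with a proper YAML library.
    """
    block = card_text.split("---")[1]
    tags, key = {}, None
    for line in block.strip().splitlines():
        if line.startswith("- ") and key is not None:
            tags[key].append(line[2:].strip())
        elif line.endswith(":"):
            key = line[:-1].strip()
            tags[key] = []
    return tags

tags = front_matter_lists(CARD)
print(tags["task_categories"])  # ['automatic-speech-recognition', 'audio-classification']
print(tags["task_ids"])         # ['audio-speaker-identification']
```

With the old metadata, `task_categories` would have contained the removed `speech-processing` tag instead, which is exactly the mismatch with model tags this commit fixes.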