system (HF staff) committed on
Commit f0dce71
1 Parent(s): 5881be2

Update files from the datasets library (from 1.7.0)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.7.0

Files changed (1):
  1. README.md (+17 -4)
README.md CHANGED
@@ -1,4 +1,5 @@
 ---
+paperswithcode_id: null
 ---
 
 # Dataset Card for "ted_multi"
@@ -6,12 +7,12 @@
 ## Table of Contents
 - [Dataset Description](#dataset-description)
   - [Dataset Summary](#dataset-summary)
-  - [Supported Tasks](#supported-tasks)
+  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
   - [Languages](#languages)
 - [Dataset Structure](#dataset-structure)
   - [Data Instances](#data-instances)
   - [Data Fields](#data-fields)
-  - [Data Splits Sample Size](#data-splits-sample-size)
+  - [Data Splits](#data-splits)
 - [Dataset Creation](#dataset-creation)
   - [Curation Rationale](#curation-rationale)
   - [Source Data](#source-data)
@@ -43,7 +44,7 @@ Massively multilingual (60 language) data set derived from TED Talk transcripts.
 Each record consists of parallel arrays of language and text. Missing and
 incomplete translations will be filtered out.
 
-### Supported Tasks
+### Supported Tasks and Leaderboards
 
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
@@ -81,7 +82,7 @@ The data fields are the same among all splits.
 - `translations`: a multilingual `string` variable, with possible languages including `ar`, `az`, `be`, `bg`, `bn`.
 - `talk_name`: a `string` feature.
 
-### Data Splits Sample Size
+### Data Splits
 
 | name |train |validation|test|
 |----------|-----:|---------:|---:|
@@ -95,10 +96,22 @@ The data fields are the same among all splits.
 
 ### Source Data
 
+#### Initial Data Collection and Normalization
+
+[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+#### Who are the source language producers?
+
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
 ### Annotations
 
+#### Annotation process
+
+[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+#### Who are the annotators?
+
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
 ### Personal and Sensitive Information
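
For readers who want to check the record structure the card describes (parallel `language`/`translation` arrays plus a `talk_name` string), here is a minimal sketch using the `datasets` library this commit tracks. It assumes the dataset is still loadable under the `ted_multi` identifier with Hub access available; adjust the name if it has since moved.

```python
# Minimal sketch: load ted_multi and inspect one record.
# Assumes `pip install datasets` and that the dataset is still served
# under the "ted_multi" identifier (an assumption; adjust if moved).
from datasets import load_dataset

ds = load_dataset("ted_multi", split="validation")

example = ds[0]
# `translations` holds two parallel lists: language codes and the
# corresponding translated texts, as described in the card.
languages = example["translations"]["language"]
texts = example["translations"]["translation"]
for lang, text in zip(languages, texts):
    print(f"[{lang}] {text[:60]}")

# `talk_name` is a plain string identifying the source TED talk.
print(example["talk_name"])
```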