librarian-bot committed
Commit ee356e1 · verified · 1 Parent(s): 6f7ae04

Librarian Bot: Add language metadata for dataset


This pull request aims to enrich your dataset's metadata by adding language metadata to the `YAML` block of your dataset card (`README.md`).

How did we find this information?

- The librarian-bot downloaded a sample of rows from your dataset using the [dataset-server](https://huggingface.co/docs/datasets-server/) library.
- The librarian-bot used a language detection model to predict the likely language of your dataset. This was done on the columns likely to contain text data.
- Per-row predictions are aggregated by language, and a filter is applied to remove languages that are predicted very infrequently.
- A confidence threshold is applied to remove languages that are not confidently predicted (a minimal sketch of this pipeline follows below).
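
For illustration, here is a minimal sketch of such a pipeline. It assumes the public `datasets-server` `/first-rows` endpoint, a `default` config, the `text` column from this dataset card, the `papluca/xlm-roberta-base-language-detection` model, and made-up thresholds; the model and thresholds librarian-bot actually uses may differ.

```python
# Rough sketch of the detection steps described above. Assumptions (not
# necessarily what librarian-bot does): the datasets-server /first-rows
# endpoint, the "default" config, the "text" column, an off-the-shelf
# language-ID model, and arbitrary filtering thresholds.
from collections import defaultdict

import requests
from transformers import pipeline

DATASET = "user/my-dataset"   # placeholder dataset id
MIN_SHARE = 0.2               # drop languages predicted for too few rows
MIN_MEAN_PROB = 0.8           # drop languages with low mean confidence

# 1. Download a sample of rows from the dataset viewer API.
resp = requests.get(
    "https://datasets-server.huggingface.co/first-rows",
    params={"dataset": DATASET, "config": "default", "split": "train"},
    timeout=30,
)
rows = resp.json()["rows"]

# 2. Run a language-identification model over the text column.
detector = pipeline(
    "text-classification",
    model="papluca/xlm-roberta-base-language-detection",
)
texts = [r["row"]["text"] for r in rows if isinstance(r["row"].get("text"), str)]
predictions = detector(texts, truncation=True)

# 3. Aggregate per-row predictions by language.
scores_by_lang = defaultdict(list)
for pred in predictions:
    scores_by_lang[pred["label"]].append(pred["score"])

# 4. Keep only languages predicted often enough and confidently enough,
#    reporting the mean probability for each surviving language.
detected = {
    lang: sum(scores) / len(scores)
    for lang, scores in scores_by_lang.items()
    if len(scores) / len(predictions) >= MIN_SHARE
    and sum(scores) / len(scores) >= MIN_MEAN_PROB
}
print(detected)  # e.g. {'th': 0.981} for a dataset like this one
```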

The following languages were detected, with their mean prediction probabilities:

- Thai (th): 98.10%


If this PR is merged, the language metadata will be added to your dataset card. This will allow users to filter datasets by language on the [Hub](https://huggingface.co/datasets).
If the language metadata is incorrect, please feel free to close this PR.
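
As an aside, the same kind of metadata change can also be made programmatically. The sketch below uses `huggingface_hub.metadata_update` with a placeholder repo id; it illustrates the idea rather than librarian-bot's exact workflow.

```python
# Sketch: add language metadata to the YAML block of a dataset card via a PR.
# "user/my-dataset" is a placeholder; this is not librarian-bot's exact code.
from huggingface_hub import metadata_update

commit_url = metadata_update(
    repo_id="user/my-dataset",
    metadata={"language": ["th"]},  # ISO 639-1 code(s) detected above
    repo_type="dataset",
    create_pr=True,                 # open a pull request instead of committing to main
    commit_message="Add language metadata for dataset",
)
print(commit_url)
```

Once such a PR is merged, the `language:` key appears in the card's YAML block, as in the diff below.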

To merge this PR, you can use the merge button below the PR:
![Screenshot 2024-02-06 at 15.27.46.png](https://cdn-uploads.huggingface.co/production/uploads/63d3e0e8ff1384ce6c5dd17d/1PRE3CoDpg_wfThC6U1w0.png)

This PR comes courtesy of [Librarian Bot](https://huggingface.co/librarian-bots). If you have any feedback, queries, or need assistance, please don't hesitate to reach out to @davanstrien.

Files changed (1)
  1. README.md +49 -48
README.md CHANGED

```diff
@@ -1,59 +1,60 @@
 ---
+language:
+- th
 dataset_info:
   features:
   - name: text
     dtype: string
   - name: label
-    dtype:
+    dtype:
       class_label:
         names:
-          "0": 0.125
-          "1": 0.25
-          "2": 0.3
-          "3": 0.4
-          "4": 0.5
-          "5": 0.75
-          "6": 0.8
-          "7": 1
-          "8": 1.25
-          "9": 1.5
-          "10": 1.75
-          "11": 2
-          "12": 2.25
-          "13": 2.5
-          "14": 2.75
-          "15": 3
-          "16": 3.5
-          "17": 3.75
-          "18": 4
-          "19": 4.5
-          "20": 5
-          "21": 6
-          "22": 7
-          "23": 7.5
-          "24": 8
-          "25": 9
-          "26": 10
-          "27": 11
-          "28": 12
-          "29": 16
-          "30": 30
-          "31": 40
-          "32": 134.2
-          "33": 135
-          "34": 250
-          "35": 340
-          "36": 400
-          "37": 600
-          "38": 631.6
-          "39": 700
-          "40": 900
-          "41": 1400
-          "42": 1800
-          "43": 1894.8
-          "44": 2000
-          "45": 3156
-
+          '0': 0.125
+          '1': 0.25
+          '2': 0.3
+          '3': 0.4
+          '4': 0.5
+          '5': 0.75
+          '6': 0.8
+          '7': 1
+          '8': 1.25
+          '9': 1.5
+          '10': 1.75
+          '11': 2
+          '12': 2.25
+          '13': 2.5
+          '14': 2.75
+          '15': 3
+          '16': 3.5
+          '17': 3.75
+          '18': 4
+          '19': 4.5
+          '20': 5
+          '21': 6
+          '22': 7
+          '23': 7.5
+          '24': 8
+          '25': 9
+          '26': 10
+          '27': 11
+          '28': 12
+          '29': 16
+          '30': 30
+          '31': 40
+          '32': 134.2
+          '33': 135
+          '34': 250
+          '35': 340
+          '36': 400
+          '37': 600
+          '38': 631.6
+          '39': 700
+          '40': 900
+          '41': 1400
+          '42': 1800
+          '43': 1894.8
+          '44': 2000
+          '45': 3156
   splits:
   - name: train
     num_bytes: 4309
```