librarian-bot committed · verified · Commit 84e7b46 · Parent: 57a91d1

Librarian Bot: Add language metadata for dataset


This pull request aims to enrich your dataset's metadata by adding language metadata to the `YAML` block of your dataset card's `README.md`.

How did we find this information?

- The librarian-bot downloaded a sample of rows from your dataset using the [dataset-server](https://huggingface.co/docs/datasets-server/) library.
- The librarian-bot used a language detection model to predict the likely language of your dataset. This was done on columns likely to contain text data.
- Predictions for rows are aggregated by language, and a filter is applied to remove languages that are predicted very infrequently.
- A confidence threshold is applied to remove languages that are not confidently predicted.
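The aggregation and filtering steps above can be sketched as follows. This is an illustrative sketch, not librarian-bot's actual code; the `min_share` and `min_confidence` thresholds are assumed values chosen for the example.

```python
from collections import defaultdict

def aggregate_predictions(predictions, min_share=0.05, min_confidence=0.6):
    """Aggregate per-row (language_code, probability) predictions by language.

    Languages predicted for too small a share of rows, or with too low a
    mean probability, are dropped. Threshold values are illustrative.
    """
    by_lang = defaultdict(list)
    for lang, prob in predictions:
        by_lang[lang].append(prob)

    total = len(predictions)
    results = {}
    for lang, probs in by_lang.items():
        share = len(probs) / total           # frequency filter input
        mean_prob = sum(probs) / len(probs)  # mean probability over rows
        if share >= min_share and mean_prob >= min_confidence:
            results[lang] = mean_prob
    return results

# Toy per-row predictions: English passes both filters, German is dropped
# because its mean probability falls below the confidence threshold.
rows = [("en", 0.9), ("en", 0.8), ("en", 0.85), ("de", 0.3)]
print(aggregate_predictions(rows))
```

In this toy run only English survives, with a mean probability of 0.85; this mirrors how a single dominant language ends up in the PR summary below.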

The following languages were detected, with mean probabilities:

- English (en): 86.47%


If this PR is merged, the language metadata will be added to your dataset card. This will allow users to filter datasets by language on the [Hub](https://huggingface.co/datasets).
If the language metadata is incorrect, please feel free to close this PR.

To merge this PR, you can use the merge button below the PR:
![Screenshot 2024-02-06 at 15.27.46.png](https://cdn-uploads.huggingface.co/production/uploads/63d3e0e8ff1384ce6c5dd17d/1PRE3CoDpg_wfThC6U1w0.png)

This PR comes courtesy of [Librarian Bot](https://huggingface.co/librarian-bots). If you have any feedback, queries, or need assistance, please don't hesitate to reach out to @davanstrien.

Files changed (1): `README.md` (+17, -15)
```diff
@@ -1,39 +1,41 @@
 ---
+language:
+- en
 license: apache-2.0
 configs:
 - config_name: data_records
   data_files:
   - split: train
     path:
-    - "data.parquet"
+    - data.parquet
   - split: dev
-    path:
-    - "data.parquet"
+    path:
+    - data.parquet
   - split: test
-    path:
-    - "data.parquet"
+    path:
+    - data.parquet
 - config_name: qs
   data_files:
   - split: train
     path:
-    - "train/qs.parquet"
+    - train/qs.parquet
   - split: dev
-    path:
-    - "dev/qs.parquet"
+    path:
+    - dev/qs.parquet
   - split: test
-    path:
-    - "test/qs.parquet"
+    path:
+    - test/qs.parquet
 - config_name: qs_rel
   data_files:
   - split: train
     path:
-    - "train/qs_rel.parquet"
+    - train/qs_rel.parquet
   - split: dev
-    path:
-    - "dev/qs_rel.parquet"
+    path:
+    - dev/qs_rel.parquet
   - split: test
-    path:
-    - "test/qs_rel.parquet"
+    path:
+    - test/qs_rel.parquet
 ---
 
 The dataset contains a random 0.7/0.1/0.2 train/dev/test splits of nq dataset from KILT https://github.com/facebookresearch/KILT for benchmarking embedding model fine-tuning.
```
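Once merged, the YAML front matter between the `---` fences carries a `language` key that the Hub reads for language filtering. A minimal sketch of parsing that metadata, assuming PyYAML is installed; the card excerpt here is abbreviated to the keys relevant to this PR:

```python
import yaml  # PyYAML; assumed available (pip install pyyaml)

# Abbreviated excerpt of the merged dataset card's YAML front matter.
card_front_matter = """\
language:
- en
license: apache-2.0
"""

meta = yaml.safe_load(card_front_matter)
print(meta["language"])  # ['en']
print(meta["license"])   # apache-2.0
```

Tools that index the Hub read exactly this kind of parsed mapping, which is why adding `language` here makes the dataset discoverable via the language filter.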