cointegrated committed on
Commit afd673c
1 Parent(s): 4779ab8

Update README.md

Files changed (1)
  1. README.md +35 -62
README.md CHANGED
@@ -12,46 +12,60 @@ language: ["aar", "abe", "abk", "abq", "abt", "abz", "act", "acu", "acw", "ady",
 
 # Dataset Card for panlex-meanings
 
- This is a dataset of expressions (words and phrases) in several thousand languages, extracted from https://panlex.org (the `20240301` database dump).
- Each expression is associated with some meanings (if there is more than one meaning, they are in separate rows).
- Thus, by joining per-language datasets by meaning ids, one can obtain a bilingual dictionary.
 
 ## Dataset Details
 
 ### Dataset Description
 
- <!-- Provide a longer summary of what this dataset is. -->
 
- - **Curated by:** [More Information Needed]
 - **Language(s) (NLP):** The Panlex database mentions 7558 languages, but only 6241 of them have at least one entry (where an entry is a combination of an expression and a meaning),
- and only 1012 have at least 1000 entries. These 1012 languages are listed in the current dataset.
- - **License:** [More Information Needed]
 
 ### Dataset Sources [optional]
 
 <!-- Provide the basic links for the dataset. -->
 
- - **Repository:** [More Information Needed]
- - **Paper [optional]:** [More Information Needed]
- - **Demo [optional]:** [More Information Needed]
-
 ## Uses
 
- <!-- Address questions around how the dataset is intended to be used. -->
 
- ### Direct Use
 
- <!-- This section describes suitable use cases for the dataset. -->
 
- [More Information Needed]
 
- ### Out-of-Scope Use
 
- <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
 
- [More Information Needed]
 
 ## Dataset Structure
 
@@ -59,6 +73,8 @@ Thus, by joining per-language datasets by meaning ids, one can obtain a bilingua
 
 The dataset is split by languages, denoted by their ISO 639 codes. Each language might contain multiple varieties; they are annotated within each per-language split.
 
 Each split contains the following fields:
 - `id` (int): id of the expression
 - `langvar` (int): id of the language variety
@@ -67,53 +83,10 @@ Each split contains the following fields:
 - `meaning` (int): id of the meaning. This is the column to join for obtaining synonyms (within a language) or translations (across languages)
 - `langvar_uid` (str): more human-readable id of the language variety (e.g. `eng-000` stands for generic English, `eng-001` for simple English, `eng-004` for American English). These ids can be looked up in the language dropdown at https://vocab.panlex.org/.
 
- [More Information Needed]
 
 ## Dataset Creation
 
- ### Curation Rationale
-
- <!-- Motivation for the creation of this dataset. -->
-
- [More Information Needed]
-
- ### Source Data
-
- <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
-
- #### Data Collection and Processing
-
- <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
-
- [More Information Needed]
-
- #### Who are the source data producers?
-
- <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
-
- [More Information Needed]
-
- ### Annotations [optional]
-
- <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
-
- #### Annotation process
-
- <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
-
- [More Information Needed]
-
- #### Who are the annotators?
-
- <!-- This section describes the people or systems who created the annotations. -->
-
- [More Information Needed]
-
- #### Personal and Sensitive Information
-
- <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
-
- [More Information Needed]
 
  ## Bias, Risks, and Limitations
 
  # Dataset Card for panlex-meanings
 
+ This is a dataset of words and phrases in several thousand languages, extracted from https://panlex.org.
 
 ## Dataset Details
 
 ### Dataset Description
 
+ This dataset has been extracted from https://panlex.org (the `20240301` database dump) and rearranged on a per-language basis.
 
+ Each language subset consists of expressions (words and phrases).
+ Each expression is associated with one or more meanings (if an expression has several meanings, each meaning appears in a separate row).
 
+ Thus, by joining per-language datasets on meaning ids, one can obtain a bilingual dictionary for the chosen language pair.
 
+ - **Curated by:** David Dale (@cointegrated), based on a snapshot of the Panlex database (https://panlex.org/snapshot).
 - **Language(s) (NLP):** The Panlex database mentions 7558 languages, but only 6241 of them have at least one entry (where an entry is a combination of an expression and a meaning),
+ and only 1012 have at least 1000 entries. These 1012 languages are tagged in the current dataset.
+ - **License:** [CC0 1.0 Universal License](https://creativecommons.org/publicdomain/zero/1.0/), as explained at https://panlex.org/license.
 
 ### Dataset Sources [optional]
 
 <!-- Provide the basic links for the dataset. -->
 
+ - **Original website:** https://panlex.org/
+ - **Paper:** Kamholz, David, Jonathan Pool, and Susan M. Colowick. 2014. [PanLex: Building a Resource for Panlingual Lexical Translation](http://www.lrec-conf.org/proceedings/lrec2014/pdf/1029_Paper.pdf). Proceedings of the 9th International Conference on Language Resources and Evaluation (LREC 2014).
+
 ## Uses
 
+ The intended use of the dataset is to extract bilingual dictionaries for language learning, by machines or by humans.
 
+ The code below illustrates how the dataset could be used to extract a bilingual Avar-English dictionary.
 
+ ```python
+ from datasets import load_dataset
+ ds_ava = load_dataset('cointegrated/panlex-meanings', name='ava', split='train')
+ ds_eng = load_dataset('cointegrated/panlex-meanings', name='eng', split='train')
+ df_ava = ds_ava.to_pandas()
+ df_eng = ds_eng.to_pandas()
+
+ df_ava_eng = df_ava.merge(df_eng, on='meaning', suffixes=['_ava', '_eng']).drop_duplicates(subset=['txt_ava', 'txt_eng'])
+ print(df_ava_eng.shape)
+ # (10565, 11)
+
+ print(df_ava_eng.sample(5)[['txt_ava', 'txt_eng', 'langvar_uid_ava']])
+ #           txt_ava txt_eng langvar_uid_ava
+ # 7921        калим     rug         ava-002
+ # 3279       хІераб     old         ava-001
+ # 41     бакьулълъи  middle         ava-000
+ # 9542        шумаш    nose         ava-006
+ # 15030     гӏащтӏи     axe         ava-000
+ ```
 
+ Apart from these direct translations, one could also try extracting multi-hop translations (e.g. enrich the direct Avar-English word pairs with the word pairs that share a common Russian translation).
+ However, given that many words have multiple meanings, this approach usually generates some false translations, so it should be used with caution.
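+
+ For illustration, here is a minimal sketch of such a multi-hop extraction through Russian. It assumes that a `rus` config is available and reuses `df_ava` and `df_eng` from the snippet above; the choice of pivot language is arbitrary, and the resulting pairs would still need filtering:
+
+ ```python
+ from datasets import load_dataset
+
+ # load the Russian subset to use it as a pivot language
+ df_rus = load_dataset('cointegrated/panlex-meanings', name='rus', split='train').to_pandas()
+
+ # direct pairs: Avar and English expressions that share a meaning id
+ direct = df_ava.merge(df_eng, on='meaning', suffixes=['_ava', '_eng'])[['txt_ava', 'txt_eng']].drop_duplicates()
+
+ # two-hop pairs: Avar -> Russian and Russian -> English, joined on the Russian expression text
+ ava_rus = df_ava.merge(df_rus, on='meaning', suffixes=['_ava', '_rus'])[['txt_ava', 'txt_rus']]
+ rus_eng = df_rus.merge(df_eng, on='meaning', suffixes=['_rus', '_eng'])[['txt_rus', 'txt_eng']]
+ two_hop = ava_rus.merge(rus_eng, on='txt_rus')[['txt_ava', 'txt_eng']].drop_duplicates()
+
+ # candidate new pairs: two-hop translations not already covered by the direct dictionary
+ # (noisier, because a polysemous Russian word may link unrelated meanings)
+ candidates = two_hop.merge(direct, on=['txt_ava', 'txt_eng'], how='left', indicator=True)
+ candidates = candidates[candidates['_merge'] == 'left_only'].drop(columns='_merge')
+ print(len(candidates))
+ ```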
 
  ## Dataset Structure
 
  The dataset is split by languages, denoted by their ISO 639 codes. Each language might contain multiple varieties; they are annotated within each per-language split.
 
+ To determine a code for your language, please consult the https://panlex.org website. For additional information about a language, you may also want to consult https://glottolog.org/.
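+
+ The list of available subsets can also be retrieved programmatically; a small sketch with the `datasets` library (assuming the config names match the ISO 639 codes in the `language` list of this card):
+
+ ```python
+ from datasets import get_dataset_config_names
+
+ # one config per language subset of this dataset
+ codes = get_dataset_config_names('cointegrated/panlex-meanings')
+ print(len(codes))
+ print(codes[:5])  # e.g. ['aar', 'abe', 'abk', 'abq', 'abt']
+ ```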
+
  Each split contains the following fields:
 - `id` (int): id of the expression
 - `langvar` (int): id of the language variety
 - `meaning` (int): id of the meaning. This is the column to join for obtaining synonyms (within a language) or translations (across languages)
 - `langvar_uid` (str): more human-readable id of the language variety (e.g. `eng-000` stands for generic English, `eng-001` for simple English, `eng-004` for American English). These ids can be looked up in the language dropdown at https://vocab.panlex.org/.
 
  ## Dataset Creation
 
+ This dataset has been extracted from https://panlex.org (the `20240301` database dump) and automatically rearranged on a per-language basis.
 
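+ The exact preparation script is not included in this card. As a rough illustration, a rearrangement of this kind could be done from the snapshot's CSV export along the lines sketched below; the file and column names (`expr.csv`, `denotation.csv`, `langvar.csv` and their columns) are assumptions about the PanLex dump layout, not a description of the actual pipeline:
+
+ ```python
+ import pandas as pd
+
+ # assumed tables from the PanLex CSV snapshot (names and columns are hypothetical here)
+ expr = pd.read_csv('expr.csv')              # assumed columns: id, langvar, txt, ...
+ denotation = pd.read_csv('denotation.csv')  # assumed columns: id, expr, meaning
+ langvar = pd.read_csv('langvar.csv')        # assumed columns: id, lang_code, var_code, ...
+
+ # attach a meaning id to every expression, and a language code to every language variety
+ df = (expr
+       .merge(denotation, left_on='id', right_on='expr', suffixes=['', '_den'])
+       .merge(langvar, left_on='langvar', right_on='id', suffixes=['', '_lv']))
+ df['langvar_uid'] = df['lang_code'] + '-' + df['var_code'].astype(int).astype(str).str.zfill(3)
+
+ # save one file per ISO 639 language code
+ for lang_code, group in df.groupby('lang_code'):
+     group[['id', 'langvar', 'txt', 'meaning', 'langvar_uid']].to_parquet(f'{lang_code}.parquet')
+ ```
+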
  ## Bias, Risks, and Limitations