Blanca committed on
Commit 934b199
1 Parent(s): 6ae9116

Upload 5 files
.gitattributes CHANGED
@@ -51,3 +51,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.jpg filter=lfs diff=lfs merge=lfs -text
 *.jpeg filter=lfs diff=lfs merge=lfs -text
 *.webp filter=lfs diff=lfs merge=lfs -text
+es_ancora-ud-train.conllu filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
@@ -1,3 +1,176 @@
 ---
-license: cc-by-4.0
+YAML tags:
+
+annotations_creators:
+- expert-generated
+language:
+- ca
+language_creators:
+- found
+license:
+- cc-by-4.0
+multilinguality:
+- monolingual
+pretty_name: UD_Catalan-AnCora
+size_categories: []
+source_datasets: []
+tags: []
+task_categories:
+- token-classification
+task_ids:
+- part-of-speech
+
 ---
+
+
+# UD_Catalan-AnCora
+
+## Table of Contents
+- [Table of Contents](#table-of-contents)
+- [Dataset Description](#dataset-description)
+  - [Dataset Summary](#dataset-summary)
+  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+  - [Languages](#languages)
+- [Dataset Structure](#dataset-structure)
+  - [Data Instances](#data-instances)
+  - [Data Fields](#data-fields)
+  - [Data Splits](#data-splits)
+- [Dataset Creation](#dataset-creation)
+  - [Curation Rationale](#curation-rationale)
+  - [Source Data](#source-data)
+  - [Annotations](#annotations)
+  - [Personal and Sensitive Information](#personal-and-sensitive-information)
+- [Considerations for Using the Data](#considerations-for-using-the-data)
+  - [Social Impact of Dataset](#social-impact-of-dataset)
+  - [Discussion of Biases](#discussion-of-biases)
+  - [Other Known Limitations](#other-known-limitations)
+- [Additional Information](#additional-information)
+  - [Dataset Curators](#dataset-curators)
+  - [Licensing Information](#licensing-information)
+  - [Citation Information](#citation-information)
+  - [Contributions](#contributions)
+
+
+## Dataset Description
+- **Website:** https://github.com/UniversalDependencies/UD_Catalan-AnCora
+- **Point of Contact:** [Daniel Zeman](mailto:zeman@ufal.mff.cuni.cz)
+
+
+### Dataset Summary
+
+This dataset is composed of the annotations from the [AnCora corpus](http://clic.ub.edu/corpus/), projected onto the [Universal Dependencies treebank](https://universaldependencies.org/). We use the POS annotations of this corpus as part of the [Catalan Language Understanding Benchmark (CLUB)](https://club.aina.bsc.es/).
+
+
+### Supported Tasks and Leaderboards
+
+POS tagging
+
+### Languages
+
+The dataset is in Catalan (`ca-CA`).
+
+## Dataset Structure
+
+### Data Instances
+
+Three CoNLL-U files.
+
+Annotations are encoded in plain text files (UTF-8, normalized to NFC, using only the LF character as line break, and including an LF character at the end of file) with three types of lines:
+
+1) Word lines containing the annotation of a word/token in 10 fields separated by single tab characters (see below).
+2) Blank lines marking sentence boundaries.
+3) Comment lines starting with a hash (#).
+
+### Data Fields
+Word lines contain the following fields:
+
+1) ID: Word index, an integer starting at 1 for each new sentence; may be a range for multiword tokens; may be a decimal number for empty nodes (decimal numbers can be lower than 1 but must be greater than 0).
+2) FORM: Word form or punctuation symbol.
+3) LEMMA: Lemma or stem of the word form.
+4) UPOS: Universal part-of-speech tag.
+5) XPOS: Language-specific part-of-speech tag; underscore if not available.
+6) FEATS: List of morphological features from the universal feature inventory or from a defined language-specific extension; underscore if not available.
+7) HEAD: Head of the current word, which is either a value of ID or zero (0).
+8) DEPREL: Universal dependency relation to the HEAD (root iff HEAD = 0) or a defined language-specific subtype of one.
+9) DEPS: Enhanced dependency graph in the form of a list of head-deprel pairs.
+10) MISC: Any other annotation.
+
+From: [https://universaldependencies.org](https://universaldependencies.org/guidelines.html)
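
The field layout above can be sketched in a few lines of plain Python (no external libraries). The two-token sentence below is invented for illustration, not taken from the corpus:

```python
# The 10 tab-separated fields of a CoNLL-U word line, in order.
FIELDS = ["ID", "FORM", "LEMMA", "UPOS", "XPOS", "FEATS", "HEAD", "DEPREL", "DEPS", "MISC"]

sample = (
    "# sent_id = demo-1\n"     # comment line (starts with #)
    "# text = Hola món\n"
    "1\tHola\thola\tINTJ\t_\t_\t0\troot\t_\t_\n"                    # word line
    "2\tmón\tmón\tNOUN\t_\tGender=Masc|Number=Sing\t1\tvocative\t_\t_\n"
    "\n"                       # blank line = sentence boundary
)

tokens = []
for line in sample.splitlines():
    if not line or line.startswith("#"):
        continue               # skip comments and sentence boundaries
    tokens.append(dict(zip(FIELDS, line.split("\t"))))

print([t["FORM"] for t in tokens])   # ['Hola', 'món']
print(tokens[1]["FEATS"])            # Gender=Masc|Number=Sing
```

A full parser would also handle multiword-token ranges and empty-node IDs; libraries such as `conllu` (used by the loading script in this commit) take care of that.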
+
+### Data Splits
+
+- ca_ancora-ud-train.conllu
+- ca_ancora-ud-dev.conllu
+- ca_ancora-ud-test.conllu
+
+## Dataset Creation
+
+### Curation Rationale
+[N/A]
+
+### Source Data
+
+- [UD_Catalan-AnCora](https://github.com/UniversalDependencies/UD_Catalan-AnCora)
+
+#### Initial Data Collection and Normalization
+
+The original annotation was done in a constituency framework as part of the [AnCora project](http://clic.ub.edu/corpus/) at the University of Barcelona. It was converted to dependencies by the [Universal Dependencies team](https://universaldependencies.org/) and used in the CoNLL 2009 shared task. The CoNLL 2009 version was later converted to HamleDT and to Universal Dependencies.
+
+For more information on the AnCora project, visit the [AnCora site](http://clic.ub.edu/corpus/).
+
+To learn about Universal Dependencies, visit the webpage [https://universaldependencies.org](https://universaldependencies.org)
+
+#### Who are the source language producers?
+
+For more information on the AnCora corpus and its sources, visit the [AnCora site](http://clic.ub.edu/corpus/).
+
+### Annotations
+
+#### Annotation process
+
+For more information on the original AnCora annotation, visit the [AnCora site](http://clic.ub.edu/corpus/).
+
+#### Who are the annotators?
+
+For more information on the AnCora annotation team, visit the [AnCora site](http://clic.ub.edu/corpus/).
+
+### Personal and Sensitive Information
+
+No personal or sensitive information is included.
+
+## Considerations for Using the Data
+
+### Social Impact of Dataset
+
+This dataset contributes to the development of language models in Catalan, a low-resource language.
+
+### Discussion of Biases
+
+[N/A]
+
+### Other Known Limitations
+
+[N/A]
+
+## Additional Information
+
+### Dataset Curators
+
+
+
+### Licensing Information
+
+This work is licensed under a <a rel="license" href="https://creativecommons.org/licenses/by/4.0/">CC Attribution 4.0 International License</a>.
+
+### Citation Information
+
+The following paper must be cited when using this corpus:
+
+Taulé, M., M.A. Martí, M. Recasens (2008) 'AnCora: Multilevel Annotated Corpora for Catalan and Spanish', Proceedings of the 6th International Conference on Language Resources and Evaluation. Marrakesh (Morocco).
+
+To cite the Universal Dependencies project:
+
+Rueter, J. (Creator), Erina, O. (Contributor), Klementeva, J. (Contributor), Ryabov, I. (Contributor), Tyers, F. M. (Contributor), Zeman, D. (Contributor), Nivre, J. (Creator) (15 Nov 2020). Universal Dependencies version 2.7 Erzya JR. Universal Dependencies Consortium.
+
+
UD_Spanish-AnCora.py ADDED
@@ -0,0 +1,172 @@
+import conllu
+
+import datasets
+
+
+_CITATION = """\
+@misc{11234/1-3424,
+title = {Universal Dependencies 2.7},
+ author = {Zeman, Daniel and Nivre, Joakim and Abrams, Mitchell and Ackermann, Elia and Aepli, No{\"e}mi and Aghaei, Hamid and Agi{\'c}, {\v Z}eljko and Ahmadi, Amir and Ahrenberg, Lars and Ajede, Chika Kennedy and Aleksandravi{\v c}i{\=u}t{\.e}, Gabriel{\.e} and Alfina, Ika and Antonsen, Lene and Aplonova, Katya and Aquino, Angelina and Aragon, Carolina and Aranzabe, Maria Jesus and Arnard{\'o}ttir, {\t H}{\'o}runn and Arutie, Gashaw and Arwidarasti, Jessica Naraiswari and Asahara, Masayuki and Ateyah, Luma and Atmaca, Furkan and Attia, Mohammed and Atutxa, Aitziber and Augustinus, Liesbeth and Badmaeva, Elena and Balasubramani, Keerthana and Ballesteros, Miguel and Banerjee, Esha and Bank, Sebastian and Barbu Mititelu, Verginica and Basmov, Victoria and Batchelor, Colin and Bauer, John and Bedir, Seyyit Talha and Bengoetxea, Kepa and Berk, G{\"o}zde and Berzak, Yevgeni and Bhat, Irshad Ahmad and Bhat, Riyaz Ahmad and Biagetti, Erica and Bick, Eckhard and Bielinskien{\.e}, Agn{\.e} and Bjarnad{\'o}ttir, Krist{\'{\i}}n and Blokland, Rogier and Bobicev, Victoria and Boizou, Lo{\"{\i}}c and Borges V{\"o}lker, Emanuel and B{\"o}rstell, Carl and Bosco, Cristina and Bouma, Gosse and Bowman, Sam and Boyd, Adriane and Brokait{\.e}, Kristina and Burchardt, Aljoscha and Candito, Marie and Caron, Bernard and Caron, Gauthier and Cavalcanti, Tatiana and Cebiroglu Eryigit, Gulsen and Cecchini, Flavio Massimiliano and Celano, Giuseppe G. A. and Ceplo, Slavomir and Cetin, Savas and Cetinoglu, Ozlem and Chalub, Fabricio and Chi, Ethan and Cho, Yongseok and Choi, Jinho and Chun, Jayeol and Cignarella, Alessandra T. 
and Cinkova, Silvie and Collomb, Aurelie and Coltekin, Cagr{\i} and Connor, Miriam and Courtin, Marine and Davidson, Elizabeth and de Marneffe, Marie-Catherine and de Paiva, Valeria and Derin, Mehmet Oguz and de Souza, Elvis and Diaz de Ilarraza, Arantza and Dickerson, Carly and Dinakaramani, Arawinda and Dione, Bamba and Dirix, Peter and Dobrovoljc, Kaja and Dozat, Timothy and Droganova, Kira and Dwivedi, Puneet and Eckhoff, Hanne and Eli, Marhaba and Elkahky, Ali and Ephrem, Binyam and Erina, Olga and Erjavec, Tomaz and Etienne, Aline and Evelyn, Wograine and Facundes, Sidney and Farkas, Rich{\'a}rd and Fernanda, Mar{\'{\i}}lia and Fernandez Alcalde, Hector and Foster, Jennifer and Freitas, Cl{\'a}udia and Fujita, Kazunori and Gajdosov{\'a}, Katar{\'{\i}}na and Galbraith, Daniel and Garcia, Marcos and G{\"a}rdenfors, Moa and Garza, Sebastian and Gerardi, Fabr{\'{\i}}cio Ferraz and Gerdes, Kim and Ginter, Filip and Goenaga, Iakes and Gojenola, Koldo and G{\"o}k{\i}rmak, Memduh and Goldberg, Yoav and G{\'o}mez Guinovart, Xavier and Gonz{\'a}lez Saavedra,
+ Berta and Grici{\=u}t{\.e}, Bernadeta and Grioni, Matias and Grobol, Lo{\"{\i}}c and Gr{\=u}z{\={\i}}tis, Normunds and Guillaume, Bruno and Guillot-Barbance, C{\'e}line and G{\"u}ng{\"o}r, Tunga and Habash, Nizar and Hafsteinsson, Hinrik and Haji{\v c}, Jan and Haji{\v c} jr., Jan and H{\"a}m{\"a}l{\"a}inen, Mika and H{\`a} M{\~y}, Linh and Han, Na-Rae and Hanifmuti, Muhammad Yudistira and Hardwick, Sam and Harris, Kim and Haug, Dag and Heinecke, Johannes and Hellwig, Oliver and Hennig, Felix and Hladk{\'a}, Barbora and Hlav{\'a}{\v c}ov{\'a}, Jaroslava and Hociung, Florinel and Hohle, Petter and Huber, Eva and Hwang, Jena and Ikeda, Takumi and Ingason, Anton Karl and Ion, Radu and Irimia, Elena and Ishola, {\d O}l{\'a}j{\'{\i}}d{\'e} and Jel{\'{\i}}nek, Tom{\'a}{\v s} and Johannsen, Anders and J{\'o}nsd{\'o}ttir, Hildur and J{\o}rgensen, Fredrik and Juutinen, Markus and K, Sarveswaran and Ka{\c s}{\i}kara, H{\"u}ner and Kaasen, Andre and Kabaeva, Nadezhda and Kahane, Sylvain and Kanayama, Hiroshi and Kanerva, Jenna and Katz, Boris and Kayadelen, Tolga and Kenney, Jessica and Kettnerov{\'a}, V{\'a}clava and Kirchner, Jesse and Klementieva, Elena and K{\"o}hn, Arne and K{\"o}ksal, Abdullatif and Kopacewicz, Kamil and Korkiakangas, Timo and Kotsyba, Natalia and Kovalevskait{\.e}, Jolanta and Krek, Simon and Krishnamurthy, Parameswari and Kwak, Sookyoung and Laippala, Veronika and Lam, Lucia and Lambertino, Lorenzo and Lando, Tatiana and Larasati, Septina Dian and Lavrentiev, Alexei and Lee, John and L{\^e} H{\`{\^o}}ng, Phương and Lenci, Alessandro and Lertpradit, Saran and Leung, Herman and Levina, Maria and Li, Cheuk Ying and Li, Josie and Li, Keying and Li, Yuan and Lim, {KyungTae} and Linden, Krister and Ljubesic, Nikola and Loginova, Olga and Luthfi, Andry and Luukko, Mikko and Lyashevskaya, Olga and Lynn, Teresa and Macketanz, Vivien and Makazhanov, Aibek and Mandl, Michael and Manning, Christopher and Manurung, Ruli and Maranduc, Catalina and Marcek, David 
and Marheinecke, Katrin and Mart{\'{\i}}nez Alonso, H{\'e}ctor and Martins, Andr{\'e} and Masek, Jan and Matsuda, Hiroshi and Matsumoto, Yuji and {McDonald}, Ryan and {McGuinness}, Sarah and Mendonca, Gustavo and Miekka, Niko and Mischenkova, Karina and Misirpashayeva, Margarita and Missil{\"a}, Anna and Mititelu, Catalin and Mitrofan, Maria and Miyao, Yusuke and Mojiri Foroushani, {AmirHossein} and Moloodi, Amirsaeid and Montemagni, Simonetta and More, Amir and Moreno Romero, Laura and Mori, Keiko Sophie and Mori, Shinsuke and Morioka, Tomohiko and Moro, Shigeki and Mortensen, Bjartur and Moskalevskyi, Bohdan and Muischnek, Kadri and Munro, Robert and Murawaki, Yugo and M{\"u}{\"u}risep, Kaili and Nainwani, Pinkey and Nakhl{\'e}, Mariam and Navarro Hor{\~n}iacek, Juan Ignacio and Nedoluzhko,
+ Anna and Ne{\v s}pore-B{\=e}rzkalne, Gunta and Nguy{\~{\^e}}n Th{\d i}, Lương and Nguy{\~{\^e}}n Th{\d i} Minh, Huy{\`{\^e}}n and Nikaido, Yoshihiro and Nikolaev, Vitaly and Nitisaroj, Rattima and Nourian, Alireza and Nurmi, Hanna and Ojala, Stina and Ojha, Atul Kr. and Ol{\'u}{\`o}kun, Ad{\'e}day{\d o}̀ and Omura, Mai and Onwuegbuzia, Emeka and Osenova, Petya and {\"O}stling, Robert and {\O}vrelid, Lilja and {\"O}zate{\c s}, {\c S}aziye Bet{\"u}l and {\"O}zg{\"u}r, Arzucan and {\"O}zt{\"u}rk Ba{\c s}aran, Balk{\i}z and Partanen, Niko and Pascual, Elena and Passarotti, Marco and Patejuk, Agnieszka and Paulino-Passos, Guilherme and Peljak-{\L}api{\'n}ska, Angelika and Peng, Siyao and Perez, Cenel-Augusto and Perkova, Natalia and Perrier, Guy and Petrov, Slav and Petrova, Daria and Phelan, Jason and Piitulainen, Jussi and Pirinen, Tommi A and Pitler, Emily and Plank, Barbara and Poibeau, Thierry and Ponomareva, Larisa and Popel, Martin and Pretkalnina, Lauma and Pr{\'e}vost, Sophie and Prokopidis, Prokopis and Przepi{\'o}rkowski, Adam and Puolakainen, Tiina and Pyysalo, Sampo and Qi, Peng and R{\"a}{\"a}bis, Andriela and Rademaker, Alexandre and Rama, Taraka and Ramasamy, Loganathan and Ramisch, Carlos and Rashel, Fam and Rasooli, Mohammad Sadegh and Ravishankar, Vinit and Real, Livy and Rebeja, Petru and Reddy, Siva and Rehm, Georg and Riabov, Ivan and Rie{\ss}ler, Michael and Rimkut{\.e}, Erika and Rinaldi, Larissa and Rituma, Laura and Rocha, Luisa and R{\"o}gnvaldsson, Eir{\'{\i}}kur and Romanenko, Mykhailo and Rosa, Rudolf and Roșca, Valentin and Rovati, Davide and Rudina, Olga and Rueter, Jack and R{\'u}narsson, Kristjan and Sadde, Shoval and Safari, Pegah and Sagot, Benoit and Sahala, Aleksi and Saleh, Shadi and Salomoni, Alessio and Samardzi{\'c}, Tanja and Samson, Stephanie and Sanguinetti, Manuela and S{\"a}rg,
+ Dage and Saul{\={\i}}te, Baiba and Sawanakunanon, Yanin and Scannell, Kevin and Scarlata, Salvatore and Schneider, Nathan and Schuster, Sebastian and Seddah, Djam{\'e} and Seeker, Wolfgang and Seraji, Mojgan and Shen, Mo and Shimada, Atsuko and Shirasu, Hiroyuki and Shohibussirri, Muh and Sichinava, Dmitry and Sigurðsson, Einar Freyr and Silveira, Aline and Silveira, Natalia and Simi, Maria and Simionescu, Radu and Simk{\'o}, Katalin and {\v S}imkov{\'a}, M{\'a}ria and Simov, Kiril and Skachedubova, Maria and Smith, Aaron and Soares-Bastos, Isabela and Spadine, Carolyn and Steingr{\'{\i}}msson, Stein{\t h}{\'o}r and Stella, Antonio and Straka, Milan and Strickland, Emmett and Strnadov{\'a}, Jana and Suhr, Alane and Sulestio, Yogi Lesmana and Sulubacak, Umut and Suzuki, Shingo and Sz{\'a}nt{\'o}, Zsolt and Taji, Dima and Takahashi, Yuta and Tamburini, Fabio and Tan, Mary Ann C. and Tanaka, Takaaki and Tella, Samson and Tellier, Isabelle and Thomas, Guillaume and Torga, Liisi and Toska, Marsida and Trosterud, Trond and Trukhina, Anna and Tsarfaty, Reut and T{\"u}rk, Utku and Tyers, Francis and Uematsu, Sumire and Untilov, Roman and Uresov{\'a}, Zdenka and Uria, Larraitz and Uszkoreit, Hans and Utka, Andrius and Vajjala, Sowmya and van Niekerk, Daniel and van Noord, Gertjan and Varga, Viktor and Villemonte de la Clergerie, Eric and Vincze, Veronika and Wakasa, Aya and Wallenberg, Joel C. and Wallin, Lars and Walsh, Abigail and Wang, Jing Xian and Washington, Jonathan North and Wendt, Maximilan and Widmer, Paul and Williams, Seyi and Wir{\'e}n, Mats and Wittern, Christian and Woldemariam, Tsegay and Wong, Tak-sum and Wr{\'o}blewska, Alina and Yako, Mary and Yamashita, Kayo and Yamazaki, Naoki and Yan, Chunxiao and Yasuoka, Koichi and Yavrumyan, Marat M. and Yu, Zhuoran and Zabokrtsk{\'y}, Zdenek and Zahra, Shorouq and Zeldes, Amir and Zhu, Hanzhi and Zhuravleva, Anna},
+url = {http://hdl.handle.net/11234/1-3424},
+note = {{LINDAT}/{CLARIAH}-{CZ} digital library at the Institute of Formal and Applied Linguistics ({{\'U}FAL}), Faculty of Mathematics and Physics, Charles University},
+copyright = {Licence Universal Dependencies v2.7},
+year = {2020} }
+"""  # noqa: W605
+
+_DESCRIPTION = """\
+Universal Dependencies is a project that seeks to develop cross-linguistically consistent treebank annotation for many languages, with the goal of facilitating multilingual parser development, cross-lingual learning, and parsing research from a language typology perspective. The annotation scheme is based on (universal) Stanford dependencies (de Marneffe et al., 2006, 2008, 2014), Google universal part-of-speech tags (Petrov et al., 2012), and the Interset interlingua for morphosyntactic tagsets (Zeman, 2008).
+"""
+
+_NAMES = [
+    "ca_ancora",
+]
+
+_DESCRIPTIONS = {
+    "ca_ancora": "Catalan data from the AnCora corpus.",
+}
+_PREFIX = "./"
+_UD_DATASETS = {
+    "ca_ancora": {
+        "train": "ca_ancora-ud-train.conllu",
+        "dev": "ca_ancora-ud-dev.conllu",
+        "test": "ca_ancora-ud-test.conllu",
+    },
+}
+
+
+class UniversaldependenciesConfig(datasets.BuilderConfig):
+    """BuilderConfig for Universal dependencies"""
+
+    def __init__(self, data_url, **kwargs):
+        super(UniversaldependenciesConfig, self).__init__(version=datasets.Version("2.7.0", ""), **kwargs)
+
+        self.data_url = data_url
+
+
+class UniversalDependencies(datasets.GeneratorBasedBuilder):
+    VERSION = datasets.Version("2.7.0")
+    BUILDER_CONFIGS = [
+        UniversaldependenciesConfig(
+            name=name,
+            description=_DESCRIPTIONS[name],
+            data_url="https://github.com/UniversalDependencies/" + _UD_DATASETS[name]["test"].split("/")[0],
+        )
+        for name in _NAMES
+    ]
+    BUILDER_CONFIG_CLASS = UniversaldependenciesConfig
+
+    def _info(self):
+        return datasets.DatasetInfo(
+            description=_DESCRIPTION,
+            features=datasets.Features(
+                {
+                    "idx": datasets.Value("string"),
+                    "text": datasets.Value("string"),
+                    "tokens": datasets.Sequence(datasets.Value("string")),
+                    "lemmas": datasets.Sequence(datasets.Value("string")),
+                    "upos_tags": datasets.Sequence(
+                        datasets.features.ClassLabel(
+                            names=[
+                                "NOUN",
+                                "PUNCT",
+                                "ADP",
+                                "NUM",
+                                "SYM",
+                                "SCONJ",
+                                "ADJ",
+                                "PART",
+                                "DET",
+                                "CCONJ",
+                                "PROPN",
+                                "PRON",
+                                "X",
+                                "_",
+                                "ADV",
+                                "INTJ",
+                                "VERB",
+                                "AUX",
+                            ]
+                        )
+                    ),
+                    "xpos_tags": datasets.Sequence(datasets.Value("string")),
+                    "feats": datasets.Sequence(datasets.Value("string")),
+                    "head": datasets.Sequence(datasets.Value("string")),
+                    "deprel": datasets.Sequence(datasets.Value("string")),
+                    "deps": datasets.Sequence(datasets.Value("string")),
+                    "misc": datasets.Sequence(datasets.Value("string")),
+                }
+            ),
+            supervised_keys=None,
+            # homepage="https://universaldependencies.org/",
+            citation=_CITATION,
+        )
+
+    def _split_generators(self, dl_manager):
+        """Returns SplitGenerators."""
+        urls_to_download = {}
+        for split, address in _UD_DATASETS["ca_ancora"].items():
+            urls_to_download[split] = []
+            if isinstance(address, list):
+                for add in address:
+                    urls_to_download[split].append(_PREFIX + add)
+            else:
+                urls_to_download[split].append(_PREFIX + address)
+
+        downloaded_files = dl_manager.download_and_extract(urls_to_download)
+        splits = []
+
+        if "train" in downloaded_files:
+            splits.append(
+                datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": downloaded_files["train"]})
+            )
+
+        if "dev" in downloaded_files:
+            splits.append(
+                datasets.SplitGenerator(
+                    name=datasets.Split.VALIDATION, gen_kwargs={"filepath": downloaded_files["dev"]}
+                )
+            )
+
+        if "test" in downloaded_files:
+            splits.append(
+                datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"filepath": downloaded_files["test"]})
+            )
+
+        return splits
+
+    def _generate_examples(self, filepath):
+        id = 0
+        for path in filepath:
+            with open(path, "r", encoding="utf-8") as data_file:
+                tokenlist = list(conllu.parse_incr(data_file))
+                for sent in tokenlist:
+                    if "sent_id" in sent.metadata:
+                        idx = sent.metadata["sent_id"]
+                    else:
+                        idx = id
+
+                    tokens = [token["form"] for token in sent]
+
+                    if "text" in sent.metadata:
+                        txt = sent.metadata["text"]
+                    else:
+                        txt = " ".join(tokens)
+
+                    yield id, {
+                        "idx": str(idx),
+                        "text": txt,
+                        "tokens": [token["form"] for token in sent],
+                        "lemmas": [token["lemma"] for token in sent],
+                        "upos_tags": [token["upos"] for token in sent],
+                        "xpos_tags": [token["xpos"] for token in sent],
+                        "feats": [str(token["feats"]) for token in sent],
+                        "head": [str(token["head"]) for token in sent],
+                        "deprel": [str(token["deprel"]) for token in sent],
+                        "deps": [str(token["deps"]) for token in sent],
+                        "misc": [str(token["misc"]) for token in sent],
+                    }
+                    id += 1
+
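
The fallback logic in the script's `_generate_examples` (use `sent_id`/`text` from the sentence metadata when present, otherwise fall back to a running counter and to joining the token forms) can be sketched without any dependencies. The `Sentence` class below is a hypothetical stand-in; in the real script these objects come from `conllu.parse_incr`:

```python
# Minimal stand-in for a parsed CoNLL-U sentence (hypothetical; the real
# script gets TokenList objects from the conllu library).
class Sentence:
    def __init__(self, tokens, metadata=None):
        self.tokens = tokens
        self.metadata = metadata or {}

def generate_examples(sentences):
    for i, sent in enumerate(sentences):
        idx = sent.metadata.get("sent_id", i)          # fallback: running counter
        tokens = [tok["form"] for tok in sent.tokens]
        text = sent.metadata.get("text", " ".join(tokens))  # fallback: joined forms
        yield i, {"idx": str(idx), "text": text, "tokens": tokens}

sents = [
    Sentence([{"form": "Hola"}, {"form": "món"}], {"sent_id": "s1", "text": "Hola món"}),
    Sentence([{"form": "Adeu"}]),  # no metadata: both fallbacks apply
]
for key, example in generate_examples(sents):
    print(key, example["idx"], example["text"])
# 0 s1 Hola món
# 1 1 Adeu
```

Note that joining forms with spaces only approximates the original text (CoNLL-U records exact spacing via `SpaceAfter=No` in MISC), which is why the script prefers the `text` metadata when available.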
es_ancora-ud-dev.conllu ADDED
The diff for this file is too large to render. See raw diff

es_ancora-ud-test.conllu ADDED
The diff for this file is too large to render. See raw diff

es_ancora-ud-train.conllu ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c8f91ab940e9e56609893469e0d39bc4c53ffcfd72bbfc0e114bb8ac6aa440b6
+size 42516504