jsaizant committed on
Commit 46b1b1f
1 parent: 4c4eb56

Upload 5 files

Files changed (5):
  1. README.md +142 -0
  2. test.jsonl +0 -0
  3. topic_list.csv +43 -0
  4. train.jsonl +0 -0
  5. validation.jsonl +0 -0
README.md CHANGED
@@ -1,3 +1,145 @@
---
license: cc-by-4.0
task_categories:
- question-answering
- summarization
- text-generation
language:
- es
pretty_name: Mentor_ES
size_categories:
- 1K<n<10K
---

## Dataset Description

- **Homepage:** [Projecte AINA](https://projecteaina.cat/tech/)
- **Repository:** [HuggingFace](https://huggingface.co/projecte-aina)
- **Paper:** N/A
- **Leaderboard:** N/A
- **Point of Contact:** langtech@bsc.es

### Dataset Summary

Mentor_ES is an open-source dataset of 10,175 instructions in Spanish organized in several of the behavioral categories outlined in the [InstructGPT](https://arxiv.org/abs/2203.02155) paper, including closed QA, open QA, general QA, classification, information extraction, summarization, creative writing and brainstorming.

### Supported Tasks and Leaderboards

Useful for instruction fine-tuning of large language models on downstream tasks.
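For instruction fine-tuning, each record is typically flattened into a single training string. The template below is a hypothetical sketch (`build_prompt` and the section headers are ours, not an official format of this dataset); field names follow the example table in this card.

```python
# Hypothetical prompt template for instruction fine-tuning; the function
# name and section headers are illustrative, not part of the dataset.
def build_prompt(record: dict) -> str:
    """Format one Mentor_ES record as a single training string."""
    parts = [f"### Instrucción:\n{record['instruction']}"]
    if record.get("context"):  # context exists only for closed QA,
        # information extraction and summarization records
        parts.append(f"### Contexto:\n{record['context']}")
    parts.append(f"### Respuesta:\n{record['response']}")
    return "\n\n".join(parts)

example = {
    "category": "closed_qa",
    "instruction": "¿Por qué motivo evolucionó la mosca tsetsé en África?",
    "context": "Debido al clima es allí donde evolucionó la mosca tsetsé.",
    "response": "La mosca tsetsé evolucionó debido al clima.",
}
print(build_prompt(example))
```

Records without a `context` field simply omit the middle section, so one template covers all eight categories.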

### Languages

This dataset is in Spanish (es-ES).
34
+ ## Dataset Structure
35
+
36
+ ### Data Instances
37
+
38
+ The dataset is provided in JSON format, with the same fields as in the [Dolly databricks dataset](https://huggingface.co/datasets/databricks/databricks-dolly-15k), where each records corresponds to a single instruction-following instance and contains the category, the instruction, a context, if available, and the response.
39
+
40
+ | category | instruction | context | response |
41
+ |-----------|-------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------|
42
+ | closed_qa | ¿Por qué motivo evolucionó la mosca tsetsé en África? | Los suelos son excepcionalmente ricos en minerales y muy aptos para pastos. Debido al clima es allí donde evolucionó la mosca tsetsé y donde prolifera actualmente. | La mosca tsetsé evolucionó debido al clima. |
43
+
44
+ ### Data Fields
45
+
46
+ - `category`: text string containing the type of instruction.
47
+ - `instruction`: text string containing the prompt.
48
+ - `context`: text string containing the information where the response is based on. These are only available for closed QA, information extraction and summarization.
49
+ - `answer`: text string containing the response to the instruction.
50
+
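The JSONL files uploaded in this commit (`train.jsonl`, `validation.jsonl`, `test.jsonl`) can be read with the Python standard library alone; a minimal sketch:

```python
import json

def read_jsonl(path):
    """Yield one record (dict) per non-empty line of a JSONL file."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            if line.strip():  # skip blank lines defensively
                yield json.loads(line)

# Each record carries the four fields described above, e.g.:
record = json.loads(
    '{"category": "closed_qa", "instruction": "¿Por qué...?",'
    ' "context": "Debido al clima...", "response": "Debido al clima."}'
)
print(sorted(record))  # ['category', 'context', 'instruction', 'response']
```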

### Data Splits

We do not provide canonical splits for Mentor_ES other than the categories used for generating the dataset.

| Category               | Number of instructions |
|------------------------|------------------------|
| Open_QA                | 2500                   |
| General_QA             | 1500                   |
| Classification         | 1450                   |
| Closed_QA              | 1250                   |
| Brainstorming          | 1200                   |
| Information_extraction | 1000                   |
| Summarization          | 800                    |
| Creative_writing       | 475                    |
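Since no canonical splits are provided, users can derive their own. A minimal sketch (the helper name is ours, not part of the dataset) that holds out a fixed fraction of each category, so the split preserves the category distribution above:

```python
import random
from collections import defaultdict

def stratified_split(records, val_fraction=0.1, seed=42):
    """Split records into (train, validation), stratified by `category`."""
    by_cat = defaultdict(list)
    for rec in records:
        by_cat[rec["category"]].append(rec)
    rng = random.Random(seed)  # fixed seed for reproducibility
    train, validation = [], []
    for cat, recs in by_cat.items():
        rng.shuffle(recs)
        n_val = max(1, int(len(recs) * val_fraction))
        validation.extend(recs[:n_val])
        train.extend(recs[n_val:])
    return train, validation

# Toy usage with synthetic records:
records = [{"category": c, "id": i}
           for c in ("open_qa", "closed_qa") for i in range(10)]
train, val = stratified_split(records)
print(len(train), len(val))  # 18 2
```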

## Dataset Creation

### Curation Rationale

Mentor_ES is an open-source dataset of 10,175 records created to enable large language models to exhibit conversational interactivity. Annotators were asked to create prompt-response pairs in each of eight different instruction categories, including the seven described in the InstructGPT paper, as well as an open-ended free-form category (General QA). Annotators were allowed to use information from any source on the web to gather text fragments for the `context` field in closed QA, information extraction and summarization, and were explicitly instructed to rephrase any response that came directly from the web. They were also asked to distribute the questions evenly across the topics included in the [topic list file](https://huggingface.co/datasets/projecte-aina/MENTOR_ES/blob/main/topic_list.csv). Examples of each behavior were provided to motivate the types of questions and instructions appropriate for each category.

### Source Data

- **Human-generated data**: The annotators were asked to create prompt-response pairs in each of eight different instruction categories.
- **Web**: For instruction categories that require a reference text (closed QA, information extraction and summarization), contributors selected passages from any website. No guidance was given to annotators as to how to select the target passages. If any response was taken from the web, it had to be rephrased.

#### Initial Data Collection and Normalization

To create the dataset, annotators were given a brief description of the annotation task, as well as separate format specifications for prompts and responses. Examples were also provided for each task.

The guidelines were concise by design to encourage a high rate of task completion and freedom of writing. However, care was taken to ensure that the categories were clear and that their boundaries did not overlap. For example, closed QA was formulated to include questions focused on the 5W interrogative pronouns: Who (quién), What (qué), When (cuándo), Where (dónde) and Why (por qué). Information extraction could be confused with summarization or closed QA, so its prompts had to include a clear order to extract some kind of information from the given reference text.

#### Who are the source language producers?

The data was generated entirely by native Spanish annotators. Text obtained from the web for the `context` field was kept as is, while the `response` field was rewritten.

### Annotations

The annotation guidelines for each of the categories are as follows:

- **Closed QA** (closed_qa): Questions that can only be answered from a reference text. The annotators must provide a text from any web page and ask a question whose answer is found in the text.
- **Open QA** (open_qa): Questions of common culture that can be answered without consulting any source or with a simple search on the Internet.
- **General QA** (general_qa): Questions that are very general and do not necessarily have to be objective. In fact, it is desirable that they be as subjective as possible.
- **Classification** (classification): Questions that serve to obtain classifications or categorizations of a list of items into the different categories to which they may belong.
- **Information Extraction** (inf_ext): Questions used to extract a list of data or information from a reference text.
- **Summarization** (summarization): Questions asking for a summary or synthesis of a text provided by the annotator.
- **Creative Writing** (creative_wr): Questions formulated as an order to obtain an original text (a story, a letter, a song, an article, a poem, a narrative, etc.).
- **Brainstorming** (brainstorming): Questions to obtain a list of ideas or possible options for an issue.

#### Annotation process

The annotators were divided into two groups: one group collected the reference text and asked a question, and the other group provided a response to the instruction.

#### Who are the annotators?

While labels and text were produced by humans, no further information about the people or systems involved was provided when creating this resource.

### Personal and Sensitive Information

This dataset contains public information (e.g., some information from the web). To our knowledge, it contains no personal identifiers or sensitive information about private individuals.

## Considerations for Using the Data

### Social Impact of Dataset

[N/A]

### Discussion of Biases

[N/A]

### Other Known Limitations

- The contents of this dataset may reflect the bias, factual errors and topical focus found on the web.
- The topics and writing style of the data may reflect the demographic makeup of the annotators.


## Additional Information

### Dataset Curators

Language Technologies Unit (langtech@bsc.es) at the Barcelona Supercomputing Center (BSC).

This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/) within the framework of [Projecte AINA](https://projecteaina.cat/tech/).

### Licensing Information

This dataset can be used for any purpose, whether academic or commercial, under the terms of the [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/) license. Give appropriate credit, provide a link to the license, and indicate if changes were made.

### Citation Information

[N/A]

### Contributions

[N/A]
test.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
topic_list.csv ADDED
@@ -0,0 +1,43 @@
Antropología,"Preguntas sobre el ser humano, culturas, autores y ciencia antropológica."
Arqueología,"Ciencia arqueológica"
Arquitectura,"Edificación, construcción, arquitectos, monumentos, materiales..."
Artes Escénicas,"Teatro, circo, musicales, ópera (danza excluida)."
Astrología,"Horóscopo y zodiaco de cualquier cultura, espiritualidad, tarot, parapsicología..."
Biología,"Preguntas genéricas de biología general"
Botánica,"Plantas, jardinería, horticultura..."
Bricolaje,"Lampistería, fontanería, carpintería..."
Cine Y Series,"Películas, series, programas de televisión, actuación, dirección, audiovisuales, premios..."
Cocina,"Recetas, nutrición, coctelería, alimentación, ciencias alimenticias..."
Cultura Popular,"Temas de la farándula, prensa rosa, realities, influencers, etc."
Danza,"Baile y danza."
Deportes,"Deporte de cualquier tipo, reglas, material, atletas, competiciones, concursos, pruebas deportivas, premios..."
Derecho,"Preguntas diversas sobre ciencias jurídicas, leyes, etc."
Ecología,"Ecologismo, ecosistemas, reciclaje, sostenibilidad, etc."
Economía,"Ciencias económicas, moneda, etc."
Escultura,"Artes escultóricas, artistas, etc."
Filosofía,"Filosofía, filósofos, cuestiones filosóficas, obras, etc."
Física,"Ciencias físicas."
Folklore Y Mitología,"Cultura, mitos, leyendas, cuentos, etc."
Fotografía,"Aspectos técnicos del mundo audiovisual, artistas, arte fotográfico, colecciones, etc."
Geografía,"Orografía, territorios, países, accidentes geográficos"
Geología,"Mineralogía, geología, etc."
Historia,"Sucesos históricos, personajes, etc."
Informática,"Computación, software, hardware, TIC, ciencias e ingeniería informática, programación, etc."
Lingüística,"Aprendizaje de idiomas, lenguaje en general, aspectos formales de la lengua, traducción..."
Literatura,"Libros, escritura, autores, colecciones (antiguos y presentes)"
Medicina,"Preguntas sobre salud, bienestar, biología humana, enfermedades, medicación, tratamientos, etc."
Meteorología,"Tiempo, fenómenos meteorológicos, clima, etc."
Microbiología,"Bacterias, virus, amebas y otros seres vivos (o no vivos) a nivel micro"
Moda,"Diseño, materiales, diseñadores y diseñadoras, premios, colecciones, modelos,"
Música,"Canciones, grupos, cantantes, compositores, aspectos formales de la música, instrumentos..."
Ocio,"Chistes, hobbies, cosas varias."
Pintura,"Artes pictóricas y artistas, obras, museos de arte, colecciones, etc."
Política,"Políticos, politología, partidos políticos, formas de gobierno, etc."
Psicología,"Conducta, tratamientos, autores, preguntas técnicas de la ciencia psicológica..."
Química,"Ciencias químicas, compuestos químicos, elementos, moléculas, reacciones químicas, etc."
Religión,"Información sobre religión, teología, personajes relevantes para las religiones, festividades religiosas, etc."
Sociología,"Sociología, ciencias sociales, sociedad, etc."
Tecnología,"Tecnología más allá de la informática y la computación: telefonía, comunicación, robótica, automoción..."
Turismo,"Viajes, destinos turísticos, medios de transporte, etc."
Videojuegos,"Juegos, desarrollo de videojuegos, artistas y desarrolladores, etc."
Zoología,"Preguntas de biología orientadas a los animales."
train.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
validation.jsonl ADDED
The diff for this file is too large to render. See raw diff