---
license: cc-by-4.0
task_categories:
  - question-answering
  - text-generation
  - summarization
language:
  - ca
pretty_name: Mentor_CA
size_categories:
  - 1K<n<10K
---

Dataset Description

Dataset Summary

Mentor_CA is an open-source dataset of 10,175 instructions in Catalan, machine-translated from the original Spanish dataset Mentor_ES and organized into several of the behavioral categories outlined in the InstructGPT paper, including closed QA, open QA, general QA, classification, information extraction, summarization, creative writing and brainstorming.

Supported Tasks and Leaderboards

Useful for instruction fine-tuning of large language models for downstream tasks.
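As a minimal sketch of that use, the snippet below renders one record into a plain-text training prompt. It assumes the Dolly-style field names described under Data Fields below; the prompt template itself is illustrative, not part of the dataset.

```python
# Minimal sketch: turn one Mentor_CA record into a fine-tuning prompt string.
# Field names (instruction, context, response) follow the Dolly-style schema
# this card describes; the section headers in the template are assumptions.

def build_prompt(record: dict) -> str:
    """Render an instruction-following record as a single training string."""
    parts = [f"### Instrucció:\n{record['instruction']}"]
    if record.get("context"):  # context only exists for some categories
        parts.append(f"### Context:\n{record['context']}")
    parts.append(f"### Resposta:\n{record['response']}")
    return "\n\n".join(parts)
```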

Languages

This dataset is in Catalan (ca-ES).

Dataset Structure

Data Instances

The dataset is provided in JSON format, with the same fields as the Databricks Dolly dataset. Each record corresponds to a single instruction-following instance and contains the category, the instruction, a context (if available) and the response.

| category | instruction | context | response |
| --- | --- | --- | --- |
| open_qa | Qui va inventar el nus de corbata més usat del món? | | L'inventor del nus de corbata més usat del món el va inventar Eduard VIII, duc de Windsor. |
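For concreteness, the example record above would look roughly like this when serialized. The exact JSON layout (field order, and whether the context field is an empty string or absent for categories without one) is an assumption based on the Dolly format:

```python
import json

# The example record above, in the Dolly-style layout this card describes.
# Representing a missing context as an empty string is an assumption.
record = {
    "category": "open_qa",
    "instruction": "Qui va inventar el nus de corbata més usat del món?",
    "context": "",
    "response": (
        "L'inventor del nus de corbata més usat del món el va inventar "
        "Eduard VIII, duc de Windsor."
    ),
}

print(json.dumps(record, ensure_ascii=False, indent=2))
```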

Data Fields

  • category: text string containing the type of instruction.
  • instruction: text string containing the prompt.
  • context: text string containing the information on which the response is based. It is only available for closed QA, information extraction and summarization.
  • response: text string containing the response to the instruction.

Data Splits

We do not provide canonical splits for Mentor_CA other than the categories used for generating the dataset.

| Category | Number of instructions |
| --- | --- |
| Open_QA | 2500 |
| General_QA | 1500 |
| Classification | 1450 |
| Closed_QA | 1250 |
| Brainstorming | 1200 |
| Information_extraction | 1000 |
| Summarization | 800 |
| Creative_writing | 475 |
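Since no canonical splits are provided, a subset or split has to be derived by the user. The sketch below does this with the Hugging Face datasets library; the local filename is hypothetical, and the category labels are assumed to match the lowercase codes listed in the Annotations section.

```python
from datasets import load_dataset

# Sketch: load the released JSON locally and derive subsets ourselves,
# since the card defines no canonical train/validation split.
# "mentor_ca.json" is a hypothetical filename.
ds = load_dataset("json", data_files="mentor_ca.json", split="train")

# Per-category subset, e.g. all closed QA records (1250 per the table above).
closed_qa = ds.filter(lambda r: r["category"] == "closed_qa")

# A simple 90/10 random split for fine-tuning experiments.
splits = ds.train_test_split(test_size=0.1, seed=42)
train, test = splits["train"], splits["test"]
print(len(train), len(test))
```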

Dataset Creation

Curation Rationale

Mentor_CA is an open-source dataset of 10,175 records created to enable large language models to exhibit conversational interactivity. Annotators were asked to create prompt-response pairs in each of eight instruction categories: the seven described in the InstructGPT paper, plus an open-ended free-form category (General QA). Annotators were allowed to use information from any source on the web to gather text fragments for the context field in closed QA, information extraction and summarization, and were explicitly instructed to rephrase any response that came directly from the web. They were also asked to distribute the questions evenly across the topics included in the topic list file. Examples of each behavior were provided to motivate the types of questions and instructions appropriate for each category.

Source Data

  • Human-generated data: The annotators were asked to create prompt-response pairs in each of the eight instruction categories.
  • Web: For the instruction categories that require a reference text (closed QA, information extraction and summarization), contributors selected passages from any website. No guidance was given to annotators on how to select the target passages. If any response was taken from the web, it had to be rephrased.

Initial Data Collection and Normalization

To create the dataset, annotators were given a brief description of the annotation task, together with separate format specifications for prompts and responses. Examples were also provided for each task.

The guidelines were concise by design to encourage a high rate of task completion and freedom of writing. However, care was taken to ensure that the categories were clear and that their boundaries did not overlap. For example, closed QA was formulated to include questions focused on the 5W interrogative pronouns: Who (quién), What (qué), When (cuándo), Where (dónde), Why (por qué). Because information extraction could be confused with summarization or closed QA, its prompt had to include a clear instruction to extract some kind of information from the given reference text.

Who are the source language producers?

The data was generated entirely by annotators who are native Spanish speakers. Text obtained from the web for the context field was kept as is, while the response field was rewritten.

Annotations

The annotation guidelines for each of the categories are as follows:

  • Closed QA (closed_qa): Questions that can only be answered from a reference text. The annotators had to provide a text from any web page and ask a question whose answer is found in the text.
  • Open QA (open_qa): General-knowledge questions that can be answered without consulting any source, or with a simple Internet search.
  • General QA (general_qa): Very general questions that do not have to be objective; in fact, it is desirable that they be as subjective as possible.
  • Classification (classification): Questions that ask for a classification or categorization of a list of items into the categories to which they may belong.
  • Information Extraction (inf_ext): Questions used to extract a list of data or pieces of information from a reference text.
  • Summarization (summarization): Questions asking for a summary or synthesis of a text provided by the annotator.
  • Creative Writing (creative_wr): Questions phrased as commands in order to obtain an original text (a story, a letter, a song, an article, a poem, a narrative, etc.).
  • Brainstorming (brainstorming): Questions to obtain a list of ideas or possible options for an issue.

Annotation process

The annotators were divided into two groups: one group collected the reference texts and wrote the instructions, and the other provided the responses.

Who are the annotators?

While labels and text were produced by humans, no further information about the people or systems involved was provided when creating this resource.

Personal and Sensitive Information

This dataset contains public information (e.g., some information from the web). To our knowledge, it includes no personal identifiers or sensitive information about private individuals.

Considerations for Using the Data

Social Impact of Dataset

[N/A]

Discussion of Biases

[N/A]

Other Known Limitations

  • The contents of this dataset may reflect the bias, factual errors and topical focus found on the web.
  • The demographics and subject-matter interests of the annotator pool may be reflected in the data.

Additional Information

Dataset Curators

Language Technologies Unit (langtech@bsc.es) at the Barcelona Supercomputing Center (BSC).

This work has been promoted and financed by the Generalitat de Catalunya through the Aina project.

Licensing Information

This dataset can be used for any purpose, whether academic or commercial, under the terms of the CC BY 4.0 license: give appropriate credit, provide a link to the license, and indicate if changes were made.

Citation Information

[N/A]

Contributions

[N/A]