---
dataset_info:
- config_name: annotated
  features:
  - name: id
    dtype: int64
  - name: category
    dtype: string
  - name: instruction
    dtype: string
  - name: response
    dtype: string
  - name: context
    dtype: string
  - name: labels
    dtype: string
  splits:
  - name: train
    num_bytes: 11901412
    num_examples: 15015
  download_size: 7553519
  dataset_size: 11901412
- config_name: filtered
  features:
  - name: id
    dtype: int64
  - name: category
    dtype: string
  - name: instruction
    dtype: string
  - name: response
    dtype: string
  - name: context
    dtype: float64
  - name: labels
    dtype: float64
  splits:
  - name: train
    num_bytes: 4398990
    num_examples: 10157
  download_size: 2749289
  dataset_size: 4398990
configs:
- config_name: annotated
  data_files:
  - split: train
    path: annotated/train-*
- config_name: filtered
  data_files:
  - split: train
    path: filtered/train-*
---
# BSC Dolly 15k EN
A reviewed version of the Argilla Dolly v2 English dataset, originally created by Databricks.

We provide two subsets: "annotated", in which some instances are labelled with potential problems, and "filtered", which contains only the instances free of the issues we observed.
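Both subsets can be loaded with the `datasets` library. The repository id below is a placeholder (this card does not state the Hub id); replace it with the actual one.

```python
from datasets import load_dataset

# NOTE: the repository id is a placeholder; substitute the actual Hub id.
annotated = load_dataset("BSC-LT/dolly-15k-en", "annotated", split="train")
filtered = load_dataset("BSC-LT/dolly-15k-en", "filtered", split="train")

print(annotated.num_rows)  # 15015 rows, all columns typed as strings/ints
print(filtered.num_rows)   # 10157 rows after removing the flagged instances
```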
## Annotation process
While analysing the Argilla Dolly v2 English version, we observed the following:
Task classification (a tally is sketched after this list):
- Three categories come with context: 'Closed QA', 'Information Extraction' and 'Summarization'; the rest have none.
- Context is not always necessary, and some instructions already contain their own context.
- Some categories are incorrect: the intention does not always correspond to the assigned category.
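One quick way to verify which categories carry context is to tally category/context pairs. A minimal sketch, assuming `annotated` from the loading example above (the exact category strings should be checked against the values in the `category` column):

```python
from collections import Counter

# Tally (category, has_context) pairs; a hedged inspection sketch.
counts = Counter(
    (ex["category"], bool(ex["context"] and str(ex["context"]).strip()))
    for ex in annotated
)
for (category, has_context), n in sorted(counts.items()):
    print(f"{category:25s} context={has_context} n={n}")
```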
Confusion between "Summarization" and "Open Generative QA" / "Information Extraction" tasks:
- Some tasks categorized as "Summarization" actually have the intent of "Open Generative QA" or "Information Extraction"; because they depend on the context, their answers are longer.
To note:
- 15,014 examples, about half of them "QA" tasks in various formats.
- 70% have no context; when present, the context is taken from the opening section of a Wikipedia article (a quick check is sketched after this list).
- Many answers are also taken from Wikipedia.
- Text extracted from Wikipedia could be cleaned further, e.g. in the handling of acronyms.
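A one-line check of the context coverage noted above, under the same assumptions as the earlier sketches (an empty string or null is taken to mean "no context"):

```python
# Share of examples without context (hedged sketch).
no_context = sum(1 for ex in annotated if not (ex["context"] and str(ex["context"]).strip()))
print(f"{no_context / annotated.num_rows:.1%} of examples have no context")
```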
Errors in the dataset:
- Some summaries are longer than the original text (flagged in the sketch after this list).
- Some contexts in "Information Extraction" do not explicitly contain the information needed to answer the question.
- Many questions are repeated; they are kept because the answer differs in each case.
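The first error can be surfaced with a simple filter. A hedged sketch, assuming `annotated` from the loading example, the lowercase category string "summarization", and character length as the measure (the original review may have measured differently):

```python
# Summarization instances whose response is longer than the context
# they are meant to summarize (character count is an assumed proxy).
long_summaries = annotated.filter(
    lambda ex: ex["category"] == "summarization"
    and bool(ex["context"])
    and len(ex["response"]) > len(str(ex["context"]))
)
print(long_summaries.num_rows)
```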
Based on these observations, we performed the following processing.

Processed the "context" column to (a regex sketch follows this list):
- Remove spellings, citations, and unit conversions inside (parentheses) and [brackets].
- Remove source webpage links.
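A minimal sketch of this kind of cleaning, assuming simple, non-nested parentheses and brackets; the exact patterns used during curation are not documented here:

```python
import re

def clean_context(text: str) -> str:
    """Hedged sketch of the cleaning step: strips (...) and [...] spans
    and bare URLs. The actual curation patterns may differ."""
    text = re.sub(r"\([^()]*\)", "", text)       # parenthetical asides
    text = re.sub(r"\[[^\[\]]*\]", "", text)     # bracketed citations
    text = re.sub(r"https?://\S+", "", text)     # source webpage links
    return re.sub(r"\s{2,}", " ", text).strip()  # tidy leftover spacing

print(clean_context("Paris (French: [paʁi]) is the capital of France [1]."))
# -> "Paris is the capital of France ."
```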
Removed the following instances (the duplicate-removal steps are sketched below):
- Summarization instances where the intent is clear and the response is longer than the context (63)
- Instances where the information is not explicitly mentioned in the context (3)
- Instances with webpage links in the response or instruction (29)
- Exact (instruction/context/response) duplicates (14)
- Instruction/context duplicates (9)
- Instances where the instruction is most similar to the response (6)
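The two duplicate steps can be expressed with pandas. A minimal sketch over the annotated subset (the actual curation code is not published here, so this only illustrates the idea):

```python
import pandas as pd

df = annotated.to_pandas()  # `annotated` from the loading example

# Drop exact (instruction/context/response) duplicates first, then rows
# that repeat an instruction/context pair with a different response.
df = df.drop_duplicates(subset=["instruction", "context", "response"])
df = df.drop_duplicates(subset=["instruction", "context"])
```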
Changes:
- Some instances in Summarization / Information Extraction / Closed QA lack context after Argilla's curation process. Since they no longer have context yet ask about specifics, these instances were moved to General QA (86). A sketch of this reassignment follows.
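A hedged sketch of the reassignment, assuming the lowercase category strings used by the original Dolly data (verify against the actual values in the `category` column):

```python
# Move context-dependent categories with an empty context to general_qa.
NEEDS_CONTEXT = {"summarization", "information_extraction", "closed_qa"}

def fix_category(ex):
    if ex["category"] in NEEDS_CONTEXT and not (ex["context"] and str(ex["context"]).strip()):
        ex["category"] = "general_qa"  # assumed label; check the data
    return ex

reassigned = annotated.map(fix_category)
```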