Commit cb3683f (parent: d76b98f) by jacobrenn: Update README.md
Files changed (1): README.md (+80, -1)
tags:
- databricks
- dolly
pretty_name: 'Dataset '
---

# databricks-dolly-15k

**This dataset was not originally created by AI Squared.** This dataset was curated and created by [Databricks](https://databricks.com).

The text below comes from the README in the dataset's original GitHub release (available at https://github.com/databrickslabs/dolly/tree/master/data):

# Summary
`databricks-dolly-15k` is an open source dataset of instruction-following records generated by thousands of Databricks employees in several of the behavioral categories outlined in the [InstructGPT](https://arxiv.org/abs/2203.02155) paper, including brainstorming, classification, closed QA, generation, information extraction, open QA, and summarization.

This dataset can be used for any purpose, whether academic or commercial, under the terms of the [Creative Commons Attribution-ShareAlike 3.0 Unported License](https://creativecommons.org/licenses/by-sa/3.0/legalcode).

Supported Tasks:
- Training LLMs
- Synthetic Data Generation
- Data Augmentation

Languages: English
Version: 1.0

**Owner: Databricks, Inc.**

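The dataset is distributed as a flat file of records, so it can be read with Python's standard library alone. The sketch below is illustrative: the inline sample stands in for the real file, and the column names (`instruction`, `context`, `response`, `category`) are assumptions about the record structure apart from the `context` field, which the card describes explicitly.

```python
import csv
import io

# Minimal sketch of reading a CSV export of the dataset. The inline sample
# below is illustrative, not actual dataset content; for the real file,
# pass an open file handle to csv.DictReader instead of the StringIO.
sample = io.StringIO(
    "instruction,context,response,category\n"
    '"Name three primary colors.","","Red, yellow, and blue.","brainstorming"\n'
)
records = list(csv.DictReader(sample))
print(records[0]["response"])  # Red, yellow, and blue.
```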
# Dataset Overview
`databricks-dolly-15k` is a corpus of more than 15,000 records generated by thousands of Databricks employees to enable large language models to exhibit the magical interactivity of ChatGPT. Databricks employees were invited to create prompt / response pairs in each of eight different instruction categories, including the seven outlined in the InstructGPT paper, as well as an open-ended free-form category. The contributors were instructed to avoid using information from any source on the web with the exception of Wikipedia (for particular subsets of instruction categories), and explicitly instructed to avoid using generative AI in formulating instructions or responses. Examples of each behavior were provided to motivate the types of questions and instructions appropriate to each category.

Halfway through the data generation process, contributors were given the option of answering questions posed by other contributors. They were asked to rephrase the original question and only select questions they could be reasonably expected to answer correctly.

For certain categories contributors were asked to provide reference texts copied from Wikipedia. Reference text (indicated by the `context` field in the actual dataset) may contain bracketed Wikipedia citation numbers (e.g. `[42]`), which we recommend users remove for downstream applications.

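The recommended clean-up of the `context` field can be done with a short regular expression. A minimal sketch, with an illustrative example passage (not actual dataset content):

```python
import re

def strip_citations(text: str) -> str:
    # Remove bracketed Wikipedia citation markers such as [42];
    # ordinary bracketed prose is left untouched.
    return re.sub(r"\[\d+\]", "", text)

context = "Paris is the capital of France.[1] It hosted the Olympics.[42]"
print(strip_citations(context))  # Paris is the capital of France. It hosted the Olympics.
```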
# Intended Uses
While immediately valuable for instruction fine-tuning of large language models, as a corpus of human-generated instruction prompts this dataset also presents a valuable opportunity for synthetic data generation using the methods outlined in the Self-Instruct paper. For example, contributor-generated prompts could be submitted as few-shot examples to a large open language model to generate a corpus of millions of examples of instructions in each of the respective InstructGPT categories.

Likewise, both the instructions and responses present fertile ground for data augmentation. A paraphrasing model might be used to restate each prompt or short response, with the resulting text associated with the respective ground-truth sample. Such an approach might provide a form of regularization on the dataset that could allow for more robust instruction-following behavior in models derived from these synthetic datasets.

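The few-shot use described above can be sketched as a simple prompt-assembly step. The seed records and the Instruction/Response template below are illustrative assumptions, not dataset content or a prescribed format:

```python
# Assemble human-written records into a few-shot prompt for synthetic data
# generation, in the spirit of the Self-Instruct approach mentioned above.
seed_records = [
    {"instruction": "Name three primary colors.",
     "response": "Red, yellow, and blue."},
    {"instruction": "Classify this review as positive or negative: 'Great movie!'",
     "response": "Positive."},
]

def build_few_shot_prompt(records, new_instruction):
    # Render each seed record as an Instruction/Response pair, then append
    # the new instruction with an empty response for the model to complete.
    shots = "\n\n".join(
        f"Instruction: {r['instruction']}\nResponse: {r['response']}"
        for r in records
    )
    return f"{shots}\n\nInstruction: {new_instruction}\nResponse:"

prompt = build_few_shot_prompt(seed_records, "Suggest a name for a new coffee shop.")
```

The resulting string would be sent to a generative model; the completions it returns become candidate synthetic records.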
# Dataset
## Purpose of Collection
As part of our continuing commitment to open source, Databricks developed what is, to the best of our knowledge, the first open source, human-generated instruction corpus specifically designed to enable large language models to exhibit the magical interactivity of ChatGPT. Unlike other datasets that are limited to non-commercial use, this dataset can be used, modified, and extended for any purpose, including academic or commercial applications.

## Sources
- **Human-generated data**: Databricks employees were invited to create prompt / response pairs in each of eight different instruction categories.
- **Wikipedia**: For instruction categories that require an annotator to consult a reference text (information extraction, closed QA, summarization), contributors selected passages from Wikipedia. No guidance was given to annotators as to how to select the target passages.

## Annotator Guidelines
To create a record, employees were given a brief description of the annotation task as well as examples of the types of prompts typical of each annotation task. Guidelines were succinct by design so as to encourage a high task completion rate, possibly at the cost of rigorous compliance to an annotation rubric that concretely and reliably operationalizes the specific task. Caveat emptor.

The annotation guidelines for each of the categories are as follows:

- **Creative Writing**: Write a question or instruction that requires a creative, open-ended written response. The instruction should be reasonable to ask of a person with general world knowledge and should not require searching. In this task, your prompt should give very specific instructions to follow. Constraints, instructions, guidelines, or requirements all work, and the more of them the better.
- **Closed QA**: Write a question or instruction that requires a factually correct response based on a passage of text from Wikipedia. The question can be complex and can involve human-level reasoning capabilities, but should not require special knowledge. To create a question for this task, include both the text of the question as well as the reference text in the form.
- **Open QA**: Write a question that can be answered using general world knowledge or at most a single search. This task asks for opinions and facts about the world at large and does not provide any reference text for consultation.
- **Summarization**: Give a summary of a paragraph from Wikipedia. Please don't ask questions that will require more than 3-5 minutes to answer. To create a question for this task, include both the text of the question as well as the reference text in the form.
- **Information Extraction**: These questions involve reading a paragraph from Wikipedia and extracting information from the passage. Everything required to produce an answer (e.g., a list, keywords, etc.) should be included in the passage. To create a question for this task, include both the text of the question as well as the reference text in the form.
- **Classification**: These prompts contain lists or examples of entities to be classified, e.g., movie reviews, products, etc. In this task the text or list of entities under consideration is contained in the prompt (i.e., there is no reference text). You can choose any categories for classification you like; the more diverse the better.
- **Brainstorming**: Think up lots of examples in response to a question asking to brainstorm ideas.

## Personal or Sensitive Data
This dataset contains public information (e.g., some information from Wikipedia). To our knowledge, it contains no personal identifiers or sensitive information about private individuals.

## Language
American English

# Known Limitations
- Wikipedia is a crowdsourced corpus, and the contents of this dataset may reflect the bias, factual errors, and topical focus found in Wikipedia
- Some annotators may not be native English speakers
- Annotator demographics and subject matter may reflect the makeup of Databricks employees

# License/Attribution
**Copyright (2023) Databricks, Inc.**
This dataset was developed at Databricks (https://www.databricks.com) and its use is subject to the CC BY-SA 3.0 license.

Certain categories of material in the dataset include materials from the following sources, licensed under the CC BY-SA 3.0 license:

Wikipedia (various pages) - https://www.wikipedia.org/
Copyright © Wikipedia editors and contributors.