mike-conover-db committed
Commit: 63b9554
Parent: 631547e

Update README.md

Files changed (1): README.md +13 -25
README.md CHANGED
@@ -11,28 +11,24 @@ inference: false
  ## Summary

  Databricks’ `dolly-v2-12b`, an instruction-following large language model trained on the Databricks machine learning platform
- that is licensed for commercial use. based on `pythia-12b`, Dolly is trained on ~15k instruction/response fine tuning records generated
+ that is licensed for commercial use. Based on `pythia-12b`, Dolly is trained on ~15k instruction/response fine-tuning records
+ [`databricks-dolly-15k`](https://huggingface.co/datasets/databricks/databricks-dolly-15k) generated
  by Databricks employees in capability domains from the InstructGPT paper, including brainstorming, classification, closed QA, generation,
  information extraction, open QA and summarization. `dolly-v2-12b` is not a state-of-the-art model, but does exhibit surprisingly
  high quality instruction following behavior not characteristic of the foundation model on which it is based.
- We believe this finding is important because it demonstrates that the ability to create powerful artificial intelligence technologies is vastly more accessible than previously realized.

- Databricks is committed to ensuring that every organization and individual benefits from the transformative power of artificial intelligence. The Dolly model family represents our first steps along this journey, and we’re excited to share this technology with the world.
-
- **Owner**: Databricks, Inc.
+ **Owner**: Databricks, Inc.

  ## Model Overview
  `dolly-v2-12b` is a 12 billion parameter causal language model created by [Databricks](https://databricks.com/) that is derived from
  [EleutherAI’s](https://www.eleuther.ai/) [Pythia-12b](https://huggingface.co/EleutherAI/pythia-12b) and fine-tuned
  on a ~15K record instruction corpus generated by Databricks employees and released under a permissive license (CC-BY-SA).

- [MATT.HAYES]
- The [original version](https://www.databricks.com/blog/2023/03/24/hello-dolly-democratizing-magic-chatgpt-open-models.html) of was Dolly was trained using [deepspeed](https://github.com/microsoft/DeepSpeed) [ZeRO 3](https://github.com/microsoft/DeepSpeed/blob/master/docs/code-docs/source/zero3.rst)
- on the [Databricks Machine Learning Platform](https://www.databricks.com/product/machine-learning) in just 30 minutes (1 epoch) using a single
- [NDasrA100_v4](https://learn.microsoft.com/en-us/azure/virtual-machines/nda100-v4-series) machine with 8x A100 40GB GPUs.
- The most recent `dolly-v2-12b` checkpoint was trained for 10 epochs on the same hardware.
-
  ## Known Limitations
+ Databricks is committed to ongoing research and development efforts to develop helpful, honest and low risk AI technologies that
+ maximize the potential of all individuals and organizations.
+
+ ### Performance Limitations
  **`dolly-v2-12b` is not a state-of-the-art generative language model** and, though quantitative benchmarking is ongoing, is not designed to perform
  competitively with more modern model architectures or models subject to larger pretraining corpuses.

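The Model Overview above describes a standard causal language model hosted on the Hugging Face Hub. A minimal usage sketch, assuming the `transformers` and `accelerate` libraries and a GPU with enough memory for a 12B-parameter model in bfloat16:

```python
# Minimal sketch: load dolly-v2-12b for instruction following.
import torch
from transformers import pipeline

generate_text = pipeline(
    model="databricks/dolly-v2-12b",
    torch_dtype=torch.bfloat16,  # halves memory relative to float32
    trust_remote_code=True,      # the repo ships a custom instruction-following pipeline
    device_map="auto",           # let accelerate spread layers across available devices
)

res = generate_text("Explain to me the difference between nuclear fission and fusion.")
print(res[0]["generated_text"])
```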
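The deleted training note above references DeepSpeed ZeRO 3. As a hedged illustration only (the actual Dolly training scripts and configuration are not part of this diff, and `train_dolly.py` is a placeholder name), a ZeRO stage-3 run typically pairs a JSON config with the `deepspeed` launcher:

```python
# Hypothetical ZeRO stage-3 configuration in the style the deleted note references;
# not the actual config used to train dolly-v2-12b.
import json

ds_config = {
    "bf16": {"enabled": True},
    "zero_optimization": {"stage": 3},  # partition optimizer state, gradients, and parameters
    "train_micro_batch_size_per_gpu": 1,
    "gradient_accumulation_steps": 1,
}

with open("ds_config.json", "w") as f:
    json.dump(ds_config, f, indent=2)

# Launched across the 8x A100 machine described above with something like:
#   deepspeed --num_gpus=8 train_dolly.py --deepspeed ds_config.json
```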
@@ -40,7 +36,7 @@ The Dolly model family is under active development, and so any list of shortcomings
  dates and times, open-ended question answering, hallucination, enumerating lists of specific length, stylistic mimicry, having a sense of humor, etc.
  Moreover, we find that `dolly-v2-12b` does not have some capabilities, such as well-formatted letter writing, present in the original model.

- ## Training Data, Bias & Objectionable Content
+ ### Dataset Limitations
  Like all language models, `dolly-v2-12b` reflects the content and limitations of its training corpuses.

  - **The Pile**: GPT-J’s pre-training corpus contains content mostly collected from the public internet, and like most web-scale datasets,
@@ -48,29 +44,21 @@ it contains content many users would find objectionable. As such, the model is likely to reflect these shortcomings, potentially overtly,
  in the case it is explicitly asked to produce objectionable content, and sometimes subtly, as in the case of biased or harmful implicit
  associations.

- [LEGAL TO REVIEW]
  - **`databricks-dolly-15k`**: The training data on which `dolly-v2-12b` is instruction tuned represents natural language instructions generated
- by Databricks employees during a 10 day period spanning March and April 2023 and includes passages from Wikipedia as references passages
+ by Databricks employees during a period spanning March and April 2023 and includes passages from Wikipedia as reference passages
  for instruction categories like closed QA and summarization. To our knowledge it does not contain obscenity, intellectual property or
  personally identifying information about non-public figures, but it may contain typos and factual errors.
+ The dataset may also reflect biases found in Wikipedia, such as the tendency towards factual errors. Finally, the dataset likely reflects
+ the interests and semantic choices of Databricks employees, a demographic which is not representative of the global population at large.

  Databricks is committed to ongoing research and development efforts to develop helpful, honest and harmless AI technologies that
  maximize the potential of all individuals and organizations.

- ## Intended Uses
- [LEGAL]
-
- [MIKE.CONOVER]
-
- ## Usage
-
- [MATT.HAYES]
-
  ### Benchmark Metrics

  Below you'll find various models' benchmark performance on the [EleutherAI LLM Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness);
  model results are sorted by geometric mean to produce an intelligible ordering. These results demonstrate that `dolly-v2-12b` is not state of the art,
- and in fact underperforms `dolly-v2-12b` in some evaluation benchmarks. We believe this owes to the composition and size of the underlying fine tuning datasets,
+ and in fact underperforms `dolly-v1-6b` in some evaluation benchmarks. We believe this owes to the composition and size of the underlying fine-tuning datasets,
  but a robust statement as to the sources of these variations requires further study.

  ```
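The ordering described above sorts models by the geometric mean of their per-task scores. A minimal sketch of that computation (the scores below are placeholders, not the README's actual harness results):

```python
# Sort models by the geometric mean of benchmark accuracies (placeholder values).
from math import prod

results = {
    "dolly-v2-12b": [0.64, 0.71, 0.39],
    "dolly-v1-6b":  [0.61, 0.68, 0.41],
    "pythia-12b":   [0.62, 0.70, 0.38],
}

def geometric_mean(scores):
    return prod(scores) ** (1.0 / len(scores))

for model, scores in sorted(results.items(), key=lambda kv: geometric_mean(kv[1]), reverse=True):
    print(f"{model}: {geometric_mean(scores):.4f}")
```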