matthayes committed
Commit 14216d0
Parent: 50cd413

Update README.md

Files changed (1): README.md (+101 -0)
---
license: mit
language:
- en
library_name: transformers
inference: false
---
# dolly-v2-2.8b Model Card

## Summary

Databricks’ `dolly-v2-2.8b` is an instruction-following large language model trained on the Databricks machine learning platform and licensed for commercial use. Based on `pythia-2.8b`, Dolly is trained on ~15k instruction/response fine-tuning records from [`databricks-dolly-15k`](https://github.com/databrickslabs/dolly/tree/master/data), generated by Databricks employees in capability domains from the InstructGPT paper, including brainstorming, classification, closed QA, generation, information extraction, open QA and summarization. `dolly-v2-2.8b` is not a state-of-the-art model, but it does exhibit surprisingly high-quality instruction-following behavior not characteristic of the foundation model on which it is based.

**Owner**: Databricks, Inc.

## Model Overview

`dolly-v2-2.8b` is a 2.8 billion parameter causal language model created by [Databricks](https://databricks.com/), derived from [EleutherAI’s](https://www.eleuther.ai/) [Pythia-2.8b](https://huggingface.co/EleutherAI/pythia-2.8b) and fine-tuned on a [~15K record instruction corpus](https://github.com/databrickslabs/dolly/tree/master/data) generated by Databricks employees and released under a permissive license (CC-BY-SA).

## Usage

To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers` and `accelerate` libraries installed. In a Databricks notebook you could run:

```
%pip install accelerate>=0.12.0 transformers[torch]==4.25.1
```
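
Outside of a notebook, the same dependencies can typically be installed with plain `pip`; quoting the requirement specifiers keeps the shell from interpreting the `>=` and `[...]` syntax:

```
pip install "accelerate>=0.12.0" "transformers[torch]==4.25.1"
```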

The instruction-following pipeline can be loaded using the `pipeline` function as shown below. This loads a custom `InstructionTextGenerationPipeline` found in the model repo [here](https://huggingface.co/databricks/dolly-v2-2.8b/blob/main/instruct_pipeline.py), which is why `trust_remote_code=True` is required. Including `torch_dtype=torch.bfloat16` is generally recommended when this dtype is supported, as it reduces memory usage and does not appear to impact output quality. It is also fine to remove it if there is sufficient memory.

```
import torch
from transformers import pipeline

generate_text = pipeline(model="databricks/dolly-v2-2.8b", torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto")
```
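
If you are unsure whether your GPU supports `bfloat16`, a minimal sketch along these lines (assuming CUDA and a PyTorch build that provides `torch.cuda.is_bf16_supported()`) can choose a dtype before constructing the pipeline:

```
import torch
from transformers import pipeline

# Prefer bfloat16 where the GPU supports it, fall back to float16 on older GPUs,
# and use the default float32 on CPU, per the memory guidance above.
if torch.cuda.is_available():
    dtype = torch.bfloat16 if torch.cuda.is_bf16_supported() else torch.float16
else:
    dtype = torch.float32

generate_text = pipeline(
    model="databricks/dolly-v2-2.8b",
    torch_dtype=dtype,
    trust_remote_code=True,
    device_map="auto",
)
```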

You can then use the pipeline to answer instructions:

```
generate_text("Explain to me the difference between nuclear fission and fusion.")
```
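
To inspect the response text, capture the return value; assuming the custom pipeline follows the usual `transformers` convention of returning a list of dictionaries with a `generated_text` field (worth confirming against `instruct_pipeline.py`), this would print the answer:

```
# Run the instruction and print the generated response.
# The list-of-dicts return format is an assumption noted above.
res = generate_text("Explain to me the difference between nuclear fission and fusion.")
print(res[0]["generated_text"])
```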

Alternatively, if you prefer not to use `trust_remote_code=True`, you can download [instruct_pipeline.py](https://huggingface.co/databricks/dolly-v2-2.8b/blob/main/instruct_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer:

```
from instruct_pipeline import InstructionTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("databricks/dolly-v2-2.8b", padding_side="left")
model = AutoModelForCausalLM.from_pretrained("databricks/dolly-v2-2.8b", device_map="auto")

generate_text = InstructionTextGenerationPipeline(model=model, tokenizer=tokenizer)
```
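
The pipeline constructed this way is used exactly like the `trust_remote_code=True` version above, for example:

```
# Ask a couple of instructions with the locally constructed pipeline.
# The list-of-dicts return format is the same assumption as above.
instructions = [
    "Explain to me the difference between nuclear fission and fusion.",
    "Give me three ideas for a weekend hiking trip.",
]
for instruction in instructions:
    res = generate_text(instruction)
    print(res[0]["generated_text"])
```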

## Known Limitations

### Performance Limitations

**`dolly-v2-2.8b` is not a state-of-the-art generative language model** and, though quantitative benchmarking is ongoing, is not designed to perform competitively with more modern model architectures or models trained on larger pretraining corpora.

The Dolly model family is under active development, so any list of shortcomings is unlikely to be exhaustive, but we include known limitations and misfires here as a means to document and share our preliminary findings with the community. In particular, `dolly-v2-2.8b` struggles with: syntactically complex prompts, programming problems, mathematical operations, factual errors, dates and times, open-ended question answering, hallucination, enumerating lists of a specific length, stylistic mimicry, having a sense of humor, etc. Moreover, we find that `dolly-v2-2.8b` lacks some capabilities that are present in the original model, such as well-formatted letter writing.

### Dataset Limitations

Like all language models, `dolly-v2-2.8b` reflects the content and limitations of its training corpora.

- **The Pile**: Pythia’s pre-training corpus contains content mostly collected from the public internet, and like most web-scale datasets, it contains content many users would find objectionable. As such, the model is likely to reflect these shortcomings, potentially overtly in the case it is explicitly asked to produce objectionable content, and sometimes subtly, as in the case of biased or harmful implicit associations.

- **`databricks-dolly-15k`**: The training data on which `dolly-v2-2.8b` is instruction tuned represents natural language instructions generated by Databricks employees during a period spanning March and April 2023 and includes passages from Wikipedia as reference passages for instruction categories like closed QA and summarization. To our knowledge it does not contain obscenity, intellectual property or personally identifying information about non-public figures, but it may contain typos and factual errors. The dataset may also reflect biases found in Wikipedia. Finally, the dataset likely reflects the interests and semantic choices of Databricks employees, a demographic which is not representative of the global population at large.

Databricks is committed to ongoing research and development efforts to develop helpful, honest and harmless AI technologies that maximize the potential of all individuals and organizations.

### Benchmark Metrics

Below you'll find the benchmark performance of various models on the [EleutherAI LLM Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness); model results are sorted by geometric mean to produce an intelligible ordering. As outlined above, these results demonstrate that `dolly-v2-2.8b` is not state of the art, and in fact underperforms `dolly-v1-6b` in some evaluation benchmarks. We believe this owes to the composition and size of the underlying fine-tuning datasets, but a robust statement as to the sources of these variations requires further study.
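
As a point of reference for how that ordering is produced, here is a minimal sketch of ranking models by the geometric mean of their per-task scores; the model names and numbers are placeholders, not measured results:

```
from statistics import geometric_mean

# Placeholder per-task scores (NOT real benchmark numbers), keyed by model name.
scores = {
    "model-a": [0.61, 0.55, 0.70],
    "model-b": [0.58, 0.60, 0.66],
}

# Rank models by the geometric mean of their task scores, best first.
for name in sorted(scores, key=lambda m: geometric_mean(scores[m]), reverse=True):
    print(name, round(geometric_mean(scores[name]), 4))
```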

TODO benchmarks

# Happy Hacking!