Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)

dolly-v2-12b - bnb 4bits
- Model creator: https://huggingface.co/databricks/
- Original model: https://huggingface.co/databricks/dolly-v2-12b/

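These weights were produced with bitsandbytes (bnb) 4-bit quantization. As a rough sketch of what loading the model in 4-bit looks like with `transformers` and `bitsandbytes`, the snippet below quantizes the original model on the fly; the `bnb_4bit_*` settings shown are illustrative assumptions, not necessarily the exact parameters used for this upload:

```python
# A minimal sketch: load dolly-v2-12b with bnb 4-bit quantization on the fly.
# The quantization settings are illustrative assumptions, not the uploader's
# exact recipe for this repository.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,  # assumes a GPU with bfloat16 support
)

tokenizer = AutoTokenizer.from_pretrained("databricks/dolly-v2-12b")
model = AutoModelForCausalLM.from_pretrained(
    "databricks/dolly-v2-12b",
    quantization_config=quant_config,
    device_map="auto",  # requires accelerate; spreads layers across available GPUs
)
```
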
Original model description:
---
license: mit
language:
- en
library_name: transformers
inference: false
datasets:
- databricks/databricks-dolly-15k
---
# dolly-v2-12b Model Card
## Summary

Databricks' `dolly-v2-12b` is an instruction-following large language model trained on the Databricks machine learning platform
and licensed for commercial use. Based on `pythia-12b`, Dolly is trained on ~15k instruction/response fine-tuning records
[`databricks-dolly-15k`](https://github.com/databrickslabs/dolly/tree/master/data) generated
by Databricks employees in capability domains from the InstructGPT paper, including brainstorming, classification, closed QA, generation,
information extraction, open QA and summarization. `dolly-v2-12b` is not a state-of-the-art model, but it does exhibit surprisingly
high-quality instruction-following behavior not characteristic of the foundation model on which it is based.

Dolly v2 is also available in these smaller model sizes:

* [dolly-v2-7b](https://huggingface.co/databricks/dolly-v2-7b), a 6.9 billion parameter model based on `pythia-6.9b`
* [dolly-v2-3b](https://huggingface.co/databricks/dolly-v2-3b), a 2.8 billion parameter model based on `pythia-2.8b`

Please refer to the [dolly GitHub repo](https://github.com/databrickslabs/dolly#getting-started-with-response-generation) for tips on
running inference for various GPU configurations.

**Owner**: Databricks, Inc.

## Model Overview
`dolly-v2-12b` is a 12 billion parameter causal language model created by [Databricks](https://databricks.com/) that is derived from
[EleutherAI's](https://www.eleuther.ai/) [Pythia-12b](https://huggingface.co/EleutherAI/pythia-12b) and fine-tuned
on a [~15K record instruction corpus](https://github.com/databrickslabs/dolly/tree/master/data) generated by Databricks employees and released under a permissive license (CC-BY-SA).

## Usage

To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers` and `accelerate` libraries installed.
In a Databricks notebook you could run:

```python
%pip install "accelerate>=0.16.0,<1" "transformers[torch]>=4.28.1,<5" "torch>=1.13.1,<2"
```

The instruction-following pipeline can be loaded using the `pipeline` function as shown below. This loads a custom `InstructionTextGenerationPipeline`
found in the model repo [here](https://huggingface.co/databricks/dolly-v2-3b/blob/main/instruct_pipeline.py), which is why `trust_remote_code=True` is required.
Including `torch_dtype=torch.bfloat16` is generally recommended when that dtype is supported, as it reduces memory usage without appearing to affect output quality.
It is also fine to omit it if there is sufficient memory.

```python
import torch
from transformers import pipeline

generate_text = pipeline(model="databricks/dolly-v2-12b", torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto")
```

You can then use the pipeline to answer instructions:

```python
res = generate_text("Explain to me the difference between nuclear fission and fusion.")
print(res[0]["generated_text"])
```

Alternatively, if you prefer to not use `trust_remote_code=True` you can download [instruct_pipeline.py](https://huggingface.co/databricks/dolly-v2-3b/blob/main/instruct_pipeline.py),
store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer:

```python
import torch
from instruct_pipeline import InstructionTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("databricks/dolly-v2-12b", padding_side="left")
model = AutoModelForCausalLM.from_pretrained("databricks/dolly-v2-12b", device_map="auto", torch_dtype=torch.bfloat16)

generate_text = InstructionTextGenerationPipeline(model=model, tokenizer=tokenizer)
```

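The manually constructed pipeline accepts the same calls as the `pipeline`-loaded version above, so usage is unchanged, for example:

```python
# Same calling convention as the remote-code pipeline shown earlier.
res = generate_text("Explain to me the difference between nuclear fission and fusion.")
print(res[0]["generated_text"])
```
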
### LangChain Usage

To use the pipeline with LangChain, you must set `return_full_text=True`, as LangChain expects the full text to be returned
and the default for the pipeline is to only return the new text.

```python
import torch
from transformers import pipeline

generate_text = pipeline(model="databricks/dolly-v2-12b", torch_dtype=torch.bfloat16,
                         trust_remote_code=True, device_map="auto", return_full_text=True)
```

You can create a prompt that either has only an instruction or has an instruction with context:

```python
from langchain import PromptTemplate, LLMChain
from langchain.llms import HuggingFacePipeline

# template for an instruction with no input
prompt = PromptTemplate(
    input_variables=["instruction"],
    template="{instruction}")

# template for an instruction with input
prompt_with_context = PromptTemplate(
    input_variables=["instruction", "context"],
    template="{instruction}\n\nInput:\n{context}")

hf_pipeline = HuggingFacePipeline(pipeline=generate_text)

llm_chain = LLMChain(llm=hf_pipeline, prompt=prompt)
llm_context_chain = LLMChain(llm=hf_pipeline, prompt=prompt_with_context)
```

Example predicting using a simple instruction:

```python
print(llm_chain.predict(instruction="Explain to me the difference between nuclear fission and fusion.").lstrip())
```

Example predicting using an instruction with context:

```python
context = """George Washington (February 22, 1732 - December 14, 1799) was an American military officer, statesman,
and Founding Father who served as the first president of the United States from 1789 to 1797."""

print(llm_context_chain.predict(instruction="When was George Washington president?", context=context).lstrip())
```


## Known Limitations

### Performance Limitations
**`dolly-v2-12b` is not a state-of-the-art generative language model** and, though quantitative benchmarking is ongoing, is not designed to perform
competitively with more modern model architectures or models subject to larger pretraining corpora.

The Dolly model family is under active development, so any list of shortcomings is unlikely to be exhaustive, but we include known limitations and misfires here as a means to document and share our preliminary findings with the community.
In particular, `dolly-v2-12b` struggles with: syntactically complex prompts, programming problems, mathematical operations, factual errors,
dates and times, open-ended question answering, hallucination, enumerating lists of specific length, stylistic mimicry, having a sense of humor, etc.
Moreover, we find that `dolly-v2-12b` does not have some capabilities, such as well-formatted letter writing, present in the original model.

### Dataset Limitations
Like all language models, `dolly-v2-12b` reflects the content and limitations of its training corpora.

- **The Pile**: Pythia's pre-training corpus contains content mostly collected from the public internet, and like most web-scale datasets,
it contains content many users would find objectionable. As such, the model is likely to reflect these shortcomings, potentially overtly
in the case it is explicitly asked to produce objectionable content, and sometimes subtly, as in the case of biased or harmful implicit
associations.

- **`databricks-dolly-15k`**: The training data on which `dolly-v2-12b` is instruction tuned represents natural language instructions generated
by Databricks employees during a period spanning March and April 2023 and includes passages from Wikipedia as reference passages
for instruction categories like closed QA and summarization. To our knowledge it does not contain obscenity, intellectual property or
personally identifying information about non-public figures, but it may contain typos and factual errors.
The dataset may also reflect biases found in Wikipedia. Finally, the dataset likely reflects
the interests and semantic choices of Databricks employees, a demographic which is not representative of the global population at large.

Databricks is committed to ongoing research and development efforts to develop helpful, honest and harmless AI technologies that
maximize the potential of all individuals and organizations.

### Benchmark Metrics

Below you'll find various models' benchmark performance on the [EleutherAI LLM Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness);
model results are sorted by geometric mean to produce an intelligible ordering. As outlined above, these results demonstrate that `dolly-v2-12b` is not state of the art,
and in fact underperforms `dolly-v1-6b` in some evaluation benchmarks. We believe this owes to the composition and size of the underlying fine-tuning datasets,
but a robust statement as to the sources of these variations requires further study.

| model                   | openbookqa | arc_easy | winogrande | hellaswag | arc_challenge | piqa     | boolq    | gmean    |
| ----------------------- | ---------- | -------- | ---------- | --------- | ------------- | -------- | -------- | -------- |
| EleutherAI/pythia-2.8b  | 0.348      | 0.585859 | 0.589582   | 0.591217  | 0.323379      | 0.73395  | 0.638226 | 0.523431 |
| EleutherAI/pythia-6.9b  | 0.368      | 0.604798 | 0.608524   | 0.631548  | 0.343857      | 0.761153 | 0.6263   | 0.543567 |
| databricks/dolly-v2-3b  | 0.384      | 0.611532 | 0.589582   | 0.650767  | 0.370307      | 0.742655 | 0.575535 | 0.544886 |
| EleutherAI/pythia-12b   | 0.364      | 0.627104 | 0.636148   | 0.668094  | 0.346416      | 0.760065 | 0.673394 | 0.559676 |
| EleutherAI/gpt-j-6B     | 0.382      | 0.621633 | 0.651144   | 0.662617  | 0.363481      | 0.761153 | 0.655963 | 0.565936 |
| databricks/dolly-v2-12b | 0.408      | 0.63931  | 0.616417   | 0.707927  | 0.388225      | 0.757889 | 0.568196 | 0.56781  |
| databricks/dolly-v2-7b  | 0.392      | 0.633838 | 0.607735   | 0.686517  | 0.406997      | 0.750816 | 0.644037 | 0.573487 |
| databricks/dolly-v1-6b  | 0.41       | 0.62963  | 0.643252   | 0.676758  | 0.384812      | 0.773667 | 0.687768 | 0.583431 |
| EleutherAI/gpt-neox-20b | 0.402      | 0.683923 | 0.656669   | 0.7142    | 0.408703      | 0.784004 | 0.695413 | 0.602236 |

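For reference, the `gmean` column is the geometric mean of the seven per-task scores, which is what the rows above are sorted by. A minimal sketch of the computation, using the `dolly-v2-12b` row copied from the table:

```python
import math

# Per-task scores for databricks/dolly-v2-12b, copied from the table above.
scores = [0.408, 0.63931, 0.616417, 0.707927, 0.388225, 0.757889, 0.568196]

# Geometric mean: the n-th root of the product of the n scores.
gmean = math.prod(scores) ** (1 / len(scores))
print(round(gmean, 6))  # ~0.56781, matching the gmean column
```
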
# Citation

```
@online{DatabricksBlog2023DollyV2,
    author    = {Mike Conover and Matt Hayes and Ankit Mathur and Jianwei Xie and Jun Wan and Sam Shah and Ali Ghodsi and Patrick Wendell and Matei Zaharia and Reynold Xin},
    title     = {Free Dolly: Introducing the World's First Truly Open Instruction-Tuned LLM},
    year      = {2023},
    url       = {https://www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-viable-instruction-tuned-llm},
    urldate   = {2023-06-30}
}
```

# Happy Hacking!