srowen committed on
Commit
43925aa
1 Parent(s): d0aa7ea

Trivial: standardize single curly quotes (don't ask)

Files changed (1): README.md (+3 -3)
README.md CHANGED
@@ -10,7 +10,7 @@ datasets:
  # dolly-v2-12b Model Card
  ## Summary
 
- Databricks `dolly-v2-12b`, an instruction-following large language model trained on the Databricks machine learning platform
+ Databricks' `dolly-v2-12b`, an instruction-following large language model trained on the Databricks machine learning platform
  that is licensed for commercial use. Based on `pythia-12b`, Dolly is trained on ~15k instruction/response fine tuning records
  [`databricks-dolly-15k`](https://github.com/databrickslabs/dolly/tree/master/data) generated
  by Databricks employees in capability domains from the InstructGPT paper, including brainstorming, classification, closed QA, generation,
@@ -29,7 +29,7 @@ running inference for various GPU configurations.
 
  ## Model Overview
  `dolly-v2-12b` is a 12 billion parameter causal language model created by [Databricks](https://databricks.com/) that is derived from
- [EleutherAIs](https://www.eleuther.ai/) [Pythia-12b](https://huggingface.co/EleutherAI/pythia-12b) and fine-tuned
+ [EleutherAI's](https://www.eleuther.ai/) [Pythia-12b](https://huggingface.co/EleutherAI/pythia-12b) and fine-tuned
  on a [~15K record instruction corpus](https://github.com/databrickslabs/dolly/tree/master/data) generated by Databricks employees and released under a permissive license (CC-BY-SA)
 
  ## Usage
@@ -139,7 +139,7 @@ Moreover, we find that `dolly-v2-12b` does not have some capabilities, such as w
  ### Dataset Limitations
  Like all language models, `dolly-v2-12b` reflects the content and limitations of its training corpuses.
 
- - **The Pile**: GPT-Js pre-training corpus contains content mostly collected from the public internet, and like most web-scale datasets,
+ - **The Pile**: GPT-J's pre-training corpus contains content mostly collected from the public internet, and like most web-scale datasets,
  it contains content many users would find objectionable. As such, the model is likely to reflect these shortcomings, potentially overtly
  in the case it is explicitly asked to produce objectionable content, and sometimes subtly, as in the case of biased or harmful implicit
  associations.
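
The hunks above quote the card's Summary and Model Overview; the `## Usage` section they reference lies outside this diff. As a minimal, hedged sketch (assuming the standard Hugging Face `transformers` pipeline API, with `trust_remote_code=True` to enable the repository's custom instruction-following pipeline; the dtype and device settings are illustrative, not taken from this commit), loading the model might look like:

```python
# Minimal sketch (assumed usage, not part of this diff): load dolly-v2-12b
# through the Hugging Face transformers pipeline API.
import torch
from transformers import pipeline

generate_text = pipeline(
    model="databricks/dolly-v2-12b",
    torch_dtype=torch.bfloat16,   # assumption: half precision to reduce GPU memory use
    trust_remote_code=True,       # assumption: allows the repo's custom instruction pipeline to load
    device_map="auto",            # assumption: let accelerate place the 12B model across available devices
)

print(generate_text("Explain the difference between nuclear fission and fusion."))
```

`bfloat16` and `device_map="auto"` are common memory-saving choices for a 12-billion-parameter model on a single large GPU; adjust them to your hardware.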