---
language: en
inference: false
tags:
- text-generation
- opt
license: other
commercial: false
---

## OPT-125m-4bit-128g

OPT 125M, quantised to 4-bit using AutoGPTQ, with group size 128 and no act-order.
Good for testing AutoGPTQ with a small model download.
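
A minimal loading sketch with AutoGPTQ is shown below. The repo id is a placeholder (point it at wherever these quantised weights are hosted, or a local directory), and the exact keyword arguments may vary with your AutoGPTQ version:

```python
# Minimal AutoGPTQ loading sketch. The repo id below is a placeholder,
# not confirmed by this card.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

model_id = "Technotech/OPT-125m-4bit-128g"  # placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoGPTQForCausalLM.from_quantized(model_id, device="cuda:0")

inputs = tokenizer("Hello, I am conscious and", return_tensors="pt").to("cuda:0")
output = model.generate(**inputs, max_new_tokens=20, do_sample=True)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```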

# Original Model Card

# OPT : Open Pre-trained Transformer Language Models

OPT was first introduced in [Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) and first released in [metaseq's repository](https://github.com/facebookresearch/metaseq) on May 3rd, 2022, by Meta AI.

**Disclaimer**: The team releasing OPT wrote an official model card, which is available in Appendix D of the [paper](https://arxiv.org/pdf/2205.01068.pdf).
Content from **this** model card has been written by the Hugging Face team.

## Intro

To quote the first two paragraphs of the [official paper](https://arxiv.org/abs/2205.01068):

> Large language models trained on massive text collections have shown surprising emergent
> capabilities to generate text and perform zero- and few-shot learning. While in some cases the public
> can interact with these models through paid APIs, full model access is currently limited to only a
> few highly resourced labs. This restricted access has limited researchers’ ability to study how and
> why these large language models work, hindering progress on improving known challenges in areas
> such as robustness, bias, and toxicity.

> We present Open Pretrained Transformers (OPT), a suite of decoder-only pre-trained transformers ranging from 125M
> to 175B parameters, which we aim to fully and responsibly share with interested researchers. We train the OPT models to roughly match
> the performance and sizes of the GPT-3 class of models, while also applying the latest best practices in data
> collection and efficient training. Our aim in developing this suite of OPT models is to enable reproducible and responsible research at scale, and
> to bring more voices to the table in studying the impact of these LLMs. Definitions of risk, harm, bias, and toxicity, etc., should be articulated by the
> collective research community as a whole, which is only possible when models are available for study.

## Model description

OPT was predominantly pretrained with English text, but a small amount of non-English data is still present within the training corpus via CommonCrawl. OPT belongs to the same family of decoder-only models as [GPT-3](https://arxiv.org/abs/2005.14165), and as such was pretrained using the self-supervised causal language modeling (CLM) objective.

For evaluation, OPT follows [GPT-3](https://arxiv.org/abs/2005.14165) by using their prompts and overall experimental setup. For more details, please read
the [official paper](https://arxiv.org/abs/2205.01068).

## Intended uses & limitations

The pretrained-only model can be used for prompting, for evaluation of downstream tasks, and for text generation.
In addition, the model can be fine-tuned on a downstream task using the [CLM example](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling), as sketched below. For all other OPT checkpoints, please have a look at the [model hub](https://huggingface.co/models?filter=opt).
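
As a quick illustration, here is a minimal fine-tuning sketch with the `Trainer` API; the dataset and hyperparameters are illustrative placeholders, not the official recipe:

```python
# Minimal causal-LM fine-tuning sketch; dataset and hyperparameters
# are illustrative placeholders, not the official recipe.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-125m")
model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")

# Any small text dataset works for a smoke test.
dataset = load_dataset("wikitext", "wikitext-2-raw-v1", split="train[:1%]")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])
tokenized = tokenized.filter(lambda x: len(x["input_ids"]) > 0)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="opt-125m-finetuned",
                           per_device_train_batch_size=4,
                           num_train_epochs=1),
    train_dataset=tokenized,
    # mlm=False gives the causal-LM collator (labels = shifted inputs).
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```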

### How to use

You can use this model directly with a pipeline for text generation.

```python
>>> from transformers import pipeline

>>> generator = pipeline('text-generation', model="facebook/opt-125m")
>>> generator("Hello, I am conscious and")
[{'generated_text': 'Hello, I am conscious and aware of the fact that I am a woman. I am aware of'}]
```

By default, generation is deterministic. To use top-k sampling, please set `do_sample` to `True`.

```python
>>> from transformers import pipeline, set_seed

>>> set_seed(32)
>>> generator = pipeline('text-generation', model="facebook/opt-125m", do_sample=True)
>>> generator("Hello, I am conscious and")
[{'generated_text': 'Hello, I am conscious and active member of the Khaosan Group, a private, self'}]
```

### Limitations and bias

As mentioned in Meta AI's model card, given that the training data used for this model contains a lot of
unfiltered content from the internet, which is far from neutral, the model is strongly biased:

> Like other large language models for which the diversity (or lack thereof) of training
> data induces downstream impact on the quality of our model, OPT-175B has limitations in terms
> of bias and safety. OPT-175B can also have quality issues in terms of generation diversity and
> hallucination. In general, OPT-175B is not immune from the plethora of issues that plague modern
> large language models.

This bias will also affect all fine-tuned versions of this model.

## Training data

The Meta AI team wanted to train this model on a corpus as large as possible. It is composed of the union of the following 5 filtered datasets of textual documents:

- BookCorpus, which consists of more than 10K unpublished books,
- CC-Stories, which contains a subset of CommonCrawl data filtered to match the
story-like style of Winograd schemas,
- The Pile, from which *Pile-CC, OpenWebText2, USPTO, Project Gutenberg, OpenSubtitles, Wikipedia, DM Mathematics and HackerNews* were included,
- the Pushshift.io Reddit dataset that was developed in Baumgartner et al. (2020) and processed in
Roller et al. (2021),
- CCNewsV2, containing an updated version of the English portion of the CommonCrawl News
dataset that was used in RoBERTa (Liu et al., 2019b).

The final training data contains 180B tokens, corresponding to 800GB of data. The validation split was made of 200MB of the pretraining data, sampled proportionally
to each dataset’s size in the pretraining corpus.

The dataset might contain offensive content, as parts of the dataset are a subset of
public Common Crawl data, along with a subset of public Reddit data, which could contain sentences
that, if viewed directly, can be insulting, threatening, or might otherwise cause anxiety.

### Collection process

The dataset was collected from the internet, and went through classic data processing algorithms and
re-formatting practices, including removing repetitive/non-informative text like *Chapter One* or
*This ebook by Project Gutenberg.*

## Training procedure

### Preprocessing

The texts are tokenized using the **GPT2** byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50272. The inputs are sequences of 2048 consecutive tokens.
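
These figures can be sanity-checked directly against the released checkpoint's tokenizer and config (a quick check, not part of the original card):

```python
# Verify the vocabulary size and context length described above.
from transformers import AutoConfig, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-125m")
config = AutoConfig.from_pretrained("facebook/opt-125m")

print(tokenizer("Hello world").input_ids)   # byte-level BPE ids; OPT prepends </s>
print(config.vocab_size)                    # 50272
print(config.max_position_embeddings)       # 2048
```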

The 175B model was trained on 992 *80GB A100 GPUs*. The training duration was roughly 33 days of continuous training.

### BibTeX entry and citation info

```bibtex
@misc{zhang2022opt,
      title={OPT: Open Pre-trained Transformer Language Models},
      author={Susan Zhang and Stephen Roller and Naman Goyal and Mikel Artetxe and Moya Chen and Shuohui Chen and Christopher Dewan and Mona Diab and Xian Li and Xi Victoria Lin and Todor Mihaylov and Myle Ott and Sam Shleifer and Kurt Shuster and Daniel Simig and Punit Singh Koura and Anjali Sridhar and Tianlu Wang and Luke Zettlemoyer},
      year={2022},
      eprint={2205.01068},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```