---
license: cc-by-nc-4.0
tags:
- galactica

widget:
- text: "The Transformer architecture [START_REF]"
- text: "The Schwarzschild radius is defined as: \\["
- text: "A force of 0.6N is applied to an object, which accelerates at 3m/s. What is its mass? <work>"
- text: "Lecture 1: The Ising Model\n\n"
- text: "[START_I_SMILES]"
- text: "[START_AMINO]GHMQSITAGQKVISKHKNGRFYQCEVVRLTTETFYEVNFDDGSFSDNLYPEDIVSQDCLQFGPPAEGEVVQVRWTDGQVYGAKFVASHPIQMYQVEFEDGSQLVVKRDDVYTLDEELP[END_AMINO] ## Keywords"
---

# GALACTICA 120B (huge)

Model card from the original [repo](https://github.com/paperswithcode/galai/blob/main/docs/model_card.md)

Following [Mitchell et al. (2018)](https://arxiv.org/abs/1810.03993), this model card provides information about the GALACTICA model, how it was trained, and the intended use cases. Full details about how the model was trained and evaluated can be found in the [release paper](https://galactica.org/paper.pdf).

## Model Details

The GALACTICA models are trained on a large-scale scientific corpus. The models are designed to perform scientific tasks, including but not limited to citation prediction, scientific QA, mathematical reasoning, summarization, document generation, molecular property prediction and entity extraction. The models were developed by the Papers with Code team at Meta AI to study the use of language models for the automatic organization of science. We train models with sizes ranging from 125M to 120B parameters. Below is a summary of the released models:

| Size       | Parameters |
|:----------:|:----------:|
| `mini`     | 125 M      |
| `base`     | 1.3 B      |
| `standard` | 6.7 B      |
| `large`    | 30 B       |
| `huge`     | 120 B      |

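These tasks can also be driven through the upstream `galai` package from the linked repo. Below is a minimal sketch, assuming the `load_model`/`generate` interface described in the galai README (it may change between versions):

```python
# pip install galai
# Minimal sketch using the galai wrapper; the interface below follows the
# galai README and is not guaranteed to match every release.
import galai as gal

# "huge" is the 120B checkpoint listed in the table above; the other released
# sizes ("mini", "base", "standard", "large") can be loaded by name as well.
model = gal.load_model("huge")

# Free-form generation from a scientific prompt.
print(model.generate("The Transformer architecture [START_REF]"))
```
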
## Release Date

November 2022

## Model Type

Transformer-based architecture in a decoder-only setup with a few modifications (see the paper for more details).

## Paper & Demo

[Paper](https://galactica.org/paper.pdf) / [Demo](https://galactica.org)

## Model Use

The primary intended users of the GALACTICA models are researchers studying language models applied to the scientific domain. We also anticipate the model will be useful for developers who wish to build scientific tooling. However, we caution against production use without safeguards given the potential of language models to hallucinate.

The models are made available under a non-commercial CC BY-NC 4.0 license. More information about how to use the model can be found in the README.md of the original [galai repository](https://github.com/paperswithcode/galai).

## Training Data

The GALACTICA models are trained on 106 billion tokens of open-access scientific text and data. This includes papers, textbooks, scientific websites, encyclopedias, reference material, knowledge bases, and more. We tokenize different modalities to provide a natural language interface for different tasks. See the README.md for more information, and see the paper for full details on the training data.

## How to use

Below are some example scripts showing how to use the model with `transformers`:

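The widget examples in the header illustrate how different modalities are addressed with special tokens. The snippet below simply collects a few of those prompt strings (taken verbatim from the widget examples above) so they can be swapped in as `input_text` in the scripts that follow; the task labels in the comments are informal glosses, not exhaustive documentation.

```python
# Example prompt strings, copied from the widget examples above; pass any of
# them as `input_text` in the generation scripts below.
prompts = {
    # reference prediction for a claim
    "citation": "The Transformer architecture [START_REF]",
    # step-by-step working for a physics question
    "reasoning": "A force of 0.6N is applied to an object, which accelerates at 3m/s. What is its mass? <work>",
    # free-form generation of lecture-style notes
    "lecture": "Lecture 1: The Ising Model\n\n",
    # molecule generation as a SMILES string
    "molecule": "[START_I_SMILES]",
    # keyword annotation for a protein sequence
    "protein": "[START_AMINO]GHMQSITAGQKVISKHKNGRFYQCEVVRLTTETFYEVNFDDGSFSDNLYPEDIVSQDCLQFGPPAEGEVVQVRWTDGQVYGAKFVASHPIQMYQVEFEDGSQLVVKRDDVYTLDEELP[END_AMINO] ## Keywords",
}
```
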
## Using the PyTorch model

### Running the model on a CPU

<details>
<summary> Click to expand </summary>

```python
from transformers import AutoTokenizer, OPTForCausalLM

tokenizer = AutoTokenizer.from_pretrained("facebook/galactica-120b")
model = OPTForCausalLM.from_pretrained("facebook/galactica-120b")

input_text = "The Transformer architecture [START_REF]"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids

outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```

</details>

### Running the model on a GPU

<details>
<summary> Click to expand </summary>

```python
# pip install accelerate
from transformers import AutoTokenizer, OPTForCausalLM

tokenizer = AutoTokenizer.from_pretrained("facebook/galactica-120b")
model = OPTForCausalLM.from_pretrained("facebook/galactica-120b", device_map="auto")

input_text = "The Transformer architecture [START_REF]"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")

outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```

</details>

### Running the model on a GPU using different precisions

#### FP16

<details>
<summary> Click to expand </summary>

```python
# pip install accelerate
import torch
from transformers import AutoTokenizer, OPTForCausalLM

tokenizer = AutoTokenizer.from_pretrained("facebook/galactica-120b")
model = OPTForCausalLM.from_pretrained("facebook/galactica-120b", device_map="auto", torch_dtype=torch.float16)

input_text = "The Transformer architecture [START_REF]"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")

outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```

</details>

#### INT8

<details>
<summary> Click to expand </summary>

```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, OPTForCausalLM

tokenizer = AutoTokenizer.from_pretrained("facebook/galactica-120b")
model = OPTForCausalLM.from_pretrained("facebook/galactica-120b", device_map="auto", load_in_8bit=True)

input_text = "The Transformer architecture [START_REF]"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")

outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```

</details>

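The examples above call `generate` with its defaults, which typically produce only a short continuation. Longer or sampled outputs can be requested through the standard `transformers` generation arguments; the values below are purely illustrative assumptions, not tuned settings from the paper:

```python
# Standard transformers generate() options; the specific values here are
# illustrative assumptions, not recommended settings.
outputs = model.generate(
    input_ids,
    max_new_tokens=100,  # allow a longer continuation than the default
    do_sample=True,      # sample instead of greedy decoding
    top_p=0.9,
    temperature=0.7,
)
print(tokenizer.decode(outputs[0]))
```
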
## Performance and Limitations

The model outperforms several existing language models on a range of knowledge probes, reasoning, and knowledge-intensive scientific tasks. This also extends to general NLP tasks, where GALACTICA outperforms other open-source general language models. That being said, we note a number of limitations in this section.

As with other language models, GALACTICA is often prone to hallucination, and training on a high-quality academic corpus does not prevent this, especially for less popular and less cited scientific concepts. There are no guarantees of truthful output when generating from the model. This extends to specific modalities such as citation prediction. While GALACTICA's citation behaviour approaches the ground-truth citation behaviour with scale, the model continues to exhibit a popularity bias at larger scales.

In addition, we evaluated the model on several types of benchmarks related to stereotypes and toxicity. Overall, the model exhibits substantially lower toxicity rates than other large language models. That being said, the model continues to exhibit bias on certain measures (see the paper for details), so we recommend care when using the model for generation.

## Broader Implications

GALACTICA can potentially be used as a new way to discover academic literature. We also expect substantial downstream use in particular domains, such as mathematics, biology, and chemistry. In the paper, we demonstrated several examples of the model acting as an alternative to standard search tools. We expect a new generation of scientific tools to be built upon large language models such as GALACTICA.

We encourage researchers to investigate beneficial and new use cases for these models. That being said, it is important to be aware of the current limitations of large language models. Researchers should pay attention to common issues such as hallucination and biases that could emerge from using these models.

## Citation

```bibtex
@inproceedings{GALACTICA,
    title={GALACTICA: A Large Language Model for Science},
    author={Ross Taylor and Marcin Kardas and Guillem Cucurull and Thomas Scialom and Anthony Hartshorn and Elvis Saravia and Andrew Poulton and Viktor Kerkez and Robert Stojnic},
    year={2022}
}
```