iansotnek committed
Commit beaa6bf
1 Parent(s): 50cf1b5

Update README.md

---
license: other
commercial: false
datasets:
- aisquared/databricks-dolly-15k
language:
- en
library_name: transformers
---

# Model Card for `chopt-2_7b`

<!-- Provide a quick summary of what the model is/does. -->

AI Squared's `chopt-2_7b` is a large language model derived from Meta AI's Open Pre-trained Transformer (OPT) language models and fine-tuned on a corpus of 15k records ([Databricks' "Dolly 15k" Dataset](https://huggingface.co/datasets/aisquared/databricks-dolly-15k)) to help it exhibit chat-based capabilities. Despite the permissive license of the Dolly 15k dataset, this model is a derivative of OPT and its use is therefore restricted to **non-commercial research purposes**. The ChOPT family of models from AI Squared is licensed under the OPT-175B license, Copyright (c) Meta Platforms, Inc. All Rights Reserved.

While `chopt-2_7b` is **not a state-of-the-art model**, we believe that the level of interactivity achievable with such a small, cheaply trained model is important to showcase, as it further demonstrates that creating powerful AI capabilities may be much more accessible than previously thought.


### Model Description

<!-- Provide a longer summary of what this model is. -->

- **Developed by:** AI Squared, Inc.
- **Shared by:** AI Squared, Inc.
- **Model type:** Large Language Model
- **Language(s) (NLP):** EN
- **License:** other
- **Finetuned from model:** OPT


## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

**`chopt-2_7b` is not a state-of-the-art language model.** It is an experimental technology and is not designed for use in any
environment other than research. Furthermore, the model can sometimes exhibit undesired behaviors, including, but not limited to,
factual inaccuracies, biases, offensive responses, toxicity, and hallucinations. As with any other LLM, we advise users to
exercise good judgment when applying this technology.


## Usage

The code below shows how to use `chopt-2_7b` in the way it was trained. While the model can be used "out of the box" using the
`transformers` library, using the function defined below to create a response from the model will achieve better results.

### Load Model and Tokenizer from this Repository Using the `transformers` Package

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import re

model_id = 'aisquared/chopt-2_7b'

tokenizer = AutoTokenizer.from_pretrained(model_id, padding_side='left')
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True, device_map='auto')
```
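
With `device_map='auto'`, the `accelerate` library decides where the model's weights are placed. If you would like to confirm the placement before generating, you can inspect the device map:

```python
# Optional: see where accelerate placed the model's modules (e.g., GPU 0, CPU)
print(model.hf_device_map)
```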


### Create the Prompt Format and Other Variables

```python
PROMPT = """Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Response:
"""

END_KEY = '### End'
RESPONSE_KEY = '### Response:\n'
```
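
As a quick sanity check, you can render the template to see exactly what the model will receive. The instruction below is just an illustrative placeholder:

```python
# Render the prompt template for a sample instruction (placeholder text for illustration)
print(PROMPT.format(instruction='What is the capital of France?'))
```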


### Create a Function to Retrieve a Response

```python
def create_response(
    instruction,
    model,
    tokenizer,
    do_sample=True,
    max_new_tokens=256,
    top_p=0.92,
    top_k=0,
    **kwargs
):
    """
    Create a response from the model by using a formatted prompt
    """
    # Format the prompt and move the input ids to the model's device so generation
    # also works when the model has been placed on a GPU
    input_ids = tokenizer(
        PROMPT.format(instruction=instruction), return_tensors="pt"
    ).input_ids.to(model.device)

    gen_tokens = model.generate(
        input_ids,
        pad_token_id=tokenizer.pad_token_id,
        do_sample=do_sample,
        max_new_tokens=max_new_tokens,
        top_p=top_p,
        top_k=top_k,
        **kwargs,
    )
    decoded = tokenizer.batch_decode(gen_tokens)[0]

    # The response appears after "### Response:". The model has been trained to append "### End" at the end.
    m = re.search(r"#+\s*Response:\s*(.+?)#+\s*End", decoded, flags=re.DOTALL)

    response = None
    if m:
        response = m.group(1).strip()
    else:
        # The model might not generate the "### End" sequence before reaching the max tokens.
        # In this case, return everything after "### Response:".
        m = re.search(r"#+\s*Response:\s*(.+)", decoded, flags=re.DOTALL)
        if m:
            response = m.group(1).strip()
    return response
```
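
For example, using the model, tokenizer, and function defined above (the instruction here is a placeholder we chose for illustration):

```python
# Generate a response with the helper defined above
response = create_response(
    'Write a tweet announcing a new large language model.',
    model=model,
    tokenizer=tokenizer,
)
print(response)
```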


### Model Performance Metrics

We present the results from various model benchmarks on the EleutherAI LLM Evaluation Harness for all models in the ChOPT family.
Model results are sorted by mean score, ascending, to provide an ordering. These metrics serve to further show that none of the
ChOPT models are state of the art; rather, they show that chat-like behaviors in LLMs can be trained almost independently of model size.
| Model               | openbookqa | arc_easy | winogrande | hellaswag | arc_challenge | piqa     | boolq    |
|:--------------------|-----------:|---------:|-----------:|----------:|--------------:|---------:|---------:|
| chopt-125m          | 0.178      | 0.443182 | 0.501973   | 0.294165  | 0.197099      | 0.630577 | 0.476758 |
| chopt-research-125m | 0.17       | 0.436027 | 0.503552   | 0.294762  | 0.205631      | 0.62568  | 0.48685  |
| opt-125m            | 0.166      | 0.435606 | 0.501973   | 0.291775  | 0.190273      | 0.6284   | 0.554434 |
| chopt-350m          | 0.178      | 0.450758 | 0.508287   | 0.325334  | 0.21843       | 0.650707 | 0.559633 |
| opt_350m            | 0.176      | 0.441077 | 0.52644    | 0.320056  | 0.207338      | 0.645267 | 0.57737  |
| chopt-research-350m | 0.172      | 0.462542 | 0.514601   | 0.327524  | 0.235495      | 0.643634 | 0.589908 |
| opt-1.3b            | 0.234      | 0.569865 | 0.596685   | 0.414957  | 0.232935      | 0.718172 | 0.577676 |
| chopt-research-1_3b | 0.232      | 0.564815 | 0.59116    | 0.424716  | 0.276451      | 0.713275 | 0.634557 |
| chopt-1_3b          | 0.236      | 0.569444 | 0.584057   | 0.42621   | 0.268771      | 0.723069 | 0.658104 |
| opt-2.7b            | 0.25       | 0.608165 | 0.608524   | 0.458176  | 0.267918      | 0.738303 | 0.603058 |
| chopt-2_7b          | 0.276      | 0.616582 | 0.601421   | 0.472615  | 0.288396      | 0.75136  | 0.552294 |
| chopt-research-2_7b | 0.262      | 0.610269 | 0.625099   | 0.458176  | 0.295222      | 0.742111 | 0.636697 |
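
For reference, numbers like these can be produced with the EleutherAI `lm-evaluation-harness`. The sketch below uses the harness's Python entry point `lm_eval.simple_evaluate` as found in recent releases; the exact API has changed across harness versions, so treat the call signature and batch size as assumptions rather than the exact commands used to build the table above.

```python
# Sketch: scoring chopt-2_7b on the same benchmark tasks with the EleutherAI
# LM Evaluation Harness. NOTE: this assumes a recent (v0.4-style) harness API;
# older releases used a `main.py` CLI instead, so adjust for your installed version.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=aisquared/chopt-2_7b",
    tasks=["openbookqa", "arc_easy", "winogrande", "hellaswag", "arc_challenge", "piqa", "boolq"],
    batch_size=8,
)
print(results["results"])
```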