---
base_model: allenai/digital-socrates-7b
inference: false
language: en
library_name: transformers
license: apache-2.0
model_creator: Allen Institute for AI
model_name: Digital Socrates 7B
model_type: llama
prompt_template: '[INST] <<SYS>>

  {system_message}

  <</SYS>>

  {prompt} [/INST]

  '
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->

<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->

# Digital Socrates 7B - AWQ
- Model creator: [Allen Institute for AI](https://huggingface.co/allenai)
- Original model: [Digital Socrates 7B](https://huggingface.co/allenai/digital-socrates-7b)

<!-- description start -->
## Description

This repo contains AWQ model files for [Allen Institute for AI's Digital Socrates 7B](https://huggingface.co/allenai/digital-socrates-7b).

These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).

### About AWQ

AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with quality equivalent to or better than the most commonly used GPTQ settings.

It is supported by:

- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - Llama and Mistral models only
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code

<!-- description end -->
<!-- repositories-available start -->
## Repositories available

* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/digital-socrates-7B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/digital-socrates-7B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/digital-socrates-7B-GGUF)
* [Allen Institute for AI's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/allenai/digital-socrates-7b)
<!-- repositories-available end -->

<!-- prompt-template start -->
## Prompt template: Llama-2-Chat

```
[INST] <<SYS>>
{system_message}
<</SYS>>
{prompt} [/INST]

```
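
Filling the template is plain string substitution; a minimal sketch (the system message here is an arbitrary example, not one required by the model):

```python
# Minimal sketch: render the Llama-2-Chat template with str.format().
template = "[INST] <<SYS>>\n{system_message}\n<</SYS>>\n{prompt} [/INST]\n"
full_prompt = template.format(
    system_message="You are a helpful assistant.",  # example only
    prompt="Tell me about AI",
)
```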

<!-- prompt-template end -->
<!-- licensing start -->
## Licensing

The creator of the source model has listed its license as `apache-2.0`, and this quantization has therefore used that same license.

As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.

In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [Allen Institute for AI's Digital Socrates 7B](https://huggingface.co/allenai/digital-socrates-7b).
<!-- licensing end -->
<!-- README_AWQ.md-provided-files start -->
## Provided files, and AWQ parameters

I currently release 128g GEMM models only. The addition of group_size 32 models, and GEMV kernel models, is being actively considered.

Models are released as sharded safetensors files.

| Branch | Bits | GS | AWQ Dataset | Seq Len | Size |
| ------ | ---- | -- | ----------- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/digital-socrates-7B-AWQ/tree/main) | 4 | 128 | [open-instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 3.89 GB |

<!-- README_AWQ.md-provided-files end -->

<!-- README_AWQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)

Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).

It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.

1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/digital-socrates-7B-AWQ`.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `digital-socrates-7B-AWQ`.
7. Select **Loader: AutoAWQ**.
8. Click **Load**, and the model will load and be ready for use.
9. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
10. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
<!-- README_AWQ.md-text-generation-webui end -->
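
If you'd rather fetch the files outside the web UI, a hedged command-line sketch using `huggingface-cli` (available in huggingface_hub 0.17.0 and later; the target directory is just an example):

```shell
pip3 install huggingface-hub
# Download all files in this repo to a local directory of your choice
huggingface-cli download TheBloke/digital-socrates-7B-AWQ --local-dir digital-socrates-7B-AWQ
```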

<!-- README_AWQ.md-use-from-vllm start -->
## Multi-user inference server: vLLM

Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/).

- Please ensure you are using vLLM version 0.2 or later.
- When using vLLM as a server, pass the `--quantization awq` parameter.

For example:

```shell
python3 -m vllm.entrypoints.api_server --model TheBloke/digital-socrates-7B-AWQ --quantization awq --dtype auto
```
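
To check the server is responding, you can POST a templated prompt to its `/generate` endpoint; a minimal sketch, assuming the demo API server's default port 8000 and an example system message:

```shell
curl http://localhost:8000/generate \
    -d '{
        "prompt": "[INST] <<SYS>>\nYou are a helpful assistant.\n<</SYS>>\nTell me about AI [/INST]",
        "max_tokens": 128,
        "temperature": 0.8
    }'
```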

- When using vLLM from Python code, again set `quantization=awq`.

For example:

```python
from vllm import LLM, SamplingParams

prompts = [
    "Tell me about AI",
    "Write a story about llamas",
    "What is 291 - 150?",
    "How much wood would a woodchuck chuck if a woodchuck could chuck wood?",
]
# The template is a plain string (not an f-string), so the placeholders
# are only filled in by .format() below.
system_message = "You are a helpful assistant."  # set your own system message
prompt_template = '''[INST] <<SYS>>
{system_message}
<</SYS>>
{prompt} [/INST]
'''

prompts = [prompt_template.format(system_message=system_message, prompt=prompt) for prompt in prompts]

sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

llm = LLM(model="TheBloke/digital-socrates-7B-AWQ", quantization="awq", dtype="auto")

outputs = llm.generate(prompts, sampling_params)

# Print the outputs.
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
<!-- README_AWQ.md-use-from-vllm end -->

<!-- README_AWQ.md-use-from-tgi start -->
## Multi-user inference server: Hugging Face Text Generation Inference (TGI)

Use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`

Example Docker parameters:

```shell
--model-id TheBloke/digital-socrates-7B-AWQ --port 3000 --quantize awq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
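
For context, a hedged sketch of how those parameters might be passed in a full `docker run` invocation (the GPU and shared-memory flags are typical TGI settings, not specific to this model):

```shell
docker run --gpus all --shm-size 1g -p 3000:3000 \
    ghcr.io/huggingface/text-generation-inference:1.1.0 \
    --model-id TheBloke/digital-socrates-7B-AWQ --port 3000 --quantize awq \
    --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```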

Example Python code for interfacing with TGI (requires [huggingface-hub](https://github.com/huggingface/huggingface_hub) 0.17.0 or later):

```shell
pip3 install huggingface-hub
```

```python
from huggingface_hub import InferenceClient

endpoint_url = "https://your-endpoint-url-here"

prompt = "Tell me about AI"
system_message = "You are a helpful assistant."  # set your own system message
prompt_template = f'''[INST] <<SYS>>
{system_message}
<</SYS>>
{prompt} [/INST]
'''

client = InferenceClient(endpoint_url)
# Send the fully templated prompt, not the bare prompt
response = client.text_generation(prompt_template,
                                  max_new_tokens=128,
                                  do_sample=True,
                                  temperature=0.7,
                                  top_p=0.95,
                                  top_k=40,
                                  repetition_penalty=1.1)

print(f"Model output: {response}")
```
<!-- README_AWQ.md-use-from-tgi end -->

<!-- README_AWQ.md-use-from-python start -->
## Inference from Python code using Transformers

### Install the necessary packages

- Requires: [Transformers](https://huggingface.co/docs/transformers) 4.35.0 or later.
- Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.1.6 or later.

```shell
pip3 install --upgrade "autoawq>=0.1.6" "transformers>=4.35.0"
```

Note that if you are using PyTorch 2.0.1, the above AutoAWQ command will automatically upgrade you to PyTorch 2.1.0.

If you are using CUDA 11.8 and wish to continue using PyTorch 2.0.1, instead run this command:

```shell
pip3 install https://github.com/casper-hansen/AutoAWQ/releases/download/v0.1.6/autoawq-0.1.6+cu118-cp310-cp310-linux_x86_64.whl
```

If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead:

```shell
pip3 uninstall -y autoawq
git clone https://github.com/casper-hansen/AutoAWQ
cd AutoAWQ
pip3 install .
```

### Transformers example code (requires Transformers 4.35.0 or later)

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

model_name_or_path = "TheBloke/digital-socrates-7B-AWQ"

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoModelForCausalLM.from_pretrained(
    model_name_or_path,
    low_cpu_mem_usage=True,
    device_map="cuda:0"
)

# Using the text streamer to stream output one token at a time
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

prompt = "Tell me about AI"
system_message = "You are a helpful assistant."  # set your own system message
prompt_template = f'''[INST] <<SYS>>
{system_message}
<</SYS>>
{prompt} [/INST]
'''

# Convert prompt to tokens
tokens = tokenizer(
    prompt_template,
    return_tensors='pt'
).input_ids.cuda()

generation_params = {
    "do_sample": True,
    "temperature": 0.7,
    "top_p": 0.95,
    "top_k": 40,
    "max_new_tokens": 512,
    "repetition_penalty": 1.1
}

# Generate streamed output, visible one token at a time
generation_output = model.generate(
    tokens,
    streamer=streamer,
    **generation_params
)

# Generation without a streamer, which will include the prompt in the output
generation_output = model.generate(
    tokens,
    **generation_params
)

# Get the tokens from the output, decode them, print them
token_output = generation_output[0]
text_output = tokenizer.decode(token_output)
print("model.generate output: ", text_output)

# Inference is also possible via Transformers' pipeline
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    **generation_params
)

pipe_output = pipe(prompt_template)[0]['generated_text']
print("pipeline output: ", pipe_output)
```
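
Alternatively, the model can be loaded through AutoAWQ directly rather than via Transformers; a minimal sketch, assuming AutoAWQ 0.1.6's `from_quantized` API:

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_name_or_path = "TheBloke/digital-socrates-7B-AWQ"

# fuse_layers=True enables AutoAWQ's fused modules for faster inference
model = AutoAWQForCausalLM.from_quantized(model_name_or_path, fuse_layers=True)
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)

tokens = tokenizer("Tell me about AI", return_tensors="pt").input_ids.cuda()
generation_output = model.generate(tokens, max_new_tokens=128)
print(tokenizer.decode(generation_output[0], skip_special_tokens=True))
```
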
<!-- README_AWQ.md-use-from-python end -->

<!-- README_AWQ.md-compatibility start -->
## Compatibility

The files provided are tested to work with:

- [text-generation-webui](https://github.com/oobabooga/text-generation-webui) using `Loader: AutoAWQ`.
- [vLLM](https://github.com/vllm-project/vllm) version 0.2.0 and later.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) version 1.1.0 and later.
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later.
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) version 0.1.1 and later.

<!-- README_AWQ.md-compatibility end -->

<!-- footer start -->
<!-- 200823 -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine-tuning/training.

If you're able and willing to contribute, it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donators will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Aemon Algiz.

**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius

Thank you to all my generous patrons and donators!

And thank you again to a16z for their generous grant.

<!-- footer end -->

# Original model card: Allen Institute for AI's Digital Socrates 7B

This is the Digital Socrates 7B (DS-7B) model described in our paper: <b>Digital Socrates: Evaluating LLMs through explanation critiques</b> (arXiv link: https://arxiv.org/abs/2311.09613).

The recommended, better-performing 13B model can be found at https://huggingface.co/allenai/digital-socrates-13b

The DS-7B model is a fine-tuned version of [Llama-2-7b-Chat](https://huggingface.co/meta-llama/Llama-2-7b-chat); please
review its [guidelines](https://huggingface.co/meta-llama/Llama-2-7b-chat/blob/main/USE_POLICY.md) and
[licensing](https://huggingface.co/meta-llama/Llama-2-7b-chat#model-details) before use.

# What is Digital Socrates?
Digital Socrates is an open-source, automatic explanation-critiquing model. It is useful for revealing insights about student
models by examining their reasoning chains. The use of Digital Socrates allows for nuanced, interpretable automatic
evaluation of explanations without expensive API calls or human annotations.

While LLMs can provide reasoned explanations along with their answers, the nature and
quality of those explanations are still poorly understood. In response, our goal is to define a detailed way of characterizing the
explanation capabilities of modern models and to create a nuanced, interpretable explanation
evaluation tool that can generate such characterizations automatically.

Given a **question** (together with the multiple-choice options and gold answer),
```commandline
When Dennis operates his lawnmower, he notices the engine makes a lot of noise. He also notices that the engine gets very hot. Which best describes the heat and noise generated from the lawnmower?
(A) a change in phase
(B) thermal expansion
(C) an increase in entropy
(D) mechanical advantage

Answer sheet: (C)
```
as well as a student model explanation and answer,
```
1) The question states that the lawnmower engine makes a lot of noise.
2) The question states that the lawnmower engine gets very hot.
3) Noise and heat are both forms of energy.
4) The noise and heat generated from the lawnmower are a result of the conversion of energy from the fuel to mechanical energy.

Answer: (D)
```

Digital Socrates gives a **critique of the model-generated explanation** that provides localized feedback on the
most significant flaw (if any) in the explanation, suggestions to address the identified flaw, as well as a numeric rating indicating the explanation quality:
```
The explanation states or suggests the following:
* Main flaw (standalone statement): "The noise and heat generated from the lawnmower are a result of the conversion of energy from the fuel to mechanical energy."
* Dimension: incorrect_information

Consider these points for revising the explanation:
* General: Remember that noise and heat are not forms of energy. They are byproducts of energy conversion.
* Specific: In this case, the noise and heat generated by the lawnmower are not a result of the conversion of energy from the fuel to mechanical energy. They are byproducts of the combustion process.

Explanation score: 2
```

Remarkably, despite being orders of magnitude smaller than GPT-4, our Digital Socrates models are
capable of generating critiques close to GPT-4 critiques in terms of human rating and other
quantitative measures (correlation of explanation scores given and error category matches).
Through quantitative and qualitative analysis, we demonstrate how Digital Socrates is useful for
revealing insights about student models by examining their reasoning chains.

We invite you to try out Digital Socrates for your own application!

# How to use Digital Socrates?
We provide a quick example of how you can try out Digital Socrates with just a few lines of code:

'DSCritiqueBank-V1' used below can be downloaded from our [dataset page](https://allenai.org/data/digital-socrates).
```python
import json
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load model and tokenizer
model_path = "allenai/digital-socrates-7b"
model = AutoModelForCausalLM.from_pretrained(model_path).to("cuda:0")
tokenizer = AutoTokenizer.from_pretrained(model_path)

# Define input data
question = "When Dennis operates his lawnmower, he notices the engine makes a lot of noise. He also notices that the engine gets very hot. Which best describes the heat and noise generated from the lawnmower? (A) a change in phase (B) thermal expansion (C) an increase in entropy (D) mechanical advantage"
explanation = "1) The question states that the lawnmower engine makes a lot of noise.\n2) The question states that the lawnmower engine gets very hot.\n3) Noise and heat are both forms of energy.\n4) The noise and heat generated from the lawnmower are a result of the conversion of energy from the fuel to mechanical energy."
answerkey = "C"
predictedanswer = "D"

# Construct prompt (Llama-2-Chat conventions)
with open("../DSCritiqueBank-V1/DSCB-prompts.json") as file:
    prompts = json.load(file)

system_prompt = prompts['digital_socrates_v1']['system']
user_prompt = prompts['digital_socrates_v1']['main'].replace("[[QUESTION]]", question).replace("[[EXPLANATION]]", explanation).replace("[[PREDICTEDANSWER]]", predictedanswer).replace("[[ANSWERKEY]]", answerkey)

full_prompt = f"[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n{user_prompt} [/INST]\n\n"

# Run model
input_ids = tokenizer.encode(full_prompt, return_tensors="pt").to("cuda:0")
output = model.generate(input_ids, max_new_tokens=512, temperature=0)
res = tokenizer.batch_decode(output, skip_special_tokens=True)
```
Print the output:
```
>>> print(res[0].split("[/INST]")[-1])

The explanation states or suggests the following:
* Main flaw (standalone statement): "The noise and heat generated from the lawnmower are a result of the conversion of energy from the fuel to mechanical energy."
* Dimension: incorrect_information

Consider these points for revising the explanation:
* General: Remember that noise and heat are not forms of energy. They are byproducts of energy conversion.
* Specific: In this case, the noise and heat generated by the lawnmower are not a result of the conversion of energy from the fuel to mechanical energy. They are byproducts of the combustion process.

Explanation score: 2
```
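
Because the critique ends with a fixed `Explanation score: N` line, the numeric rating can be extracted programmatically; a minimal sketch (the regex is an illustration, not part of the official release):

```python
import re

critique = res[0].split("[/INST]")[-1]

# Pull the numeric rating out of the final "Explanation score: N" line
match = re.search(r"Explanation score:\s*(\d+)", critique)
score = int(match.group(1)) if match else None
print(score)  # -> 2 for the critique above
```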

# More details about Digital Socrates ...
For more details about Digital Socrates, please refer to our:
* 📄 Paper: https://arxiv.org/abs/2311.09613
* 💻 Dataset: https://allenai.org/data/digital-socrates