---
inference: false
language:
- en
- pl
license: llama2
model_creator: Voicelab
model_link: https://huggingface.co/Voicelab/trurl-2-7b
model_name: Trurl 2 7B
model_type: llama
pipeline_tag: text-generation
quantized_by: TheBloke
tags:
- voicelab
- pytorch
- llama-2
- trurl
- trurl-2
---

<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->

# Trurl 2 7B - GPTQ
- Model creator: [Voicelab](https://huggingface.co/Voicelab)
- Original model: [Trurl 2 7B](https://huggingface.co/Voicelab/trurl-2-7b)

## Description

This repo contains GPTQ model files for [Voicelab's Trurl 2 7B](https://huggingface.co/Voicelab/trurl-2-7b).

Multiple GPTQ parameter permutations are provided; see Provided Files below for details of each option, its parameters, and the software used to create it.

## Repositories available

* [GPTQ models for GPU inference, with multiple quantisation parameter options](https://huggingface.co/TheBloke/Trurl-2-7B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/Trurl-2-7B-GGML)
* [Voicelab's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Voicelab/trurl-2-7b)

## Prompt template: Llama-2-Chat

```
[INST] <<SYS>>
You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
<</SYS>>
{prompt}[/INST]
```

## Provided files and GPTQ parameters

Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.

Each separate quant is in a different branch. See below for instructions on fetching from different branches.

All GPTQ files are made with AutoGPTQ.

<details>
<summary>Explanation of GPTQ parameters</summary>

- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" means no grouping at all, and uses the least VRAM.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have issues with models that use Act Order plus Group Size.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is the default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The dataset used for quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16K+), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama models in 4-bit.

</details>

| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/Trurl-2-7B-GPTQ/tree/main) | 4 | 128 | No | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 3.90 GB | Yes | Most compatible option. Good inference speed in AutoGPTQ and GPTQ-for-LLaMa. Lower inference quality than other options. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/Trurl-2-7B-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 4.28 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. Poor AutoGPTQ CUDA speed. |
| [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/Trurl-2-7B-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 4.02 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
| [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/Trurl-2-7B-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 3.90 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/Trurl-2-7B-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 7.01 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements and to improve AutoGPTQ speed. |
| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/Trurl-2-7B-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 7.16 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. Poor AutoGPTQ CUDA speed. |
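
If you want to check the parameters of a quant programmatically, each branch carries them in its `quantize_config.json`. A minimal sketch (not part of the original card), using `huggingface_hub` and one of the branch names from the table above:

```python
import json

from huggingface_hub import hf_hub_download

# Fetch quantize_config.json from one of the branches listed above
config_path = hf_hub_download(
    repo_id="TheBloke/Trurl-2-7B-GPTQ",
    filename="quantize_config.json",
    revision="gptq-4bit-32g-actorder_True",
)

with open(config_path) as f:
    quantize_config = json.load(f)

# Expect fields such as "bits", "group_size", "desc_act" and "damp_percent"
print(quantize_config)
```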

## How to download from branches

- In text-generation-webui, you can add `:branch` to the end of the download name, eg `TheBloke/Trurl-2-7B-GPTQ:gptq-4bit-32g-actorder_True`
- With Git, you can clone a branch with:
```
git clone --single-branch --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/Trurl-2-7B-GPTQ
```
- In Python Transformers code, the branch is the `revision` parameter; see below.
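
Alternatively, you can fetch a single branch with the `huggingface_hub` Python library. A minimal sketch (not part of the original card):

```python
from huggingface_hub import snapshot_download

# Download the gptq-4bit-32g-actorder_True branch into a local folder
snapshot_download(
    repo_id="TheBloke/Trurl-2-7B-GPTQ",
    revision="gptq-4bit-32g-actorder_True",
    local_dir="Trurl-2-7B-GPTQ",
)
```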

## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)

Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).

It is strongly recommended to use the text-generation-webui one-click installers unless you know how to make a manual install.

1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/Trurl-2-7B-GPTQ`.
  - To download from a specific branch, enter for example `TheBloke/Trurl-2-7B-GPTQ:gptq-4bit-32g-actorder_True`
  - See Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `Trurl-2-7B-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
  * Note that you do not need to set GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!

## How to use this GPTQ model from Python code

First make sure you have [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) 0.3.1 or later installed:

```
pip3 install auto-gptq
```

If you have problems installing AutoGPTQ, please build from source instead:

```
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
pip3 install .
```

Then try the following example code:

```python
from transformers import AutoTokenizer, pipeline, logging
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

model_name_or_path = "TheBloke/Trurl-2-7B-GPTQ"

use_triton = False

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)

# Load the quantised model from the main branch
model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
        use_safetensors=True,
        trust_remote_code=False,
        device="cuda:0",
        use_triton=use_triton,
        quantize_config=None)

"""
# To download from a specific branch, use the revision parameter, as in this example:
# Note that `revision` requires AutoGPTQ 0.3.1 or later!

model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
        revision="gptq-4bit-32g-actorder_True",
        use_safetensors=True,
        trust_remote_code=False,
        device="cuda:0",
        quantize_config=None)
"""

prompt = "Tell me about AI"
prompt_template = f'''[INST] <<SYS>>
You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
<</SYS>>
{prompt}[/INST]
'''

print("\n\n*** Generate:")

input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512)
print(tokenizer.decode(output[0]))

# Inference can also be done using transformers' pipeline

# Prevent printing spurious transformers error when using pipeline with AutoGPTQ
logging.set_verbosity(logging.CRITICAL)

print("*** Pipeline:")
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    temperature=0.7,
    top_p=0.95,
    repetition_penalty=1.15
)

print(pipe(prompt_template)[0]['generated_text'])
```

## Compatibility

The files provided will work with AutoGPTQ (CUDA and Triton modes), GPTQ-for-LLaMa (only CUDA has been tested), and Occ4m's GPTQ-for-LLaMa fork.

ExLlama works with Llama models in 4-bit. Please see the Provided Files table above for per-file compatibility.

<!-- footer start -->
<!-- 200823 -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donators will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Aemon Algiz.

**Patreon special mentions**: Sam, theTransient, Jonathan Leane, Steven Wood, webtim, Johann-Peter Hartmann, Geoffrey Montalvo, Gabriel Tamborski, Willem Michiel, John Villwock, Derek Yates, Mesiah Bishop, Eugene Pentland, Pieter, Chadd, Stephen Murray, Daniel P. Andersen, terasurfer, Brandon Frisco, Thomas Belote, Sid, Nathan LeClaire, Magnesian, Alps Aficionado, Stanislav Ovsiannikov, Alex, Joseph William Delisle, Nikolai Manek, Michael Davis, Junyu Yang, K, J, Spencer Kim, Stefan Sabev, Olusegun Samson, transmissions 11, Michael Levine, Cory Kujawski, Rainer Wilmers, zynix, Kalila, Luke @flexchar, Ajan Kanaga, Mandus, vamX, Ai Maven, Mano Prime, Matthew Berman, subjectnull, Vitor Caleffi, Clay Pascal, biorpg, alfie_i, 阿明, Jeffrey Morgan, ya boyyy, Raymond Fosdick, knownsqashed, Olakabola, Leonard Tan, ReadyPlayerEmma, Enrico Ros, Dave, Talal Aujan, Illia Dulskyi, Sean Connelly, senxiiz, Artur Olbinski, Elle, Raven Klaugh, Fen Risland, Deep Realms, Imad Khwaja, Fred von Graf, Will Dee, usrbinkat, SuperWojo, Alexandros Triantafyllidis, Swaroop Kallakuri, Dan Guido, John Detwiler, Pedro Madruga, Iucharbius, Viktor Bowallius, Asp the Wyvern, Edmond Seymore, Trenton Dambrowitz, Space Cruiser, Spiking Neurons AB, Pyrater, LangChain4j, Tony Hughes, Kacper Wikieł, Rishabh Srivastava, David Ziegler, Luke Pendergrass, Andrey, Gabriel Puliatti, Lone Striker, Sebastain Graf, Pierre Kircher, Randy H, NimbleBox.ai, Vadim, danny, Deo Leter

Thank you to all my generous patrons and donators!

And thank you again to a16z for their generous grant.

<!-- footer end -->

# Original model card: Voicelab's Trurl 2 7B

<img src="https://public.3.basecamp.com/p/rs5XqmAuF1iEuW6U7nMHcZeY/upload/download/VL-NLP-short.png" alt="logo voicelab nlp" style="width:300px;"/>

# Trurl 2 -- Polish Llama 2

The new OPEN TRURL is a finetuned Llama 2, trained on over 1.7b tokens (970k conversational **Polish** and **English** samples) with a large context of 4096 tokens.
TRURL was trained on a large amount of Polish data.
TRURL 2 is a collection of fine-tuned generative text models with 7 billion and 13 billion parameters.
This is the repository for the 7B fine-tuned model, optimized for dialogue use cases.

# Overview

**TRURL developers** Voicelab.AI

**Variations** Trurl 2 comes in 7B and 13B versions.

**Input** Models input text only.

**Output** Models generate text only.

**Model Architecture** Trurl is an auto-regressive language model that uses an optimized transformer architecture.

||Training Data|Params|Content Length|Num. Samples|Num. Tokens|Start LR|
|---|---|---|---|---|---|---|
|Trurl 2|*A new mix of private and publicly available online data*|7B|4k|970k|1.7b|2.0 x 10<sup>-5</sup>|
|Trurl 2|*A new mix of private and publicly available online data*|13B|4k|970k|1.7b|2.0 x 10<sup>-5</sup>|

## Training data

The training data includes Q&A pairs from various sources, including:

- Alpaca comparison data with GPT
- Falcon comparison data
- Dolly 15k
- Oasst1
- Phu saferlfhf
- ShareGPT version 2023.05.08v0, filtered and cleaned
- Voicelab private datasets for JSON data extraction, modification, and analysis
- CURLICAT dataset containing journal entries
- A dataset from the Polish wiki with Q&A pairs grouped into conversations
- Voicelab private dataset with sales conversations, arguments and objections, paraphrases, contact reason detection, and corrected dialogues

## Intended Use

Trurl 2 is intended for commercial and research use in Polish and English. Tuned models are intended for assistant-like chat, but are also adapted for a variety of natural language generation tasks.

# Evaluation Results

|Model | Size| hellaswag | arc_challenge | MMLU|
|---|---|---|---|---|
| Llama-2-chat | 7B | 78.55% | 52.9% | 48.32% |
| Llama-2-chat | 13B | 81.94% | 59.04% | 54.64% |
| Trurl 2.0 (with MMLU) | 13B | 80.09% | 59.30% | 78.35% |
| Trurl 2.0 (no MMLU) | 13B | TO-DO | TO-DO | TO-DO |
| Trurl 2.0 (no MMLU) | 7B | 75.29% | 53.41% | 50.0% |

<img src="https://voicelab.ai/wp-content/uploads/trurl-hero.webp" alt="trurl graphic" style="width:100px;"/>

# Ethical Considerations and Limitations

Trurl 2, like Llama 2, is a new technology that carries risks with use. Testing conducted to date has been in Polish and English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Trurl 2's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or otherwise objectionable responses to user prompts. Therefore, before deploying any applications of Trurl 2, developers should perform safety testing and tuning tailored to their specific applications of the model.

Please see Meta's Responsible Use Guide, available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide).

# Example use
## LLM
Simply pass a prompt to the model and decode the output. The model will continue writing text based on the sample you provided.
```python
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer

tokenizer = LlamaTokenizer.from_pretrained("Voicelab/trurl-2-7b")
model = LlamaForCausalLM.from_pretrained("Voicelab/trurl-2-7b")

prompt = "Yesterday, when I was"

tokenized_prompt = tokenizer(prompt, return_tensors="pt")

# Run inference without tracking gradients
model.eval()
with torch.no_grad():
    print(tokenizer.decode(
        model.generate(**tokenized_prompt, max_new_tokens=200)[0],
        skip_special_tokens=True))
```
Generated output:
> Yesterday, when I was in the city, I saw a man who was walking his dog. and the dog was wearing a little sweater. I thought it was so cute! I wish I had a dog so I could get one of those sweaters for my own dog.

## Chat
When using TRURL in chat mode, remember to use the Llama 2 conversation template, as in the example below.

```python
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer

tokenizer = LlamaTokenizer.from_pretrained("Voicelab/trurl-2-7b")
model = LlamaForCausalLM.from_pretrained("Voicelab/trurl-2-7b")

prompt = """
<s>[INST] <<SYS>> You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe.
Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content.
Please ensure that your responses are socially unbiased and positive in nature.\n\n
If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct.
If you don't know the answer to a question, please don't share false information. <</SYS>>

What was the reason for calling in the conversation below? \n\n
AGENT: Hello, Bank of Albion, this is Mata Hari. How can I help you?
CLIENT: Hi. I've been locked out from my Internet account. I need your help.
AGENT: (yy) Yes, of course, I'll do my best to help you. But I need to find out why the locking-out happened. (yy) In order to ascertain that, I'll ask you a couple of questions to confirm your identity. I'm going to need your full name.
CLIENT: Lizz Truss.
AGENT: Thank you. Now I need your personal identification number.
CLIENT: Fourteen, two hundred thirty-one, thirty-eight, twenty-nine, sixty-five.
AGENT: Thank you. Now I need your client ID number. The client ID number is the eight digits we assigned to you at the very beginning, on conclusion of the contract.
CLIENT: OK. Give me a moment. I have to find it.
AGENT: (mhm) You'll find… You'll find it in the contract.
CLIENT: Yes, yes. I can see it. Sixty-five, twenty-nine, thirty-eight, thirty-one.
AGENT: Thank you. One final security question. Do you have any deposits in our bank?
CLIENT: No, no. I don't have any deposits in this bank.
AGENT: Thank you. Your identity has been (yy) confirmed. (yy) I can see that the account has been blocked, indeed, and you won't be able to log in via the Internet (yy) because (yy) the identity document which is listed for reference has expired. (yy) From what I can see, your identity document expired some time ago. Have you been issued a new one?
CLIENT: Well, no. I think my ID is still valid, you know. I didn't even know.
AGENT: Well, no... Your ID expired at the end of March. Well, almost at the end. Your old ID had been valid until 26 March. (yy) For that reason, your account has been blocked, because you haven't notified us about the ID change for a few months. We are not interested if the ID document has been officially reissued. (...) On our end, what matters is whether the document listed for our reference is valid (yy) so without a valid document I can't unlock your account.
CLIENT: But I have to carry out an operation right now, so this is sort of problematic.
AGENT: I understand. But (yy) you are obligated, as an account holder, to notify the bank about any changes pending (yy), regarding, for example, your home address or phone number. Now, one of such safeguards protecting your… (yy) money, your sensitive data, is precisely about having a valid identification document. Since this is missing in your case, the account has been blocked. Now, I don't think this would have caught you off guard, because we always remind our customers that their ID is about to expire. When the ID is nearing expiration, we display relevant messages at least sixty days in advance. They appear once you've logged in, at the very top of the screen, there is a notification that (yy) the ID is about to expire (yy), so, well... The bank did notify you about this issue. Now, how you chose to act on this information was your choice, right? In any case, at this point, in order to unlock your account, our protocols require that you produce a new identification document at one of our branches. You shall provide information concerning the new document number, new valid-thru date, and only then will you be able to use your account again. I can schedule an appointment with a consultant at our branch for you. What locality would you prefer?
CLIENT: Well, I'm not sure if I should share such information with you.
AGENT: And may I ask why exactly you are unsure? After all, you're calling a bank that runs your account, right?
CLIENT: Right, you know what, I need to go now. Good bye.
AGENT: (yy) Miss… [/INST]

"""

tokenized_prompt = tokenizer(prompt, return_tensors="pt")

# Run inference without tracking gradients
model.eval()
with torch.no_grad():
    print(tokenizer.decode(
        model.generate(**tokenized_prompt, max_new_tokens=200)[0],
        skip_special_tokens=True))
```

Generated output:
> The reason for calling in this conversation is for the agent to help the client regain access to their internet account, which has been locked due to an expired identification document. The agent asks for the client's personal information to confirm their identity and then informs them that their account has been blocked because they have not notified the bank about the ID change for a few months. The agent explains that the bank has displayed relevant messages about the ID expiring and that the client must produce a new identification document at one of their branches in order to unlock their account. The client expresses uncertainty about sharing their information with the agent, but ultimately decides to end the call.

To get the expected features and performance for the chat versions, a specific Llama 2 formatting needs to be followed, including the `INST` and `<<SYS>>` tags, `BOS` and `EOS` tokens, and the whitespaces and breaklines in between (we recommend calling `strip()` on inputs to avoid double spaces). See the reference code on GitHub for details: [`chat_completion`](https://github.com/facebookresearch/llama/blob/main/llama/generation.py#L212).
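
As an illustration of that format, here is a minimal sketch (not part of the original card; the helper name is ours, and the tag constants follow the reference implementation) that assembles a single-turn chat prompt:

```python
# Tags from the Llama 2 chat format described above
B_INST, E_INST = "[INST]", "[/INST]"
B_SYS, E_SYS = "<<SYS>>\n", "\n<</SYS>>\n\n"

def build_llama2_prompt(system_prompt: str, user_message: str) -> str:
    """Wrap a system prompt and a user message in Llama 2 chat tags,
    stripping both inputs to avoid double spaces."""
    return f"{B_INST} {B_SYS}{system_prompt.strip()}{E_SYS}{user_message.strip()} {E_INST}"

print(build_llama2_prompt("You are a helpful assistant.", "Tell me about AI"))
```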

# Authors

The model was trained by the NLP Research Team at Voicelab.ai.

You can contact us [here](https://voicelab.ai/contact/).

* [TRURL 13b](https://huggingface.co/Voicelab/trurl-2-13b/)
* [TRURL 7b](https://huggingface.co/Voicelab/trurl-2-7b/)
* [TRURL DEMO](https://trurl.ai)

Quantized models:
* [TRURL 13b - 8bit](https://huggingface.co/Voicelab/trurl-2-13b-8bit/)
* [TRURL 7b - 8bit](https://huggingface.co/Voicelab/trurl-2-7b-8bit/)

The work was supported by [#NASK](https://www.nask.pl/).