---
license: cc-by-nc-4.0
language:
- en
pipeline_tag: text-generation
---

# QuantumLM

## Model Description

`QuantumLM` is a Llama 2 13B model fine-tuned on a Wizard-Orca-style dataset.

## Usage

Start chatting with `QuantumLM` using the following code snippet:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the tokenizer and the model (fp16 weights, placed automatically across available devices)
tokenizer = AutoTokenizer.from_pretrained("quantumaikr/QuantumLM", use_fast=False)
model = AutoModelForCausalLM.from_pretrained(
    "quantumaikr/QuantumLM",
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,
    device_map="auto",
)

# Build a prompt in the format the model was fine-tuned on
system_prompt = "### System:\nYou are QuantumLM, an AI that follows instructions extremely well. Help as much as you can. Remember, be safe, and don't do anything illegal.\n\n"

message = "Write me a poem please"
prompt = f"{system_prompt}### User: {message}\n\n### Assistant:\n"

# Tokenize, sample a completion, and decode it
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
output = model.generate(**inputs, do_sample=True, top_p=0.95, top_k=0, max_new_tokens=256)

print(tokenizer.decode(output[0], skip_special_tokens=True))
```
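
If you would rather see tokens printed as they are generated instead of all at once, `transformers` provides `TextStreamer`, which can be passed to `generate`. A minimal sketch, assuming the `model`, `tokenizer`, and `inputs` from the snippet above and a reasonably recent `transformers` release:

```python
from transformers import TextStreamer

# Stream decoded tokens to stdout as they are produced; skip echoing the prompt
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
model.generate(**inputs, do_sample=True, top_p=0.95, top_k=0, max_new_tokens=256, streamer=streamer)
```
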
QuantumLM should be used with this prompt format:

```
### System:
This is a system prompt, please behave and help the user.

### User:
Your prompt here

### Assistant:
The output of QuantumLM
```
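
In practice it can help to wrap this template in a small helper so prompts stay consistent across calls. A minimal sketch; the `build_prompt` function below is illustrative and not part of this repository:

```python
def build_prompt(system: str, message: str) -> str:
    """Assemble a single-turn prompt in the QuantumLM template shown above."""
    return f"### System:\n{system}\n\n### User: {message}\n\n### Assistant:\n"

# Example: produces the same prompt shape used in the usage snippet
prompt = build_prompt(
    "This is a system prompt, please behave and help the user.",
    "Your prompt here",
)
```
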
## Use and Limitations

### Intended Use

This model is intended for research use only, in accordance with the [CC BY-NC-4.0](https://creativecommons.org/licenses/by-nc/4.0/) license.

### Limitations and bias

Although the aforementioned dataset helps steer the base language model toward "safer" distributions of text, not all biases and toxicity can be mitigated through fine-tuning. We ask that users be mindful of the potential issues that can arise in generated responses. Do not treat model outputs as a substitute for human judgment or as a source of truth. Please use this model responsibly.