duyntnet committed 783ef08 (parent: 8436088): Upload README.md

---
license: other
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- transformers
- gguf
- imatrix
- Saul-Instruct-v1
---
Quantizations of https://huggingface.co/Equall/Saul-Instruct-v1

Note: I'm not sure why, but Q2_K, Q3_K_S, Q4_0 and Q5_0 failed during quantization with the error "ggml_validate_row_data: found nan value at block xxx", so I skipped those quants.
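
These are standard GGUF files, so any recent llama.cpp build should load them. A minimal sketch using the llama.cpp CLI (the quant filename and the prompt are illustrative, not files shipped with this repo; substitute whichever quant you downloaded):

```shell
# Sketch: running one of these GGUF quants with llama.cpp's CLI.
# "Saul-Instruct-v1-Q4_K_M.gguf" is a hypothetical local filename.
./llama-cli -m Saul-Instruct-v1-Q4_K_M.gguf \
    -p "[INST] Your legal question here [/INST]" \
    -n 256 --temp 0.0
```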

# From original readme

## Uses

You can use it for legal use cases that involve generation.

Here's how you can run the model using the `pipeline()` function from 🤗 Transformers:

```python
# Install transformers from source - only needed for versions <= v4.34
# pip install git+https://github.com/huggingface/transformers.git
# pip install accelerate

import torch
from transformers import pipeline

pipe = pipeline("text-generation", model="Equall/Saul-Instruct-v1", torch_dtype=torch.bfloat16, device_map="auto")
# We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating
messages = [
    {"role": "user", "content": "[YOUR QUERY GOES HERE]"},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=False)
print(outputs[0]["generated_text"])
```