Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)


Saul-Instruct-v1 - GGUF
- Model creator: https://huggingface.co/Equall/
- Original model: https://huggingface.co/Equall/Saul-Instruct-v1/

| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Saul-Instruct-v1.Q2_K.gguf](https://huggingface.co/RichardErkhov/Equall_-_Saul-Instruct-v1-gguf/blob/main/Saul-Instruct-v1.Q2_K.gguf) | Q2_K | 2.53GB |
| [Saul-Instruct-v1.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Equall_-_Saul-Instruct-v1-gguf/blob/main/Saul-Instruct-v1.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [Saul-Instruct-v1.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Equall_-_Saul-Instruct-v1-gguf/blob/main/Saul-Instruct-v1.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [Saul-Instruct-v1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Equall_-_Saul-Instruct-v1-gguf/blob/main/Saul-Instruct-v1.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [Saul-Instruct-v1.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Equall_-_Saul-Instruct-v1-gguf/blob/main/Saul-Instruct-v1.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [Saul-Instruct-v1.Q3_K.gguf](https://huggingface.co/RichardErkhov/Equall_-_Saul-Instruct-v1-gguf/blob/main/Saul-Instruct-v1.Q3_K.gguf) | Q3_K | 3.28GB |
| [Saul-Instruct-v1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Equall_-_Saul-Instruct-v1-gguf/blob/main/Saul-Instruct-v1.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [Saul-Instruct-v1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Equall_-_Saul-Instruct-v1-gguf/blob/main/Saul-Instruct-v1.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [Saul-Instruct-v1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Equall_-_Saul-Instruct-v1-gguf/blob/main/Saul-Instruct-v1.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [Saul-Instruct-v1.Q4_0.gguf](https://huggingface.co/RichardErkhov/Equall_-_Saul-Instruct-v1-gguf/blob/main/Saul-Instruct-v1.Q4_0.gguf) | Q4_0 | 3.83GB |
| [Saul-Instruct-v1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Equall_-_Saul-Instruct-v1-gguf/blob/main/Saul-Instruct-v1.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [Saul-Instruct-v1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Equall_-_Saul-Instruct-v1-gguf/blob/main/Saul-Instruct-v1.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [Saul-Instruct-v1.Q4_K.gguf](https://huggingface.co/RichardErkhov/Equall_-_Saul-Instruct-v1-gguf/blob/main/Saul-Instruct-v1.Q4_K.gguf) | Q4_K | 4.07GB |
| [Saul-Instruct-v1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Equall_-_Saul-Instruct-v1-gguf/blob/main/Saul-Instruct-v1.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [Saul-Instruct-v1.Q4_1.gguf](https://huggingface.co/RichardErkhov/Equall_-_Saul-Instruct-v1-gguf/blob/main/Saul-Instruct-v1.Q4_1.gguf) | Q4_1 | 4.24GB |
| [Saul-Instruct-v1.Q5_0.gguf](https://huggingface.co/RichardErkhov/Equall_-_Saul-Instruct-v1-gguf/blob/main/Saul-Instruct-v1.Q5_0.gguf) | Q5_0 | 4.65GB |
| [Saul-Instruct-v1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Equall_-_Saul-Instruct-v1-gguf/blob/main/Saul-Instruct-v1.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [Saul-Instruct-v1.Q5_K.gguf](https://huggingface.co/RichardErkhov/Equall_-_Saul-Instruct-v1-gguf/blob/main/Saul-Instruct-v1.Q5_K.gguf) | Q5_K | 4.78GB |
| [Saul-Instruct-v1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Equall_-_Saul-Instruct-v1-gguf/blob/main/Saul-Instruct-v1.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [Saul-Instruct-v1.Q5_1.gguf](https://huggingface.co/RichardErkhov/Equall_-_Saul-Instruct-v1-gguf/blob/main/Saul-Instruct-v1.Q5_1.gguf) | Q5_1 | 5.07GB |
| [Saul-Instruct-v1.Q6_K.gguf](https://huggingface.co/RichardErkhov/Equall_-_Saul-Instruct-v1-gguf/blob/main/Saul-Instruct-v1.Q6_K.gguf) | Q6_K | 5.53GB |
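
To run one of these quantized files locally, here is a minimal sketch assuming the `huggingface_hub` and `llama-cpp-python` packages (neither is specified by this card; any GGUF-capable runtime such as llama.cpp works just as well). The repo and file names come from the table above, with Q4_K_M chosen arbitrarily as a balanced size/quality option:

```python
# pip install huggingface_hub llama-cpp-python

from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one quant from this repo (names taken from the table above)
model_path = hf_hub_download(
    repo_id="RichardErkhov/Equall_-_Saul-Instruct-v1-gguf",
    filename="Saul-Instruct-v1.Q4_K_M.gguf",
)

# Load the GGUF file; n_ctx sets the context window (illustrative value)
llm = Llama(model_path=model_path, n_ctx=4096)

# llama-cpp-python applies the model's chat template for chat completions
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "[YOUR QUERY GOES HERE]"}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```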

Original model description:
---
library_name: transformers
tags:
- legal
license: mit
language:
- en
---

# Equall/Saul-Instruct-v1

This is the instruct model for Equall/Saul-Instruct-v1, a large instruct language model tailored to the legal domain. It was obtained by continued pretraining of Mistral-7B.

Check out our website and register: https://equall.ai/

![image/png](https://cdn-uploads.huggingface.co/production/uploads/644a900e3a619fe72b14af0f/OU4Y3s-WckYKMN4fQkNiS.png)

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.

- **Developed by:** Equall.ai in collaboration with CentraleSupelec, Sorbonne Université, Instituto Superior Técnico and NOVA School of Law
- **Model type:** 7B
- **Language(s) (NLP):** English
- **License:** MIT

### Model Sources

<!-- Provide the basic links for the model. -->

- **Paper:** https://arxiv.org/abs/2403.03883

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

You can use it for legal use cases that involve text generation.

Here's how you can run the model using the pipeline() function from 🤗 Transformers:

```python
# Install transformers from source - only needed for versions <= v4.34
# pip install git+https://github.com/huggingface/transformers.git
# pip install accelerate

import torch
from transformers import pipeline

# Load the full-precision model with automatic device placement
pipe = pipeline("text-generation", model="Equall/Saul-Instruct-v1", torch_dtype=torch.bfloat16, device_map="auto")
# We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating
messages = [
    {"role": "user", "content": "[YOUR QUERY GOES HERE]"},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
# Greedy decoding (do_sample=False) gives deterministic output
outputs = pipe(prompt, max_new_tokens=256, do_sample=False)
print(outputs[0]["generated_text"])
```
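
Note that `do_sample=False` selects deterministic, greedy decoding. As a variation not taken from the original card, you could enable sampling for more varied outputs; the temperature and top_p values below are illustrative only:

```python
# Sampled decoding - illustrative values, tune for your use case
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.95)
```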

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

This model is built on LLM technology, which comes with inherent limitations. It may occasionally generate inaccurate or nonsensical outputs. Furthermore, as a 7B model, it is expected to be less robust than larger models, such as 70B variants.
112
+
113
+ ## Citation
114
+
115
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
116
+
117
+ **BibTeX:**
118
+
119
+
120
+ ```bibtex
121
+ @misc{colombo2024saullm7b,
122
+ title={SaulLM-7B: A pioneering Large Language Model for Law},
123
+ author={Pierre Colombo and Telmo Pessoa Pires and Malik Boudiaf and Dominic Culver and Rui Melo and Caio Corro and Andre F. T. Martins and Fabrizio Esposito and Vera Lúcia Raposo and Sofia Morgado and Michael Desa},
124
+ year={2024},
125
+ eprint={2403.03883},
126
+ archivePrefix={arXiv},
127
+ primaryClass={cs.CL}
128
+ }
129
+ ```
130
+