---
base_model: stabilityai/stablelm-3b-4e1t
datasets: Open-Orca/SlimOrca
tags:
- stablelm-3b-4e1t
- instruct
- finetune
model-index:
- name: slimorca-stablelm-3b-4e1t
  results: []
license: cc-by-sa-4.0
language:
- en
---
[![banner](https://maddes8cht.github.io/assets/buttons/Huggingface-banner.jpg)]()

I'm constantly enhancing these model descriptions to provide you with the most relevant and comprehensive information.

# slimorca-stablelm-3b-4e1t - GGUF
- Model creator: [pansophic](https://huggingface.co/pansophic)
- Original model: [slimorca-stablelm-3b-4e1t](https://huggingface.co/pansophic/slimorca-stablelm-3b-4e1t)

StableLM is a family of models by Stability AI.

## Note:
Current (as of 2023-11-15) implementations of llama.cpp only support GPU offloading of up to 34 layers for this model.
The model will crash immediately if `-ngl` is set larger than 34.
Without GPU acceleration, however, the model works fine.

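
As an illustration (not part of the original description): with the `llama-cpp-python` bindings, loading a quant while respecting this limit might look like the following sketch. The GGUF filename is a placeholder for whichever quant file you downloaded.

```python
# Minimal sketch, assuming the llama-cpp-python bindings are installed
# (pip install llama-cpp-python); the model path is a placeholder.
from llama_cpp import Llama

llm = Llama(
    model_path="slimorca-stablelm-3b-4e1t.Q6_K.gguf",  # placeholder filename
    n_gpu_layers=34,  # stay at or below 34, per the note above
    n_ctx=4096,
)

out = llm(
    "<|im_start|>user\nHow are you?<|im_end|>\n<|im_start|>assistant\n",
    max_tokens=128,
    stop=["<|im_end|>"],
)
print(out["choices"][0]["text"])
```
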
# About GGUF format

`gguf` is the current file format used by the [`ggml`](https://github.com/ggerganov/ggml) library.
A growing list of software supports it and can therefore use this model.
The core project making use of the ggml library is the [llama.cpp](https://github.com/ggerganov/llama.cpp) project by Georgi Gerganov.

# Quantization variants

A number of quantized files are available to cater to your specific needs. Here's how to choose the best option for you:

# Legacy quants

Q4_0, Q4_1, Q5_0, Q5_1 and Q8 are `legacy` quantization types.
Nevertheless, they are fully supported, as there are several circumstances that cause certain models not to be compatible with the modern K-quants.
## Note:
There is now an option to use K-quants even for previously 'incompatible' models, although this involves a fallback solution that makes them not *real* K-quants. More details can be found in the affected model descriptions.
(This mainly refers to Falcon 7b and Starcoder models.)

# K-quants

K-quants are designed around the idea that applying different levels of quantization to specific parts of the model can optimize performance, file size, and memory load.
So, if possible, use K-quants.
With a Q6_K, you'll likely find it challenging to discern a quality difference from the original model - ask your model the same question twice and you may encounter bigger quality differences.

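As a hedged sketch (not from the original description): a single quant variant can be fetched without cloning the whole repository, using the `huggingface_hub` library. The repository id and filename below are placeholders.

```python
# Sketch using huggingface_hub's hf_hub_download; repo_id and filename
# are placeholders -- substitute the actual GGUF repository and the
# quant variant that fits your memory budget (e.g. Q4_K_M vs. Q6_K).
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="maddes8cht/pansophic-slimorca-stablelm-3b-4e1t-gguf",  # placeholder
    filename="slimorca-stablelm-3b-4e1t.Q4_K_M.gguf",               # placeholder
)
print(local_path)  # location of the cached GGUF file
```
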
---

# Original Model Card:
# Model Card for slimorca-stablelm-3b-4e1t

Full finetuning of Stability AI's StableLM-3B-4E1T. The model was trained on the SlimOrca dataset. All samples longer than the context size were removed.

<div style="display: flex; justify-content: center;">
<img src="https://cdn-uploads.huggingface.co/production/uploads/6501bfe0493fd9c8c2e32402/i2fc9OApv6_BgRCKFKg4T.png" alt="orcaslim-stablelm-3b" width="50%" style="display: block; margin: 0 auto;">
</div>

## How to Get Started with the Model

```python
import torch

from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

# ChatML prompt template with placeholders for the system and user messages.
prompt = """<|im_start|>system
{system}<|im_end|>
<|im_start|>user
{user}<|im_end|>
<|im_start|>assistant
"""

system = "You are an advanced and helpful AI assistant."
user = "How are you?"

prompt = prompt.format(system=system, user=user)

# Load the model in bfloat16 on the GPU; the custom model code requires trust_remote_code=True.
model = AutoModelForCausalLM.from_pretrained("pansophic/slimorca-stablelm-3b-4e1t", trust_remote_code=True, torch_dtype=torch.bfloat16).to("cuda")
tokenizer = AutoTokenizer.from_pretrained("pansophic/slimorca-stablelm-3b-4e1t", trust_remote_code=True)

inputs = tokenizer(prompt, return_tensors="pt", return_attention_mask=False).to("cuda")

# Stream generated tokens to stdout as they are produced.
streamer = TextStreamer(tokenizer)

_ = model.generate(**inputs, max_length=512, top_k=40, top_p=0.9, do_sample=True, temperature=0.55, use_cache=True, streamer=streamer)
```

## Prompt formatting

The model uses the ChatML format.

```
<|im_start|>system
{system}<|im_end|>
<|im_start|>user
{user}<|im_end|>
<|im_start|>assistant

```
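
For illustration (not part of the original model card): a small helper that assembles such a ChatML prompt from a list of turns might look like this sketch; `format_chatml` is a hypothetical name.

```python
# Hypothetical helper: builds a ChatML prompt from (role, content) pairs
# and leaves the assistant turn open for the model to complete.
def format_chatml(messages):
    prompt = ""
    for role, content in messages:
        prompt += f"<|im_start|>{role}\n{content}<|im_end|>\n"
    return prompt + "<|im_start|>assistant\n"

# Example:
print(format_chatml([
    ("system", "You are an advanced and helpful AI assistant."),
    ("user", "How are you?"),
]))
```
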

***End of original Model File***
---

## Please consider supporting my work
**Coming Soon:** I'm in the process of launching a sponsorship/crowdfunding campaign for my work. I'm evaluating Kickstarter, Patreon, and the new GitHub Sponsors platform, and I'm hoping for some support and contributions toward the continued availability of these kinds of models. Your support will enable me to provide even more valuable resources and maintain the models you rely on. Your patience and ongoing support are greatly appreciated as I work to make this page an even more valuable resource for the community.

<center>

[![GitHub](https://maddes8cht.github.io/assets/buttons/github-io-button.png)](https://maddes8cht.github.io)
[![Stack Exchange](https://stackexchange.com/users/flair/26485911.png)](https://stackexchange.com/users/26485911)
[![GitHub](https://maddes8cht.github.io/assets/buttons/github-button.png)](https://github.com/maddes8cht)
[![HuggingFace](https://maddes8cht.github.io/assets/buttons/huggingface-button.png)](https://huggingface.co/maddes8cht)
[![Twitter](https://maddes8cht.github.io/assets/buttons/twitter-button.png)](https://twitter.com/maddes1966)

</center>