Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)


# ChemWiz_16bit - GGUF
- Model creator: https://huggingface.co/dbands/
- Original model: https://huggingface.co/dbands/ChemWiz_16bit/

| Name | Quant method | Size |
| ---- | ---- | ---- |
| [ChemWiz_16bit.Q2_K.gguf](https://huggingface.co/RichardErkhov/dbands_-_ChemWiz_16bit-gguf/blob/main/ChemWiz_16bit.Q2_K.gguf) | Q2_K | 2.81GB |
| [ChemWiz_16bit.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/dbands_-_ChemWiz_16bit-gguf/blob/main/ChemWiz_16bit.IQ3_XS.gguf) | IQ3_XS | 3.12GB |
| [ChemWiz_16bit.IQ3_S.gguf](https://huggingface.co/RichardErkhov/dbands_-_ChemWiz_16bit-gguf/blob/main/ChemWiz_16bit.IQ3_S.gguf) | IQ3_S | 3.26GB |
| [ChemWiz_16bit.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/dbands_-_ChemWiz_16bit-gguf/blob/main/ChemWiz_16bit.Q3_K_S.gguf) | Q3_K_S | 3.25GB |
| [ChemWiz_16bit.IQ3_M.gguf](https://huggingface.co/RichardErkhov/dbands_-_ChemWiz_16bit-gguf/blob/main/ChemWiz_16bit.IQ3_M.gguf) | IQ3_M | 3.33GB |
| [ChemWiz_16bit.Q3_K.gguf](https://huggingface.co/RichardErkhov/dbands_-_ChemWiz_16bit-gguf/blob/main/ChemWiz_16bit.Q3_K.gguf) | Q3_K | 3.55GB |
| [ChemWiz_16bit.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/dbands_-_ChemWiz_16bit-gguf/blob/main/ChemWiz_16bit.Q3_K_M.gguf) | Q3_K_M | 3.55GB |
| [ChemWiz_16bit.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/dbands_-_ChemWiz_16bit-gguf/blob/main/ChemWiz_16bit.Q3_K_L.gguf) | Q3_K_L | 3.81GB |
| [ChemWiz_16bit.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/dbands_-_ChemWiz_16bit-gguf/blob/main/ChemWiz_16bit.IQ4_XS.gguf) | IQ4_XS | 2.25GB |
| [ChemWiz_16bit.Q4_0.gguf](https://huggingface.co/RichardErkhov/dbands_-_ChemWiz_16bit-gguf/blob/main/ChemWiz_16bit.Q4_0.gguf) | Q4_0 | 4.13GB |
| [ChemWiz_16bit.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/dbands_-_ChemWiz_16bit-gguf/blob/main/ChemWiz_16bit.IQ4_NL.gguf) | IQ4_NL | 4.16GB |
| [ChemWiz_16bit.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/dbands_-_ChemWiz_16bit-gguf/blob/main/ChemWiz_16bit.Q4_K_S.gguf) | Q4_K_S | 4.15GB |
| [ChemWiz_16bit.Q4_K.gguf](https://huggingface.co/RichardErkhov/dbands_-_ChemWiz_16bit-gguf/blob/main/ChemWiz_16bit.Q4_K.gguf) | Q4_K | 4.36GB |
| [ChemWiz_16bit.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/dbands_-_ChemWiz_16bit-gguf/blob/main/ChemWiz_16bit.Q4_K_M.gguf) | Q4_K_M | 4.36GB |
| [ChemWiz_16bit.Q4_1.gguf](https://huggingface.co/RichardErkhov/dbands_-_ChemWiz_16bit-gguf/blob/main/ChemWiz_16bit.Q4_1.gguf) | Q4_1 | 4.54GB |
| [ChemWiz_16bit.Q5_0.gguf](https://huggingface.co/RichardErkhov/dbands_-_ChemWiz_16bit-gguf/blob/main/ChemWiz_16bit.Q5_0.gguf) | Q5_0 | 4.95GB |
| [ChemWiz_16bit.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/dbands_-_ChemWiz_16bit-gguf/blob/main/ChemWiz_16bit.Q5_K_S.gguf) | Q5_K_S | 4.95GB |
| [ChemWiz_16bit.Q5_K.gguf](https://huggingface.co/RichardErkhov/dbands_-_ChemWiz_16bit-gguf/blob/main/ChemWiz_16bit.Q5_K.gguf) | Q5_K | 5.07GB |
| [ChemWiz_16bit.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/dbands_-_ChemWiz_16bit-gguf/blob/main/ChemWiz_16bit.Q5_K_M.gguf) | Q5_K_M | 5.07GB |
| [ChemWiz_16bit.Q5_1.gguf](https://huggingface.co/RichardErkhov/dbands_-_ChemWiz_16bit-gguf/blob/main/ChemWiz_16bit.Q5_1.gguf) | Q5_1 | 5.36GB |
| [ChemWiz_16bit.Q6_K.gguf](https://huggingface.co/RichardErkhov/dbands_-_ChemWiz_16bit-gguf/blob/main/ChemWiz_16bit.Q6_K.gguf) | Q6_K | 5.82GB |
| [ChemWiz_16bit.Q8_0.gguf](https://huggingface.co/RichardErkhov/dbands_-_ChemWiz_16bit-gguf/blob/main/ChemWiz_16bit.Q8_0.gguf) | Q8_0 | 7.54GB |
+
Original model description:
|
44 |
+
---
|
45 |
+
datasets:
|
46 |
+
- Vezora/Open-Critic-GPT
|
47 |
+
- dbands/ChemistryCoder
|
48 |
+
- iamtarun/python_code_instructions_18k_alpaca
|
49 |
+
- AI-MO/NuminaMath-CoT
|
50 |
+
- AdaptLLM/med_knowledge_prob
|
51 |
+
pipeline_tag: text-generation
|
52 |
+
---
|
53 |
+
2024-08-05: Use the following prompting to get the best out of this model:
|

```python
alpaca_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{}

### Input:
{}

### Response:
{}"""
```

The model will return the Response.
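For example, the template can be filled with `str.format`, leaving the Response slot empty for the model to complete (the instruction and input below are purely illustrative):

```python
alpaca_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{}

### Input:
{}

### Response:
{}"""

# Leave the third slot empty; the model generates the Response section.
prompt = alpaca_prompt.format(
    "Write Python code that computes the molecular weight of a compound.",
    "The compound is given as the SMILES string CCO.",
    "",
)
print(prompt)
```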


2024-08-01: This model is still making up chemical SMILES strings; I will resolve this through fine-tuning. I have also started training the model on mathematical reasoning.
This model makes stuff up, lots of stuff. I do like the fact that the model creates working code, though.

2024-08-01: I have now started adapting this model to create chemistry-based code suitable for use with RDKit. I used a small data set so as to perform a proof of concept.

This model is highly experimental; do not use it in production scenarios yet.
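Because the model invents SMILES strings, its output should be validated before use; the standard check is RDKit's `Chem.MolFromSmiles`, which returns `None` for unparsable input. As a dependency-free illustration, a bracket-balance pre-filter (a hypothetical helper, not a full SMILES parser) can reject obviously truncated output cheaply:

```python
def smiles_brackets_balanced(smiles: str) -> bool:
    """Cheap pre-filter for model-generated SMILES: every opening
    parenthesis/bracket must have a matching close. This does NOT
    validate chemistry; pass survivors to RDKit for real parsing."""
    closers = {"(": ")", "[": "]"}
    stack = []
    for ch in smiles:
        if ch in closers:
            stack.append(closers[ch])
        elif ch in (")", "]"):
            if not stack or stack.pop() != ch:
                return False
    return not stack

print(smiles_brackets_balanced("CC(=O)Oc1ccccc1C(=O)O"))  # aspirin: True
print(smiles_brackets_balanced("CC(=O"))                  # truncated: False
```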

2024-07-27:
This is a test model for creating a plan for code that can run in RDKit to simulate chemical reactions. I have limited the outputs to producing only the plan for implementing the code, not the code itself. This model is intended only for researchers; none of the outputs may be used in the real world, as these models can hallucinate and produce unpredictable outcomes.


---
base_model: dbands/tantrum_16bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
---

# Uploaded model

- **Developed by:** dbands
- **License:** apache-2.0
- **Finetuned from model:** dbands/tantrum_16bit

This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)