Update README.md
README.md CHANGED
@@ -7,6 +7,11 @@ library_name: transformers
 license: apache-2.0
 quantized_by: mradermacher
 ---
+
+> [!Important]
+> Big thanks to [@mradermacher](https://huggingface.co/mradermacher) for helping us build this repository of GGUFs for our [Xwen-7B-Chat](https://huggingface.co/xwen-team/Xwen-7B-Chat)!
+
+
 ## About
 
 <!-- ### quantize_version: 2 -->
@@ -58,8 +63,6 @@ questions you might have and/or if you want some other model quantized.
 
 ## Thanks
 
-
-me use its servers and providing upgrades to my workstation to enable
-this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
+Big thanks to [@mradermacher](https://huggingface.co/mradermacher) for helping us build this repository of GGUFs for our [Xwen-7B-Chat](https://huggingface.co/xwen-team/Xwen-7B-Chat)!
 
 <!-- end -->