mradermacher
committed on
auto-patch README.md
README.md
CHANGED
@@ -9,7 +9,8 @@ library_name: transformers
 license: other
 license_link: https://falconllm.tii.ae/falcon-mamba-7b-terms-and-conditions.html
 license_name: falcon-mamba-7b-license
-no_imatrix:
+no_imatrix: 'llama.cpp/ggml/src/ggml-cuda/norm.cu:212: GGML_ASSERT(ggml_is_contiguous(src0))
+  failed'
 quantized_by: mradermacher
 ---
 ## About
@@ -22,7 +23,6 @@ quantized_by: mradermacher
 static quants of https://huggingface.co/tiiuae/falcon-mamba-7b-instruct
 
 <!-- provided-files -->
-weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
 ## Usage
 
 If you are unsure how to use GGUF files, refer to one of [TheBloke's
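The `no_imatrix` value added above records a llama.cpp assertion, `GGML_ASSERT(ggml_is_contiguous(src0))`, which fires when a CUDA norm kernel receives a tensor whose elements are not laid out in one unbroken memory run. As a rough illustration of what "contiguous" means for a tensor (a sketch using NumPy as an analogy, not llama.cpp's actual ggml code), a transposed view keeps the same data but swaps strides, so it stops being contiguous:

```python
import numpy as np

# A freshly allocated array is C-contiguous: rows sit back-to-back in memory.
a = np.arange(12, dtype=np.float32).reshape(3, 4)
assert a.flags["C_CONTIGUOUS"]

# Transposing returns a strided view over the same buffer; walking it
# row-by-row now jumps through memory, so it is no longer contiguous.
b = a.T
assert not b.flags["C_CONTIGUOUS"]

# Kernels that demand contiguity (like the asserting ggml CUDA kernel)
# would need an explicit copy first to get a compact layout back.
c = np.ascontiguousarray(b)
assert c.flags["C_CONTIGUOUS"]
```

A kernel indexing such a view as if it were flat would read the wrong elements, which is presumably why ggml asserts rather than proceeding; the failure here is why imatrix quants were not produced for this model.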