license_name: falcon-llm-license
license_link: https://falconllm.tii.ae/falcon-terms-and-conditions.html
---
# Model Card for Falcon3-1B-Instruct-1.58bit-q2b0

### Falcon3-1B-1.58 Models

**Falcon3-1B-Instruct-1.58bit-q2b0** is a version of **Falcon3-1B-Instruct** quantized with the **q2b0 quantization method** from Candle. This enables extreme compression of the weights while maintaining strong performance across various NLP tasks.
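
Since the quantization comes from Candle, the weights are intended to be consumed from Rust. As a minimal sketch, the weight file could be fetched from the Hub with the `hf-hub` and `anyhow` crates (the same download path Candle's examples use); the repository id and file name below are illustrative assumptions and should be checked against the files actually published with this model.

```rust
use hf_hub::api::sync::Api;

fn main() -> anyhow::Result<()> {
    // Assumed repo id and weight file name, for illustration only:
    // check this repo's "Files and versions" tab for the real names.
    let api = Api::new()?;
    let repo = api.model("tiiuae/Falcon3-1B-Instruct-1.58bit-q2b0".to_string());
    let weights = repo.get("model-q2b0.gguf")?;
    println!("weights cached at {}", weights.display());
    Ok(())
}
```

From there, the downloaded file can be loaded with the quantized-weight support added in the Candle PR listed under Model Sources below.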
## Model Details

### Model Sources

- **Repository:** [tiiuae/Falcon3-1B-Instruct](https://huggingface.co/tiiuae/Falcon3-1B-Instruct)
- **Quantization PR:** [Candle q2b0 Quantization](https://github.com/huggingface/candle/pull/2683)

## Quantization Details

The model has been quantized using the **q2b0** method from Candle. This approach reduces model size significantly while preserving performance. For more details on this quantization technique, refer to [Candle PR #2683](https://github.com/huggingface/candle/pull/2683).
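
For intuition about the storage format: "1.58 bit" refers to ternary weights in {-1, 0, +1} (log2 3 ≈ 1.58), and a ternary value fits in 2 bits, so four weights can be packed into a single byte. The sketch below illustrates only that generic 2-bit packing idea; it is not the actual q2b0 block layout, which is defined in the Candle PR above.

```rust
/// Pack ternary weights (-1, 0, +1) into 2-bit codes, four weights per byte.
/// Illustration of the general idea only -- not the real q2b0 block layout.
fn pack_ternary(weights: &[i8]) -> Vec<u8> {
    weights
        .chunks(4)
        .map(|chunk| {
            chunk.iter().enumerate().fold(0u8, |byte, (i, &w)| {
                let code = (w + 1) as u8; // -1 -> 0b00, 0 -> 0b01, +1 -> 0b10
                byte | (code << (2 * i))
            })
        })
        .collect()
}

/// Recover the first `len` ternary weights from the packed bytes.
fn unpack_ternary(packed: &[u8], len: usize) -> Vec<i8> {
    (0..len)
        .map(|i| ((packed[i / 4] >> (2 * (i % 4))) & 0b11) as i8 - 1)
        .collect()
}

fn main() {
    let w = [-1i8, 0, 1, 1, 0, -1, 1, 0, -1];
    let packed = pack_ternary(&w);
    assert_eq!(unpack_ternary(&packed, w.len()), w);
    println!("{} ternary weights stored in {} bytes", w.len(), packed.len());
}
```

Anything beyond the raw 2-bit codes (block sizes, scales, kernel-side handling) is specific to q2b0 and is best taken from the PR itself.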
## Training Details

For details on the dataset and training process, refer to the original [Falcon3-1B-Instruct repository](https://huggingface.co/tiiuae/Falcon3-1B-Instruct).
## License

This model is licensed under the [Falcon LLM License](https://falconllm.tii.ae/falcon-terms-and-conditions.html).

---

For additional information or questions, please refer to the main [Falcon3-1B-Instruct repository](https://huggingface.co/tiiuae/Falcon3-1B-Instruct).