library_name: transformers
pipeline_tag: text-generation
quantized_by: Tanvir1337
---

# Tanvir1337/BanglaLLama-3-8b-BnWiki-Instruct-GGUF

This model has been quantized using [llama.cpp](https://github.com/ggerganov/llama.cpp/), a high-performance inference engine for large language models.
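
As a quick, hedged sketch (not part of the original card), here is one way to load a GGUF file from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) bindings; the filename is a placeholder, so substitute whichever quant you download from this repo:

```python
# Minimal sketch using llama-cpp-python (pip install llama-cpp-python).
from llama_cpp import Llama

llm = Llama(
    model_path="./BanglaLLama-3-8b-BnWiki-Instruct.Q5_K_M.gguf",  # hypothetical filename
    n_ctx=2048,  # context window size
)

output = llm("Hello", max_tokens=64)
print(output["choices"][0]["text"])
```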

## System Prompt Format

To interact with the model, use the following prompt format:

```
{System}
### Prompt:
{User}
### Response:
```
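
As an illustrative sketch, the template can be filled in programmatically; the helper name, example strings, and stop sequence below are assumptions, not part of the model card:

```python
# Sketch: assemble the documented prompt template and generate a response.
# `llm` is a llama_cpp.Llama instance as in the example above.
def build_prompt(system: str, user: str) -> str:
    return f"{system}\n### Prompt:\n{user}\n### Response:\n"

prompt = build_prompt(
    system="You are a helpful assistant that answers in Bangla.",
    user="Translate 'good morning' into Bangla.",
)

# Stopping on "### Prompt:" prevents the model from generating a new turn
# (an assumption -- adjust if the model behaves differently).
output = llm(prompt, max_tokens=128, stop=["### Prompt:"])
print(output["choices"][0]["text"])
```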

## Usage Instructions

If you're new to using GGUF files, refer to [TheBloke's README](https://huggingface.co/TheBloke/CapybaraHermes-2.5-Mistral-7B-GGUF) for detailed instructions.
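
If you prefer to fetch a single file programmatically rather than cloning the whole repository, the `huggingface_hub` library can do so. A hedged sketch follows; the filename is hypothetical, so check this repo's file list for the actual quant names:

```python
# Sketch: download one quantized file with huggingface_hub (pip install huggingface_hub).
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="Tanvir1337/BanglaLLama-3-8b-BnWiki-Instruct-GGUF",
    filename="BanglaLLama-3-8b-BnWiki-Instruct.Q5_K_M.gguf",  # hypothetical filename
)
print(path)  # local path of the cached download
```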

## Quantization Options

The following graph compares various quantization types (lower perplexity is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

For more information on quantization, see [Artefact2's notes](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9).

## Choosing the Right Model File

To select the optimal model file, consider the following factors:

1. **Memory constraints**: Determine how much RAM and/or VRAM you have available.
2. **Speed vs. quality**: If you prioritize speed, choose a model file that fits entirely within your GPU's VRAM. For maximum quality, consider one that fits within the combined RAM and VRAM of your system (see the sketch after this list).
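
As a rough sketch of that trade-off (assuming llama-cpp-python; the layer count and filename are illustrative, not tuned for this model):

```python
# Sketch: trade speed for capacity by controlling GPU offload.
# n_gpu_layers=-1 offloads all layers to the GPU (fastest, requires the
# file to fit in VRAM); a smaller value splits the model between VRAM
# and system RAM, which is slower but usable on smaller GPUs.
from llama_cpp import Llama

llm = Llama(
    model_path="./BanglaLLama-3-8b-BnWiki-Instruct.Q5_K_M.gguf",  # hypothetical filename
    n_gpu_layers=-1,  # e.g. set to 20 for a partial offload
)
```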

**Quantization formats**:

* **K-quants** (e.g., Q5_K_M): A good starting point, offering a balance between speed and quality.
* **I-quants** (e.g., IQ3_M): Newer and more efficient, but may require specific hardware configurations (e.g., cuBLAS or rocBLAS).

**Hardware compatibility**:

* **I-quants**: Not compatible with Vulkan. If you have an AMD card, make sure you're using the rocBLAS build or another compatible inference engine.

For more information on the features and trade-offs of each quantization format, refer to the [llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix).