---
license: cc-by-nc-2.0
pipeline_tag: text-generation
inference: false
library_name: transformers
---
## GGUF Quantizations of laserxtral
Join our Discord! https://discord.gg/vT3sktQ3zb

This repository contains GGUF format model files for our laserxtral model.

**Note for manual downloaders:** you generally do not need to download the whole repository. Pick a single file from the table below and fetch it with your downloader of choice (a minimal Python example follows the table).
## Provided files
| Name | Quant method | Bits | Size |
| ---- | ---- | ---- | ---- |
| [cognitivecomputations_laserxtral-Q2_K.gguf](https://huggingface.co/cognitivecomputations/laserxtral-GGUF/blob/main/cognitivecomputations_laserxtral-Q2_K.gguf) | Q2_K | 2 | 8.8 GB |
| [cognitivecomputations_laserxtral-Q3_K_M.gguf](https://huggingface.co/cognitivecomputations/laserxtral-GGUF/blob/main/cognitivecomputations_laserxtral-Q3_K_M.gguf) | Q3_K_M | 3 | 11.6 GB |
| [cognitivecomputations_laserxtral-Q4_K_M.gguf](https://huggingface.co/cognitivecomputations/laserxtral-GGUF/blob/main/cognitivecomputations_laserxtral-Q4_K_M.gguf) | Q4_K_M | 4 | 14.6 GB |
| [cognitivecomputations_laserxtral-Q5_K_M.gguf](https://huggingface.co/cognitivecomputations/laserxtral-GGUF/blob/main/cognitivecomputations_laserxtral-Q5_K_M.gguf) | Q5_K_M | 5 | 17.1 GB |
| [cognitivecomputations_laserxtral-Q6_K.gguf](https://huggingface.co/cognitivecomputations/laserxtral-GGUF/blob/main/cognitivecomputations_laserxtral-Q6_K.gguf) | Q6_K | 6 | 19.8 GB |
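
## Downloading a single file

A minimal sketch of one way to fetch just one quantization, using the `huggingface_hub` Python package (assumed installed via `pip install huggingface_hub`); the Q4_K_M file is used here purely as an example, substitute any filename from the table above.

```python
from huggingface_hub import hf_hub_download

# Download only the selected GGUF file instead of cloning the whole repository.
model_path = hf_hub_download(
    repo_id="cognitivecomputations/laserxtral-GGUF",
    filename="cognitivecomputations_laserxtral-Q4_K_M.gguf",  # pick any file from the table
    local_dir=".",
)
print(f"Model saved to: {model_path}")
```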