Quantized sarashina2.2-1b-instruct-v0.1 (AWQ)
This is a 4-bit AWQ quantized version of sbintuitions/sarashina2.2-1b-instruct-v0.1.
Blog (Japanese): https://zenn.dev/sinchir0/articles/ad41d487d3a52b
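A minimal inference sketch (not taken from the original card): it assumes the checkpoint loads through transformers' AWQ integration with autoawq installed on a CUDA machine, and that the tokenizer ships the original instruct model's chat template. The prompt and generation settings are illustrative only.

```python
# Sketch: load the AWQ checkpoint via transformers (requires autoawq and a CUDA GPU).
# The prompt and generation settings here are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sinchir0/sarashina2.2-1b-instruct-v0.1-awq"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Build a chat-formatted prompt, assuming the instruct model's chat template is present.
messages = [{"role": "user", "content": "自己紹介をしてください。"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```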
License
The original model is licensed under the MIT License. The license and copyright notice from the original model are retained in this distribution.
See LICENSE for details.
Changes
- Converted to AWQ INT4 format using AutoAWQ (see the sketch after this list).
- Tested with autoawq>=0.1.6.
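A sketch of the conversion step with AutoAWQ. This is not the exact script used for this release; in particular, the quant_config values below (zero point, group size 128, GEMM kernel) are common defaults and are assumptions here.

```python
# Sketch of AWQ INT4 conversion with AutoAWQ; the quant_config values are
# assumed defaults, not necessarily the settings used for this checkpoint.
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

base_model = "sbintuitions/sarashina2.2-1b-instruct-v0.1"
quant_path = "sarashina2.2-1b-instruct-v0.1-awq"

quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

model = AutoAWQForCausalLM.from_pretrained(base_model)
tokenizer = AutoTokenizer.from_pretrained(base_model)

# Calibrate on AutoAWQ's default calibration data and quantize the weights to INT4.
model.quantize(tokenizer, quant_config=quant_config)

model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```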
Disclaimer
This quantized model is provided as-is, with no guarantees of accuracy or safety. Use at your own risk.
Base model: sbintuitions/sarashina2.2-1b