Poro-34B-gguf
This is a GGUF quantization of the Poro-34B model.
Please refer to the original Poro-34B repository's model card for details.
The current revision is a quantization of the 1000B token checkpoint.
The conversion was done with llama.cpp version b2354 (commit e25fb4b18fcedb9bed6be4585cf842e9a669b28b) on a Google Compute Engine machine generously sponsored by Valohai.
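
The GGUF file can be used with any llama.cpp-based runtime. Below is a minimal sketch using the llama-cpp-python bindings; the package, the file name (`poro-34b.Q4_K_M.gguf`), and the generation parameters are assumptions for illustration, not part of this repository's documentation.

```python
# A minimal sketch of loading the quantized model with llama-cpp-python.
# The file name and parameters below are assumptions -- point model_path
# at the GGUF file you actually download from this repository.
from llama_cpp import Llama

llm = Llama(
    model_path="poro-34b.Q4_K_M.gguf",  # hypothetical quantization variant
    n_ctx=2048,                          # context window size
    n_gpu_layers=0,                      # raise to offload layers to a GPU
)

# Run a short completion to check that the model loads and generates text.
output = llm("Suomi on", max_tokens=32)
print(output["choices"][0]["text"])
```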