JackFram's llama-160m for Web-LLM
This is a compiled build of JackFram/llama-160m for MLC Web-LLM, using q4f32_1 quantization (4-bit weights with float32 activations).
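To use a compiled model like this one, Web-LLM expects a model record in its app config pointing at the weights and the compiled WebGPU library. The sketch below shows the general shape of such a record; the `model_id`, weight URL, and `model_lib` URL are illustrative placeholders, not values published with this card, and the `ModelRecord` interface here is a simplified local stand-in for the real Web-LLM type.

```typescript
// Sketch: registering a compiled MLC model in a Web-LLM app config.
// All URLs and the model_id below are placeholder assumptions --
// substitute the actual values for this repository.

// Minimal local shape of a Web-LLM model record (subset of the real type).
interface ModelRecord {
  model: string;      // URL of the quantized weights (an MLC model repo)
  model_id: string;   // name used to select the model at runtime
  model_lib: string;  // URL of the compiled WebGPU WASM library
}

const appConfig: { model_list: ModelRecord[] } = {
  model_list: [
    {
      model: "https://huggingface.co/<this-repo>",                      // placeholder
      model_id: "llama-160m-q4f32_1",                                   // assumed id
      model_lib: "https://example.com/llama-160m-q4f32_1-webgpu.wasm",  // placeholder
    },
  ],
};

// In a browser with WebGPU and @mlc-ai/web-llm installed, this config
// would be handed to the engine factory, along the lines of:
//   const engine = await CreateMLCEngine("llama-160m-q4f32_1", { appConfig });

console.log(appConfig.model_list[0].model_id);
```

The `model_lib` entry is what distinguishes a compiled model card like this one from a plain weights repo: the WASM library carries the WebGPU kernels generated for this architecture and quantization scheme.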