# MobileLLM-600M-MNN

## Introduction

This is a 4-bit quantized MNN model exported from MobileLLM-600M with [llm-export](https://github.com/wangzhaode/llm-export).
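
## Usage

For reference, here is a minimal sketch of pulling the weights and running a prompt from Python. The `MNN.llm` calls (`create`, `load`, `response`) follow MNN's pymnn documentation but may differ between MNN releases, and the local directory name is illustrative; verify the exact API against the MNN docs for your installed version.

```python
# Assumes `pip install huggingface_hub MNN` (the pymnn wheel ships MNN.llm).
from huggingface_hub import snapshot_download

import MNN.llm as llm

# Fetch the exported files (config.json plus the 4-bit .mnn weights).
model_dir = snapshot_download(
    repo_id="taobao-mnn/MobileLLM-600M-MNN",
    local_dir="MobileLLM-600M-MNN",  # illustrative local path
)

# Build the LLM runtime from the exported config and load the weights.
model = llm.create(f"{model_dir}/config.json")
model.load()

# Single-turn generation.
print(model.response("What is MNN?"))
```

The same `config.json` can also be fed to MNN's C++ `llm_demo` tool if you build MNN from source, which is typically used for on-device testing.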
