mlc-chat-neural-chat-7b-v3-1-q3f16_1

An MLC-compiled version of Neural Chat 7B v3.1, quantized with the q3f16_1 scheme for running locally on mobile devices.

Requires a build of MLC Chat for iOS that supports Mistral-based models. As of December 12, 2023, that means building MLC Chat for iOS from source, from the mlc-llm repository (MLC.ai on GitHub).
