mlc-chat-neural-chat-7b-v3-1-q3f16_1
An MLC-compiled version of Neural Chat 7B v3.1, quantized with MLC's q3f16_1 scheme (3-bit weights with float16 activations) for running locally on mobile devices.
Requires a build of MLC Chat for iOS that supports Mistral-based models. As of December 12, 2023, that means building MLC Chat for iOS from source, using the mlc-llm repository from MLC.ai on GitHub.