Running in the browser?

#16
by BoscoTheDog

The model card mentions that it can run on a wide range of devices using ONNX.

I'm currently waiting for llama.cpp to support this model, and then I'll try to run it in the browser through Wllama. Or perhaps WebLLM could support it, or even Transformers.js, as this model seems tailor-made for the browser-based use case (kudos!).
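
For reference, here is a rough sketch of what I imagine loading it through Transformers.js could look like, assuming someone publishes an ONNX-converted checkpoint. The `Xenova/Phi-3-mini-4k-instruct` repo id below is hypothetical, and the prompt format is the one from the model card:

```js
import { pipeline } from '@xenova/transformers';

// Hypothetical repo id for an ONNX-converted Phi-3 mini; not confirmed to exist yet.
const generator = await pipeline('text-generation', 'Xenova/Phi-3-mini-4k-instruct');

// Phi-3 chat format: <|user|> ... <|end|> <|assistant|>
const prompt = '<|user|>\nWhy does the sky appear blue?<|end|>\n<|assistant|>\n';

const output = await generator(prompt, { max_new_tokens: 128 });
console.log(output[0].generated_text);
```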

But I was wondering whether you yourselves have created an example implementation of running this model (especially the 128K version...) in a web browser?

Microsoft org

You read our mind! Stay tuned, we will keep you posted.

parinitarahi changed discussion status to closed

You can modify the MODELS object in index.html of the Candle Phi WASM demo to include Phi-3 mini.

Remember to use a proper prompt template: `<|user|> {{Prompt}} <|end|> <|assistant|>`
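
A rough sketch of what such an entry might look like. The field names are guessed from the demo's existing phi entries, and the base_url and file names are placeholders, so check them against the actual index.html and against a repo that hosts the quantized weights, tokenizer, and config:

```js
// Hypothetical entry for the MODELS object in the Candle Phi WASM demo's index.html.
// Replace the placeholder URL and file names with a repo that actually hosts a
// quantized Phi-3 mini GGUF plus its tokenizer.json and config.
const MODELS = {
  phi_3_mini_4k_q4: {
    base_url: "https://huggingface.co/<user>/<phi-3-mini-gguf-repo>/resolve/main/",
    model: "phi-3-mini-4k-instruct-q4.gguf",
    tokenizer: "tokenizer.json",
    config: "config.json",
    quantized: true,
    seq_len: 4096,
    size: "2.3 GB",
  },
  // ...existing entries...
};
```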

@rugbysta I tried that, as per your suggestion, but that project uses .gguf files, not ONNX. It also requires a tokenizer and other files, which are not available on https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-gguf/tree/main, let alone for a 128K context version.

As far as I can tell, llama.cpp only just released a version of its conversion tool that can generate .gguf files for Phi-3. But llama.cpp doesn't support the 128K context version yet.

@parinitarahi You closed the discussion? Am I missing something?
