---
tags:
  - pruna-ai
  - llama-cpp
  - gguf-my-repo
base_model: microsoft/Phi-3-mini-128k-instruct
metrics:
  - memory_disk
  - memory_inference
  - inference_latency
  - inference_throughput
  - inference_CO2_emissions
  - inference_energy_consumption
thumbnail: >-
  https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg
---

# suraiy/microsoft-Phi-3-mini-128k-instruct-HQQ-4bit-smashed-Q4_K_M-GGUF

This model was converted to GGUF format from [`PrunaAI/microsoft-Phi-3-mini-128k-instruct-HQQ-4bit-smashed`](https://huggingface.co/PrunaAI/microsoft-Phi-3-mini-128k-instruct-HQQ-4bit-smashed) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/PrunaAI/microsoft-Phi-3-mini-128k-instruct-HQQ-4bit-smashed) for more details on the model.
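If you prefer to fetch the GGUF file yourself rather than letting llama.cpp download it on first use, you can pull it with the Hugging Face CLI. A minimal sketch (the target directory is an assumption):

```bash
# Download the quantized GGUF file into the current directory
huggingface-cli download suraiy/microsoft-Phi-3-mini-128k-instruct-HQQ-4bit-smashed-Q4_K_M-GGUF \
  microsoft-phi-3-mini-128k-instruct-hqq-4bit-smashed-q4_k_m.gguf --local-dir .
```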

## Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux):

```bash
brew install ggerganov/ggerganov/llama.cpp
```

Invoke the llama.cpp server or the CLI.

CLI:

```bash
llama-cli --hf-repo suraiy/microsoft-Phi-3-mini-128k-instruct-HQQ-4bit-smashed-Q4_K_M-GGUF --model microsoft-phi-3-mini-128k-instruct-hqq-4bit-smashed-q4_k_m.gguf -p "The meaning to life and the universe is"
```
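For an interactive chat session instead of a one-shot completion, recent llama-cli builds also support a conversation mode. A sketch (flag availability depends on your llama.cpp version):

```bash
# Start an interactive chat using the model's built-in chat template
llama-cli --hf-repo suraiy/microsoft-Phi-3-mini-128k-instruct-HQQ-4bit-smashed-Q4_K_M-GGUF \
  --model microsoft-phi-3-mini-128k-instruct-hqq-4bit-smashed-q4_k_m.gguf -cnv
```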

Server:

```bash
llama-server --hf-repo suraiy/microsoft-Phi-3-mini-128k-instruct-HQQ-4bit-smashed-Q4_K_M-GGUF --model microsoft-phi-3-mini-128k-instruct-hqq-4bit-smashed-q4_k_m.gguf -c 2048
```
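Once running, llama-server exposes an OpenAI-compatible API, by default on port 8080. A request might look like the following sketch (the port and payload shape are assumptions based on llama-server defaults):

```bash
# Query the local server via the OpenAI-compatible chat endpoint
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "user", "content": "What is the meaning of life?"}
    ],
    "max_tokens": 128
  }'
```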

Note: You can also use this checkpoint directly through the usage steps listed in the llama.cpp repo:

```bash
git clone https://github.com/ggerganov/llama.cpp && \
cd llama.cpp && \
make && \
./main -m microsoft-phi-3-mini-128k-instruct-hqq-4bit-smashed-q4_k_m.gguf -n 128
```
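On recent llama.cpp checkouts the Makefile build has been replaced by CMake and the `main` binary has been renamed to `llama-cli`; in that case the equivalent steps would be roughly as follows (a sketch, assuming a current checkout):

```bash
git clone https://github.com/ggerganov/llama.cpp && \
cd llama.cpp && \
cmake -B build && cmake --build build --config Release && \
./build/bin/llama-cli -m microsoft-phi-3-mini-128k-instruct-hqq-4bit-smashed-q4_k_m.gguf -n 128
```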