---
language:
  - multilingual
license: mit
tags:
  - nlp
  - code
  - llama-cpp
  - gguf-my-repo
license_link: >-
  https://huggingface.co/microsoft/Phi-3-medium-128k-instruct/resolve/main/LICENSE
pipeline_tag: text-generation
inference:
  parameters:
    temperature: 0.7
widget:
  - messages:
      - role: user
        content: Can you provide ways to eat combinations of bananas and dragonfruits?
---

suraiy/Phi-3-medium-128k-instruct-Q4_K_M-GGUF

This model was converted to GGUF format from microsoft/Phi-3-medium-128k-instruct using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.

Use with llama.cpp

Install llama.cpp through brew.

brew install ggerganov/ggerganov/llama.cpp
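The formula should put the llama-cli and llama-server binaries on your PATH. A quick sanity check, assuming a recent llama.cpp build that supports the --version flag:

llama-cli --version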

Invoke the llama.cpp server or the CLI.

CLI:

llama-cli --hf-repo suraiy/Phi-3-medium-128k-instruct-Q4_K_M-GGUF --model phi-3-medium-128k-instruct-q4_k_m.gguf -p "The meaning to life and the universe is"
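For an interactive chat session instead of a raw completion, llama-cli also offers a conversation mode. A sketch, assuming your llama.cpp version supports the -cnv flag (in that mode -p is treated as the system prompt):

llama-cli --hf-repo suraiy/Phi-3-medium-128k-instruct-Q4_K_M-GGUF --model phi-3-medium-128k-instruct-q4_k_m.gguf -cnv -p "You are a helpful assistant."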

Server:

llama-server --hf-repo suraiy/Phi-3-medium-128k-instruct-Q4_K_M-GGUF --model phi-3-medium-128k-instruct-q4_k_m.gguf -c 2048
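Once the server is running you can send it OpenAI-style chat requests. A minimal sketch, assuming the default address of 127.0.0.1:8080 and the /v1/chat/completions endpoint exposed by recent llama-server builds (the temperature matches the value in the metadata above):

curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"}], "temperature": 0.7}'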

Note: You can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.

git clone https://github.com/ggerganov/llama.cpp && \
cd llama.cpp && \
make && \
./main -m phi-3-medium-128k-instruct-q4_k_m.gguf -n 128
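The ./main invocation above expects the GGUF file to be present locally. One way to fetch it, assuming you have the huggingface_hub CLI installed:

huggingface-cli download suraiy/Phi-3-medium-128k-instruct-Q4_K_M-GGUF phi-3-medium-128k-instruct-q4_k_m.gguf --local-dir .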