
Model Card for bling-phi-2-v0 (GGUF)

These are quantized 4-bit, 5-bit, and 8-bit versions, plus an fp16 version, of bling-phi-2-v0 by llmware. The quantized models are in GGUF format and were produced with llama.cpp.
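A minimal sketch of running one of these GGUF files locally, assuming a recent llama.cpp build on your PATH; the exact quantized filename below is an assumption, so substitute whichever variant you downloaded:

```shell
# Run a short completion with llama.cpp's CLI.
# -m : path to the downloaded GGUF file (filename assumed here)
# -p : prompt text
# -n : maximum number of tokens to generate
llama-cli -m bling-phi-2-v0.Q4_K_M.gguf \
  -p "<human>: What is quantization?\n<bot>:" \
  -n 128
```

Lower-bit variants trade some accuracy for smaller downloads and lower memory use; the 8-bit file stays closest to the original weights.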

Model Description

For more information about the prompt template, refer to the original model repo.

Original Model Details:

Creator: llmware

Link: https://huggingface.co/llmware/bling-phi-2-v0

GGUF Details

Model size: 2.78B params

Architecture: phi2
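The bit widths above translate directly into approximate file sizes. As a rough sketch (ignoring GGUF metadata and per-block quantization overhead, so real files are somewhat larger), the size is simply parameters × bits per weight:

```python
# Rough GGUF file-size estimate from the card's 2.78B parameter count.
# Ignores format overhead, so actual files are slightly larger.
PARAMS = 2.78e9


def approx_size_gb(bits_per_weight: float) -> float:
    """Approximate model file size in GB for a given bit width."""
    return PARAMS * bits_per_weight / 8 / 1e9


for name, bits in [("fp16", 16), ("8-bit", 8), ("5-bit", 5), ("4-bit", 4)]:
    print(f"{name}: ~{approx_size_gb(bits):.2f} GB")
```

This is why the 4-bit file is roughly a quarter the size of fp16: ~1.4 GB versus ~5.6 GB.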

Available quantizations: 4-bit, 5-bit, 8-bit
