stablelm-zephyr-3B-localmentor-GGUF

Model creator: remyxai
Original model: stablelm-zephyr-3B_localmentor
GGUF quantization: llama.cpp commit fadde6713506d9e6c124f5680ab8c7abebe31837

Description

Fine-tuned with low-rank adapters (LoRA) on 25K conversational turns discussing tech and startups, drawn from over 800 podcast episodes.

Prompt Template

Per the tokenizer_config.json, the prompt template follows the Zephyr format:

<|system|>
{system_prompt}</s>
<|user|>
{prompt}</s>
<|assistant|>
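A minimal sketch of assembling a prompt string in the Zephyr format shown above, so it can be passed to llama.cpp or any other runtime as raw text. The function name and example strings are illustrative, not part of this repository:

```python
def build_zephyr_prompt(system_prompt: str, prompt: str) -> str:
    """Assemble a raw prompt string following the Zephyr template above."""
    return (
        f"<|system|>\n{system_prompt}</s>\n"
        f"<|user|>\n{prompt}</s>\n"
        f"<|assistant|>\n"
    )

# Example usage (hypothetical prompts):
print(build_zephyr_prompt(
    "You are a startup mentor.",
    "How do I validate a product idea?",
))
```

The trailing `<|assistant|>` line is left open so the model generates the assistant turn as its completion.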
Model size: 2.8B params
Architecture: stablelm
Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 16-bit

