ruGPT-3.5-13B-GGUF

Description

This repository contains quantized GGUF format model files for ruGPT-3.5-13B.

Prompt template:

{prompt}

(the model takes the raw prompt as-is, with no surrounding chat template)

Example llama.cpp command

./main -m ruGPT-3.5-13B-Q4_K_M.gguf -c 2048 -n -1 -p 'Стих про программиста может быть таким:'

For other parameters and usage details, please refer to the llama.cpp documentation.
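If you drive llama.cpp from a script, the command above can be assembled programmatically. The sketch below is a hypothetical helper (the function name and defaults are illustrative, not part of this repository); it mirrors the `-m`, `-c`, `-n`, and `-p` flags from the example.

```python
import shlex

def build_llama_cmd(model_path, prompt, ctx=2048, n_predict=-1):
    """Assemble an argv list for the llama.cpp ./main example above.

    ctx maps to -c (context size), n_predict to -n (-1 = unlimited).
    """
    return [
        "./main",
        "-m", model_path,
        "-c", str(ctx),
        "-n", str(n_predict),
        "-p", prompt,
    ]

cmd = build_llama_cmd(
    "ruGPT-3.5-13B-Q4_K_M.gguf",
    "Стих про программиста может быть таким:",
)
# Print a shell-safe version of the command for inspection.
print(shlex.join(cmd))
```

Passing an argv list (e.g. to `subprocess.run`) avoids shell-quoting problems with non-ASCII prompts like the Russian example above.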

Model details

Format: GGUF
Model size: 13.1B params
Architecture: gpt2
Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
