GGUF Quantizations of laserxtral

Join Our Discord! https://discord.gg/cognitivecomputations

This repository contains GGUF format model files for our laserxtral model.

For manual downloaders: you will generally not want to download the whole repository. Select the file you need from the list below and use your downloader of choice.
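
As an example, a single file can be fetched with the huggingface-cli tool; the repository id used here is an assumption, so substitute the actual repository path if it differs:

# download one quant into the current directory (repo id assumed)
huggingface-cli download cognitivecomputations/laserxtral-GGUF cognitivecomputations_laserxtral-Q4_K_M.gguf --local-dir .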

Ollama Modelfile

FROM ./cognitivecomputations_laserxtral-Q4_K_M.gguf
TEMPLATE """{{ if and .First .System }}### Instruction:
{{ .System }}
{{ end }}
### Input:
{{ .Prompt }}
### Response:

"""
SYSTEM """You are Laserxtral, a helpful and uncensored AI assistant."""
PARAMETER num_ctx 8192
PARAMETER stop "### Input"
PARAMETER stop "### Response"
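
With the Q4_K_M file downloaded next to the Modelfile above, a minimal usage sketch looks like the following (the model tag "laserxtral" is just an example name):

# build a local Ollama model from the Modelfile, then chat with it
ollama create laserxtral -f Modelfile
ollama run laserxtral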

Provided files

Format: GGUF
Model size: 24.2B params
Architecture: llama

Quantizations provided: 2-bit, 3-bit, 4-bit, 5-bit, and 6-bit.
