
# mitkox/gemma-2b-dpo-uncensored-4bit

## Use with mlx

Install the mlx-lm package, then load the quantized weights and generate:

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("mitkox/gemma-2b-dpo-uncensored-4bit")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
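For an instruction-tuned Gemma checkpoint, raw prompts usually work better when wrapped in the tokenizer's chat template. A minimal sketch, assuming this export preserves Gemma's chat template (as mlx-lm conversions typically do):

```python
from mlx_lm import load, generate

model, tokenizer = load("mitkox/gemma-2b-dpo-uncensored-4bit")

# Format the user turn with the chat template; tokenize=False returns the
# formatted prompt as a plain string that generate() accepts directly.
messages = [{"role": "user", "content": "hello"}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```

mlx-lm also ships a command-line generator, handy for a quick smoke test without writing any Python:

```bash
python -m mlx_lm.generate --model mitkox/gemma-2b-dpo-uncensored-4bit --prompt "hello"
```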
Model size: 834M params (Safetensors; tensor types FP16 and U32)
