---
base_model: unsloth/llama-3.2-1b-instruct-bnb-4bit
language:
- es
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
- q4_k_m
- 4bit
- sharegpt
- pretraining
- finetuning
- Q5_K_M
- Q8_0
- uss
- Perú
- Lambayeque
- Chiclayo
datasets:
- ussipan/sipangpt
pipeline_tag: text2text-generation
---
# SipánGPT 0.3 Llama 3.2 1B GGUF
- Pre-trained model for answering questions about the Universidad Señor de Sipán in Lambayeque, Peru.
## Testing the model
- Trained on 50,000 conversations; the model can still generate hallucinations.
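
Below is a minimal inference sketch for trying the GGUF quantizations with `llama-cpp-python` and `huggingface_hub`. The repository id and GGUF filename are placeholders (not confirmed by this card); replace them with the actual values from the model's Files tab.

```python
# Minimal sketch: download a quantized GGUF file and query it with llama.cpp bindings.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

MODEL_REPO = "ussipan/SipanGPT-0.3-Llama-3.2-1B-GGUF"  # hypothetical repo id
GGUF_FILE = "sipangpt-0.3-llama-3.2-1b.Q4_K_M.gguf"    # hypothetical filename

# Download the quantized weights and load them.
model_path = hf_hub_download(repo_id=MODEL_REPO, filename=GGUF_FILE)
llm = Llama(model_path=model_path, n_ctx=2048)

# Ask a question about the university (the model answers in Spanish).
response = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "¿Qué carreras ofrece la Universidad Señor de Sipán?"}
    ],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```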
## Uploaded model
- Developed by: ussipan
- License: apache-2.0
- Finetuned from model: unsloth/llama-3.2-1b-instruct-bnb-4bit
This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.
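
As a rough illustration of that training setup, the sketch below loads the 4-bit base model with Unsloth and fine-tunes it with TRL's `SFTTrainer` on the `ussipan/sipangpt` dataset. The LoRA hyperparameters, training arguments, and the `text` column name are assumptions, not the exact configuration used for SipánGPT.

```python
# Fine-tuning sketch with Unsloth + TRL (hyperparameters are illustrative only).
from unsloth import FastLanguageModel
from datasets import load_dataset
from trl import SFTTrainer
from transformers import TrainingArguments

# Load the 4-bit base model named in this card.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3.2-1b-instruct-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters (rank/alpha values are assumptions).
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# The fine-tuning dataset; the "text" field name is an assumption.
dataset = load_dataset("ussipan/sipangpt", split="train")

# SFTTrainer call uses the Unsloth-notebook-style signature (TRL <= 0.11).
trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```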
## SipánGPT 0.3 Llama 3.2 1B GGUF
Made with ❤️ by Jhan Gómez P.