Model Card for Carpincho-13b

This is Carpincho-13B, an instruction-tuned LLM based on LLama-13B (https://huggingface.co/decapoda-research/llama-13b-hf). It is trained to answer in colloquial Argentine Spanish.

Model Details

The model is provided in two formats: a low-rank adaptation (LoRA) that can be applied directly to LLama-13B-HF, and a complete merged model quantized to 4 bits that requires only 8 GB of VRAM. Both can be used directly in software such as text-generation-webui (https://github.com/oobabooga/text-generation-webui). Additionally, a test chatbot based on this model is running on the Twitter account http://twitter.com/arggpt.
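As a back-of-the-envelope sketch of why the 4-bit quantized variant fits in 8 GB of VRAM (assuming roughly 13 billion parameters; actual overhead varies by loader):

```python
# Rough memory estimate for the 4-bit quantized weights (assumption: 13e9 params).
params = 13e9
bytes_per_param = 4 / 8            # 4 bits = 0.5 bytes per parameter
weights_gb = params * bytes_per_param / 1e9
print(f"{weights_gb:.1f} GB")      # ~6.5 GB for weights alone; activations and
                                   # runtime overhead bring the total near 8 GB
```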


Uses

This is a general-purpose LLM chatbot intended for direct interaction with humans.

Bias, Risks, and Limitations

This bot is uncensored and may produce shocking answers. It also reflects biases present in the training material.

Recommendations

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.

How to Get Started with the Model

The easiest way is to download the text-generation-webui application (https://github.com/oobabooga/text-generation-webui) and place the model inside its 'models' directory. Then launch the web interface and run the model as a regular LLama-13B model. The LoRA model requires no additional installation, but 4-bit mode (which uses only about 25% of the GPU VRAM) needs the additional installation steps detailed at https://github.com/oobabooga/text-generation-webui/blob/main/docs/GPTQ-models-(4-bit-mode).md.
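For use outside the web UI, the LoRA variant could be loaded with the `transformers` and `peft` libraries. The sketch below is illustrative, not an official recipe: the adapter path is a placeholder, and the Alpaca-style prompt template is an assumption (the card does not document the exact template used for training).

```python
# Sketch: apply the Carpincho LoRA adapter on top of LLama-13B-HF with peft.
# NOTE: the adapter path is a placeholder, and loading the base model
# requires a large GPU and tens of GB of disk space.

def build_prompt(instruction: str) -> str:
    """Format an instruction with an Alpaca-style template (an assumption;
    the card does not specify the template used for training)."""
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n### Response:\n"
    )

def main() -> None:
    import torch
    from transformers import LlamaForCausalLM, LlamaTokenizer
    from peft import PeftModel

    base = "decapoda-research/llama-13b-hf"   # base weights named in this card
    adapter = "path/to/carpincho-13b-lora"    # placeholder: local adapter path

    tokenizer = LlamaTokenizer.from_pretrained(base)
    model = LlamaForCausalLM.from_pretrained(
        base, torch_dtype=torch.float16, device_map="auto"
    )
    model = PeftModel.from_pretrained(model, adapter)

    prompt = build_prompt("Contame un chiste corto.")
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=128)
    print(tokenizer.decode(out[0], skip_special_tokens=True))

if __name__ == "__main__":
    main()
```

The heavy loading is kept behind the `main()` guard so the prompt helper can be reused or tested without downloading the model.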

Model Card Contact

Contact the creator at @ortegaalfredo on Twitter/GitHub.
