# Pofi

Pofi is a fine-tuned version of `decapoda-research/llama-7b-hf`, designed to act as an assistant capable of performing various tasks, such as:

- Utilities
- Setting alarms
- Connecting to the web
- Sending files
- Sending messages
- Saving strings of characters
- Opening applications
- Creating files
- Manipulating the system

The training data was gathered manually through different prompts in ChatGPT, together with examples I wrote myself. The model was fine-tuned on over 7,000 examples of "User"-to-"AI" commands. Training was carried out in Google Colab using the "🦙🎛️ LLaMA-LoRA Tuner" notebook and took approximately 5 hours over 10 epochs.
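
The exact dataset schema is not published here; as an assumption, a single training pair in the common Alpaca-style instruction format that the LLaMA-LoRA Tuner notebook accepts might look like this (field names and the command-style response are illustrative only):

```python
# Hypothetical example of one training pair, assuming an Alpaca-style
# instruction/input/output format; the real dataset fields may differ.
import json

example = {
    "instruction": "Set an alarm for 7:30 in the morning.",
    "input": "",
    "output": "ALARM(time='07:30')",  # illustrative command-style response
}

# Examples are typically collected into a single JSON file for fine-tuning.
with open("pofi_dataset.json", "w", encoding="utf-8") as f:
    json.dump([example], f, ensure_ascii=False, indent=2)
```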

Once the LoRA adapter was obtained, the `export_hf_checkpoint.py` script from the `tloen/alpaca-lora` repository was used to merge the LoRA weights into the base model. This made it possible to quantize the model to `ggml-q4`, enabling its use on a computer without a graphics card (like mine). Quantization was performed using the `ggerganov/llama.cpp` repository and its `convert.py` script.
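
For reference, the merge step essentially folds the LoRA weights back into the base checkpoint. Below is a rough sketch of that idea using the standard `peft` API (which `export_hf_checkpoint.py` builds on); the adapter path and output directory are hypothetical, not the exact ones used for Pofi:

```python
# Rough sketch of merging a LoRA adapter into the base model with peft;
# paths are hypothetical placeholders.
import torch
from peft import PeftModel
from transformers import LlamaForCausalLM

base = LlamaForCausalLM.from_pretrained(
    "decapoda-research/llama-7b-hf", torch_dtype=torch.float16
)
merged = PeftModel.from_pretrained(base, "path/to/pofi-lora")  # LoRA adapter
merged = merged.merge_and_unload()  # fold the LoRA weights into the base layers

# The merged HF checkpoint can then be converted and quantized with llama.cpp.
merged.save_pretrained("pofi-merged-hf")
```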

To use this model, you can employ `oobabooga/text-generation-webui`, a user-friendly interface, or the interface I am developing for this project, `OscarMes/Pofi-Assistant`.
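
Another option is to load the quantized ggml file directly with the `llama-cpp-python` bindings. This is only a minimal sketch: the model file name and the "User:/AI:" prompt format are assumptions, not part of the published card.

```python
# Minimal sketch of running the quantized model with llama-cpp-python;
# the model file name and the "User:/AI:" prompt format are assumptions.
from llama_cpp import Llama

llm = Llama(model_path="pofi-ggml-q4.bin", n_ctx=2048)

prompt = "User: Set an alarm for 7:30 in the morning.\nAI:"
result = llm(prompt, max_tokens=64, stop=["User:"])
print(result["choices"][0]["text"].strip())
```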

This project was created for the purpose of studying and learning about language models. All rights are reserved according to the license included with `decapoda-research/llama-7b-hf`; please refer to the LICENSE file in that repository.


