Opani Coder 1B (16-bit)
A fine-tuned Llama 3.2 1B model for coding assistance in Twi (an Akan language), helping Twi speakers learn programming in their native language.
Model Details
- Base Model: meta-llama/Llama-3.2-1B-Instruct
- Precision: 16-bit (float16), merged weights
- Language: Twi with code examples
- Fine-tuning: Full model fine-tuning
- Developed by: michsethowusu
Installation
pip install torch transformers
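The usage example below loads the model with device_map="auto", which relies on the accelerate library, so install it as well if you don't already have it:

pip install accelerate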
Usage
You can try the model in this Hugging Face Space: https://huggingface.co/spaces/michsethowusu/Opani-Coder-DEMO
from transformers import AutoTokenizer, AutoModelForCausalLM, TextStreamer
import torch

# Load the merged 16-bit model and tokenizer
model_id = "michsethowusu/opani-coder_1b-merged-16bit"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Prepare the chat message ("Explain what a for loop is" in Twi)
messages = [
    {"role": "user", "content": "Kyerɛkyerɛ nea for loop yɛ"}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)

# Generate and stream the response token by token
streamer = TextStreamer(tokenizer, skip_prompt=True)
_ = model.generate(
    **tokenizer(text, return_tensors="pt").to(model.device),
    max_new_tokens=512,
    do_sample=True,  # required for temperature/top_p to take effect
    temperature=0.7,
    top_p=0.9,
    streamer=streamer,
)
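If you want the response as a string rather than streamed to stdout, you can drop the streamer and decode the output yourself. This is a minimal sketch reusing the model, tokenizer, and text variables from above:

# Generate without streaming
inputs = tokenizer(text, return_tensors="pt").to(model.device)
output_ids = model.generate(
    **inputs,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)
# Slice off the prompt tokens so only the reply is decoded
response = tokenizer.decode(
    output_ids[0][inputs["input_ids"].shape[1]:],
    skip_special_tokens=True,
)
print(response)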
Use Cases
- Explaining programming concepts in Twi
- Code generation with Twi commentary
- Debugging assistance in Twi
- Translating coding tutorials to Twi
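All of these use cases follow the same chat pattern as the usage snippet above. As a convenience you could wrap that pattern in a small helper; ask_opani below is a hypothetical name, and the function simply reuses the model and tokenizer loaded earlier:

def ask_opani(prompt: str, max_new_tokens: int = 512) -> str:
    """Send a single-turn Twi prompt to the model and return its reply."""
    messages = [{"role": "user", "content": prompt}]
    text = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer(text, return_tensors="pt").to(model.device)
    output_ids = model.generate(
        **inputs,
        max_new_tokens=max_new_tokens,
        do_sample=True,
        temperature=0.7,
        top_p=0.9,
    )
    # Decode only the newly generated tokens
    return tokenizer.decode(
        output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )

# Example, reusing the prompt from the usage section:
# print(ask_opani("Kyerɛkyerɛ nea for loop yɛ"))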
Limitations
- Optimized for Twi-language programming interactions; quality in other languages is not guaranteed
- May mix English technical terms into otherwise Twi responses
- Generated code should be verified before use in production
Citation
@misc{opani-coder_1b_2024,
  author = {michsethowusu},
  title = {Opani Coder 1B: Fine-tuned Llama 3.2 1B for Twi Coding Assistance},
  year = {2024},
  publisher = {HuggingFace},
  url = {https://huggingface.co/michsethowusu/opani-coder_1b-merged-16bit}
}
Contact
- HuggingFace: @michsethowusu