
Model Description

mistral_7b_yo_instruct is an instruction-tuned text generation model for Yorùbá.

Intended uses & limitations

How to use


import requests

# Replace the URL and token below with your own endpoint URL and
# Hugging Face access token.
API_URL = "https://i8nykns7vw253vx3.us-east-1.aws.endpoints.huggingface.cloud"
headers = {
    "Authorization": "Bearer hf_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
    "Content-Type": "application/json"
}

def query(payload):
    response = requests.post(API_URL, headers=headers, json=payload)
    response.raise_for_status()  # surface HTTP errors instead of silently returning error JSON
    return response.json()

# Prompt content: "Pẹlẹ o. Bawo ni o se wa?" ("Hello. How are you?")
output = query({
    "inputs": "Pẹlẹ o. Bawo ni o se wa?",
})

# Model response: "O dabo. O jẹ ọjọ ti o dara." ("I am safe. It was a good day.")
print(output)
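The endpoint payload can also carry a `parameters` object alongside `inputs` to control generation, following the Hugging Face text-generation payload convention. Which fields the endpoint actually honors depends on how it was deployed, so treat this as a sketch rather than a guaranteed interface:

```python
# Sketch: build a text-generation payload with optional sampling controls.
# The "parameters" field follows the Hugging Face Inference API convention;
# the exact supported fields depend on the endpoint's deployment configuration.
def build_payload(prompt, max_new_tokens=128, temperature=0.7, do_sample=True):
    return {
        "inputs": prompt,
        "parameters": {
            "max_new_tokens": max_new_tokens,
            "temperature": temperature,
            "do_sample": do_sample,
        },
    }

payload = build_payload("Pẹlẹ o. Bawo ni o se wa?")
```

The resulting dictionary can be passed directly as the `payload` argument to the `query` function above.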

Eval results

Coming soon

Limitations and bias

This model is limited by its training dataset of entity-annotated news articles drawn from a specific span of time, and it may not generalize well to use cases in other domains.

Training data

This model is fine-tuned on 60k+ instruction-following demonstrations built from an aggregation of datasets (AfriQA, XLSum, MENYO-20k) and translations of Alpaca-GPT4.
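As an illustration of what one such instruction-following demonstration might look like, the sketch below formats a record into an Alpaca-style prompt. The template and field names are assumptions for illustration; the card does not specify the actual formatting used to build the demonstrations:

```python
# Hypothetical Alpaca-style prompt template; the actual template used to
# build the 60k+ training demonstrations is not documented in this card.
TEMPLATE = (
    "### Instruction:\n{instruction}\n\n"
    "### Input:\n{input}\n\n"
    "### Response:\n{output}"
)

def format_example(record):
    # record: dict with "instruction", "input", and "output" keys (assumed schema)
    return TEMPLATE.format(**record)

example = format_example({
    "instruction": "Túmọ̀ gbólóhùn yìí sí èdè Gẹ̀ẹ́sì.",  # "Translate this sentence to English."
    "input": "Pẹlẹ o.",
    "output": "Hello.",
})
```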

Use and safety

We emphasize that mistral_7b_yo_instruct is intended only for research purposes and is not ready to be deployed for general use, primarily because we have not designed adequate safety measures.

Model details

Format: Safetensors
Model size: 7.24B params
Tensor type: FP16
