
An experiment in continued pre-training on Arabic text followed by instruction finetuning, using the quantized mistralai/Mistral-7B-v0.3 model from unsloth. This was a first attempt at pre-training, so expect issues and low-quality outputs. The repo contains the merged, quantized model and a GGUF-format version.

See the Spaces demo for an example.
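
For the merged safetensors weights, a minimal loading sketch with transformers (an assumption, not documented in the card: since the merge is quantized, it likely also needs bitsandbytes and accelerate installed):

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nazimali/mistral-7b-v0.3-instruct-arabic"

# Assumes the merged, quantized checkpoint loads through the standard
# transformers API; bitsandbytes is likely required (assumption).
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")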

Example usage

llama-cpp-python

from llama_cpp import Llama

# Alpaca-style instruction template (Arabic). English translation:
# "Below are instructions describing a task. Write a response that
# appropriately completes the request." Headers: "### Instructions:" / "### Answer:"
inference_prompt = """فيما يلي تعليمات تصف مهمة. اكتب استجابة تكمل الطلب بشكل مناسب.

### تعليمات:
{}

### إجابة:
"""

# Download the GGUF from the Hub and load it with the llama.cpp bindings.
llm = Llama.from_pretrained(
    repo_id="nazimali/mistral-7b-v0.3-instruct-arabic",
    filename="Q4_K_M.gguf",
)

# "السلام عليكم كيف حالك؟" = "Peace be upon you, how are you?"
response = llm.create_chat_completion(
    messages=[
        {
            "role": "user",
            "content": inference_prompt.format("السلام عليكم كيف حالك؟"),
        }
    ]
)

# The generated reply follows the OpenAI-style response schema.
print(response["choices"][0]["message"]["content"])

llama.cpp

# Interactive chat, pulling the GGUF directly from the Hub:
./llama-cli \
  --hf-repo "nazimali/mistral-7b-v0.3-instruct-arabic" \
  --hf-file Q4_K_M.gguf \
  -p "السلام عليكم كيف حالك؟" \
  --conversation

Training

Pre-training data:

  • wikimedia/wikipedia
  • Subset: 20231101.ar
  • Used 6,096 rows, 0.05% of the total data (see the loading sketch after this list)
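
A minimal sketch of loading this subset with the datasets library. Taking the first 6,096 rows of the split is an assumption; the card does not say how the sample was drawn.

from datasets import load_dataset

# Arabic Wikipedia, 2023-11-01 dump. The head-of-split slice is an
# assumption about how the 6,096-row sample was selected.
pretrain_dataset = load_dataset(
    "wikimedia/wikipedia",
    "20231101.ar",
    split="train[:6096]",
)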

Finetuning data:

  • FreedomIntelligence/alpaca-gpt4-arabic
  • Used 49,969 rows, 100% of the dataset (formatted with the template below; see the sketch after it)

Finetuning instruction format:

# Same Alpaca-style Arabic template as the inference prompt above.
finetune_prompt = """فيما يلي تعليمات تصف مهمة. اكتب استجابة تكمل الطلب بشكل مناسب.

### تعليمات:
{}

### إجابة:
"""