---
license: mit
datasets:
  - Slim205/total_data_baraka_ift
language:
  - ar
base_model:
  - google/gemma-2-2b-it
---

The goal of this project is to adapt large language models to Arabic. Because Arabic instruction fine-tuning data is scarce, the focus is on building a high-quality instruction fine-tuning (IFT) dataset, fine-tuning models on it, and evaluating their performance across various benchmarks.
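
Since the model is built on google/gemma-2-2b-it, it can be loaded with the standard `transformers` API. The snippet below is a minimal sketch: the repository id `Slim205/Barka-2b-it` is assumed from this model card's location, and the dtype/device settings are illustrative, so adjust them to your hardware.

```python
# Minimal usage sketch (assumed repo id: Slim205/Barka-2b-it).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Slim205/Barka-2b-it"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # use float32 on CPU-only setups
    device_map="auto",
)

# Gemma-2 instruction-tuned models expect a chat template; here with an Arabic prompt.
messages = [{"role": "user", "content": "ما هي عاصمة تونس؟"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```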