This Hugging Face page hosts a low-rank adapter (LoRA) for fine-tuning the bloom-7b model on Arabic instructions. Additional information about the training datasets will be made available soon. The adapter was trained with the codebase from https://github.com/tloen/alpaca-lora, with some modifications to adapt it to bloom-7b.
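
Since alpaca-lora produces adapters in the standard PEFT format, inference should look roughly like the sketch below. The adapter id shown is a hypothetical placeholder (replace it with this repo's id), and `bigscience/bloom-7b1` is assumed to be the intended base checkpoint:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model_id = "bigscience/bloom-7b1"  # assumed base checkpoint for "bloom-7b"
adapter_id = "<this-repo-id>"           # placeholder: the id of this adapter repo

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)
# Attach the low-rank adapter weights on top of the frozen base model
model = PeftModel.from_pretrained(model, adapter_id)
model.eval()

# Generate a completion for an Arabic instruction
prompt = "اكتب جملة ترحيبية قصيرة."  # "Write a short welcome sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```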