---
tags:
- generated_from_trainer
- llama
- text-generation-inference
datasets:
- mc4
metrics:
- accuracy
model-index:
- name: hausa_finetuned_model
  results:
  - task:
      name: Causal Language Modeling
      type: text-generation
    dataset:
      name: mc4 ha
      type: mc4
      config: ha
      split: validation
      args: ha
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.6728119950396453
language:
- ha
pipeline_tag: text-generation
---

# Paper and Citation

Paper: [Few-Shot Cross-Lingual Transfer for Prompting Large Language Models in Low-Resource Languages](https://arxiv.org/abs/2403.06018)

```
@misc{toukmaji2024fewshot,
      title={Few-Shot Cross-Lingual Transfer for Prompting Large Language Models in Low-Resource Languages},
      author={Christopher Toukmaji},
      year={2024},
      eprint={2403.06018},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

# hausa_finetuned_model

This model is a fine-tuned version of [HF_llama](https://huggingface.co/HF_llama) on the mc4 ha dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4357
- Accuracy: 0.6728

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the `TrainingArguments` sketch after the framework versions below):
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 4
- total_eval_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3.0

### Training results

### Framework versions

- Transformers 4.29.2
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
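
### Training configuration sketch

The hyperparameters listed above map onto `transformers.TrainingArguments` roughly as sketched below. This is a hedged reconstruction, not the original training script: `output_dir` is a placeholder, and the Adam betas/epsilon shown are the `transformers` defaults, which happen to match the values stated in the card. The total train batch size of 4 comes from the per-device batch size of 1 replicated across the 4 GPUs noted under `num_devices`.

```python
# Hedged sketch: the card's hyperparameters expressed as TrainingArguments.
# Launching on 4 GPUs (e.g. via torchrun) yields the total batch size of 4.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="hausa_finetuned_model",  # placeholder output path
    learning_rate=2e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    num_train_epochs=3.0,
    adam_beta1=0.9,     # transformers default, matches the card
    adam_beta2=0.999,   # transformers default, matches the card
    adam_epsilon=1e-8,  # transformers default, matches the card
)
```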
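
## How to use

Since the card declares `pipeline_tag: text-generation`, the checkpoint should load with the standard `transformers` causal-LM API. The sketch below is illustrative only: `model_id` is a placeholder to be replaced with the actual Hub repo id or a local checkpoint path, and the generation settings are example values rather than recommendations.

```python
# Hedged usage sketch; "hausa_finetuned_model" is a placeholder model id.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "hausa_finetuned_model"  # replace with the actual repo id or path
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# "Ina kwana" is a common Hausa greeting, used here as a sample prompt.
inputs = tokenizer("Ina kwana", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```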