---
license: mit
datasets:
- AmanMussa/kazakh-instruction-v1
language:
- kk
metrics:
- code_eval
pipeline_tag: text-generation
---
# Model Card for LLaMA 2 Kazakh
A LLaMA 2 model fine-tuned for the Kazakh language.
## Model Details
This model is a parameter-efficient fine-tune of Meta's LLaMA 2 on Kazakh-language instruction data.
### Model Description
- **Developed by:** Mussa Aman
- **Model type:** Question answering (instruction-following)
- **Language(s) (NLP):** Kazakh
- **License:** MIT
- **Finetuned from model:** Meta LLAMA 2
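For illustration, a minimal inference sketch is shown below. It assumes the fine-tune is published as a PEFT/LoRA adapter on top of `meta-llama/Llama-2-7b-hf`; both the base model variant and the adapter repository id used here are assumptions, so substitute this repository's actual id.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-hf"    # assumed base model; the 7B variant is a guess
adapter_id = "AmanMussa/llama2-kazakh"  # placeholder id; use this repository's actual id

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
# Attach the Kazakh instruction-tuned adapter to the base model.
model = PeftModel.from_pretrained(model, adapter_id)

# Kazakh prompt: "Which city is the capital of Kazakhstan?"
prompt = "Қазақстанның астанасы қай қала?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```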
### Out-of-Scope Use
The model still makes occasional mistakes during inference, so its outputs should be verified before use in downstream applications.
## Bias, Risks, and Limitations
The parameter count could be larger, and the dataset needs further optimization.
### Training Data
The model was fine-tuned on the [AmanMussa/kazakh-instruction-v1](https://huggingface.co/datasets/AmanMussa/kazakh-instruction-v1) self-instruct dataset for Kazakh; a loading sketch follows the figure below.
![image/png](https://cdn-uploads.huggingface.co/production/uploads/64f75f7bd04a890f5347d436/dICYqSD1SZOhhbNBJ_XWz.png)
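A short sketch for loading the dataset named in this card's metadata, using the `datasets` library; the `train` split name is an assumption, and the printed schema depends on the dataset itself.

```python
from datasets import load_dataset

# Load the Kazakh self-instruct dataset listed in the card metadata.
ds = load_dataset("AmanMussa/kazakh-instruction-v1", split="train")
print(ds)     # row count and column names
print(ds[0])  # first instruction/response pair
```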
## Evaluation
Final training run summary:

| Metric | Value |
| --- | --- |
| train/epoch | 1.0 |
| train/global_step | 3263 |
| train/learning_rate | 0.0 |
| train/loss | 0.975 |
| train/total_flos | 5.1749473473500774e+17 |
| train/train_loss | 0.38281 |
| train/train_runtime | 13086.8735 s |
| train/train_samples_per_second | 3.989 |
| train/train_steps_per_second | 0.249 |
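For reference, below is a hedged reconstruction of `TrainingArguments` consistent with the run summary (one epoch, learning rate decayed to 0.0); the batch size, accumulation steps, peak learning rate, and scheduler are assumptions not stated in this card.

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="llama2-kazakh-peft",  # placeholder output path
    num_train_epochs=1,               # train/epoch = 1.0
    per_device_train_batch_size=4,    # assumption
    gradient_accumulation_steps=4,    # assumption
    learning_rate=2e-4,               # assumption; a common LoRA fine-tuning value
    lr_scheduler_type="linear",       # consistent with the final learning rate of 0.0
    fp16=True,                        # matches the A100 hardware noted below
    logging_steps=50,
)
```

As a sanity check on the summary itself: 3263 steps at 0.249 steps/s is roughly 13,100 seconds, matching the reported train_runtime.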
## Environment
- **Hardware Type:** NVIDIA A100 40GB
- **Hours used:** 10
- **Cloud Provider:** Google Colab
## Citation
**BibTeX:**
```bibtex
@misc{aman_2023,
  author       = {Aman Mussa},
  title        = {Self-instruct data pairs for Kazakh language},
  year         = {2023},
  howpublished = {\url{https://huggingface.co/datasets/AmanMussa/instructions_kaz_version_1}}
}
```
**APA:**
Aman, M. (2023). *Self-instruct data pairs for Kazakh language*. Retrieved from https://huggingface.co/datasets/AmanMussa/instructions_kaz_version_1
## Model Card Contact
Please contact via email: a_mussa@kbtu.kz