---
license: llama3
language:
- tr
model-index:
- name: Kocdigital-LLM-8b-v0.1
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge TR
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc
      value: 44.03
      name: accuracy
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag TR
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc
      value: 46.73
      name: accuracy
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU TR
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 49.11
      name: accuracy
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA TR
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: acc
      value: 48.21
      name: accuracy
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande TR
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc
      value: 54.98
      name: accuracy
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k TR
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 51.78
      name: accuracy
---
<img src="https://huggingface.co/KOCDIGITAL/Kocdigital-LLM-8b-v0.1/resolve/main/icon.jpeg"
alt="KOCDIGITAL LLM" width="420"/>
# Kocdigital-LLM-8b-v0.1
This model is a fine-tuned version of the Llama 3 8B large language model (LLM) for Turkish. It was trained on high-quality Turkish instruction sets created from various open-source and internal resources, carefully annotated so that the model carries out Turkish instructions accurately and in an organized manner. Fine-tuning was performed with the QLoRA method.
## Model Details
- **Base Model**: Llama 3 8B-based LLM
- **Training Dataset**: High-quality Turkish instruction sets
- **Training Method**: SFT with QLoRA
### QLoRA Fine-Tuning Configuration
- `lora_alpha`: 128
- `lora_dropout`: 0
- `r`: 64
- `target_modules`: "q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"
- `bias`: "none"
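
For reference, here is a minimal sketch of how these hyperparameters could be expressed with the PEFT library's `LoraConfig`. The mapping (and the `task_type`) is an assumption; the original training script is not published.

```python
from peft import LoraConfig

# Hypothetical reconstruction of the QLoRA adapter configuration listed above;
# task_type is assumed to be causal language modeling.
lora_config = LoraConfig(
    r=64,
    lora_alpha=128,
    lora_dropout=0.0,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
)
```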
## Usage Examples
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "KOCDIGITAL/Kocdigital-LLM-8b-v0.1",
    model_max_length=4096)

model = AutoModelForCausalLM.from_pretrained(
    "KOCDIGITAL/Kocdigital-LLM-8b-v0.1",
    load_in_4bit=True,
    device_map="auto",
)

# System prompt (Turkish): "You are a general-purpose Turkish-speaking assistant.
# Always carry out the user's instructions accurately, concisely and with good grammar."
system = 'Sen Türkçe konuşan genel amaçlı bir asistansın. Her zaman kullanıcının verdiği talimatları doğru, kısa ve güzel bir gramer ile yerine getir.'

# Instruction template: "###Talimat" = "Instruction", "###Yanıt" = "Response"
template = "{}\n\n###Talimat\n{}\n###Yanıt\n"
# User request: "Can you list Turkey's 3 largest provinces?"
content = template.format(system, 'Türkiyenin 3 büyük ilini listeler misin.')

conv = [{'role': 'user', 'content': content}]

# Render the prompt as a string first, then tokenize it separately below.
inputs = tokenizer.apply_chat_template(conv,
                                       tokenize=False,
                                       add_generation_prompt=True)
print(inputs)

inputs = tokenizer([inputs],
                   return_tensors="pt",
                   add_special_tokens=False).to("cuda")

outputs = model.generate(**inputs,
                         max_new_tokens=512,
                         use_cache=True,
                         do_sample=True,
                         top_k=50,
                         top_p=0.60,
                         temperature=0.3,
                         repetition_penalty=1.1)
out_text = tokenizer.batch_decode(outputs)[0]
print(out_text)
```
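
On recent versions of `transformers`, the `load_in_4bit=True` shortcut is deprecated in favour of an explicit `BitsAndBytesConfig`. A rough equivalent is sketched below; the NF4 / bfloat16 settings are common QLoRA-style defaults and an assumption, not values confirmed by the model authors.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Explicit 4-bit quantization configuration (assumed settings).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "KOCDIGITAL/Kocdigital-LLM-8b-v0.1",
    quantization_config=bnb_config,
    device_map="auto",
)
```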
# Open LLM Turkish Leaderboard v0.2 Evaluation Results
| Metric | Value |
|---------------------------------|------:|
| Avg. | 49.11 |
| AI2 Reasoning Challenge_tr-v0.2 | 44.03 |
| HellaSwag_tr-v0.2 | 46.73 |
| MMLU_tr-v0.2 | 49.11 |
| TruthfulQA_tr-v0.2 | 48.51 |
| Winogrande_tr-v0.2              | 54.98 |
| GSM8k_tr-v0.2                   | 51.78 |