---
license: apache-2.0
datasets:
- SubMaroon/DTF_Comments_Responses_Counts
language:
- ru
base_model:
- unsloth/Qwen2.5-7B
pipeline_tag: text-generation
---
A continued-pretrained version of the [unsloth/Qwen2.5-7B](https://huggingface.co/unsloth/Qwen2.5-7B) model, trained with Unsloth's low-rank adaptation (LoRA) on a dataset of [DTF](https://dtf.ru) posts. The LoRA adapter is already merged into the model weights.
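
Because the adapter is merged, the model loads with stock `transformers` and no PEFT step. A minimal usage sketch (substitute this repository's model ID for the placeholder; `bfloat16` is an assumption matching the training precision below):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "<this-repo-id>"  # placeholder: use this repository's model ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: training used bf16
    device_map="auto",
)

# A base (non-instruct) model: it continues Russian text in DTF style.
prompt = "Лучшая игра этого года —"  # "The best game of this year is —"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
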
For pretraining, posts from [SubMaroon/DTF_Comments_Responses_Counts](https://huggingface.co/datasets/SubMaroon/DTF_Comments_Responses_Counts) were selected, deduplicated with a simple `df.unique`, and filtered to lengths between 1,000 and 128,000 tokens.
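
A sketch of that preprocessing, assuming a polars DataFrame with the posts in a `text` column and a `train` split (column and split names are assumptions; check the dataset schema):

```python
import polars as pl
from datasets import load_dataset
from transformers import AutoTokenizer

ds = load_dataset("SubMaroon/DTF_Comments_Responses_Counts", split="train")
df = pl.from_pandas(ds.to_pandas())

# Deduplicate: the simple `df.unique` mentioned above
df = df.unique(subset=["text"])  # assumption: posts live in a "text" column

# Keep posts with 1000 < token count < 128000
tok = AutoTokenizer.from_pretrained("unsloth/Qwen2.5-7B")
n_tokens = [len(tok(t).input_ids) for t in df["text"].to_list()]
df = df.filter(pl.Series([1000 < n < 128_000 for n in n_tokens]))
```
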
LoRA hyperparameters:
```python
r=32
target_modules=[
    "q_proj",
    "k_proj",
    "v_proj",
    "o_proj",
    "gate_proj",
    "up_proj",
    "down_proj",
]
lora_alpha=16
lora_dropout=0
bias="none"
use_gradient_checkpointing='unsloth'
use_rslora=True
random_state=42
```
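
As a sketch, these settings correspond to an Unsloth `FastLanguageModel.get_peft_model` call along these lines (`max_seq_length` and the 4-bit flag are assumptions, not stated in this card):

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen2.5-7B",
    max_seq_length=128_000,  # assumption: matches the 128k-token length filter
    load_in_4bit=False,      # assumption: not stated in this card
)
model = FastLanguageModel.get_peft_model(
    model,
    r=32,
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
    lora_alpha=16,
    lora_dropout=0,
    bias="none",
    use_gradient_checkpointing="unsloth",
    use_rslora=True,
    random_state=42,
)
```
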
Training hyperparameters:
```python
num_train_epochs=2
train_batch_size=8
gradient_accumulation_steps=16
gradient_checkpointing=False
optim="adamw_8bit"
weight_decay=4e-2
bf16=True
learning_rate=5e-5
lr_scheduler_type="cosine"
packing=True
seed=42
```
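
Plugged into TRL's `SFTTrainer` (older-style signature; newer TRL versions move these arguments into `SFTConfig`), the run roughly looks like the sketch below. The dataset variable, text column, and `max_seq_length` are assumptions, and `train_batch_size` above maps to `per_device_train_batch_size`:

```python
from transformers import TrainingArguments
from trl import SFTTrainer

trainer = SFTTrainer(
    model=model,                 # the LoRA-wrapped model from above
    tokenizer=tokenizer,
    train_dataset=dataset,       # the deduplicated, length-filtered posts
    dataset_text_field="text",   # assumption: column name
    max_seq_length=128_000,      # assumption: matches the length filter
    packing=True,
    args=TrainingArguments(
        output_dir="outputs",
        num_train_epochs=2,
        per_device_train_batch_size=8,
        gradient_accumulation_steps=16,
        gradient_checkpointing=False,
        optim="adamw_8bit",
        weight_decay=4e-2,
        bf16=True,
        learning_rate=5e-5,
        lr_scheduler_type="cosine",
        seed=42,
    ),
)
trainer.train()
```
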
Training time:
- NVIDIA Tesla A100 80 GB: ~8.5 hours
- NVIDIA RTX 3090 Ti: ~33.5 hours
Training logs: [Weights & Biases run](https://wandb.ai/a_okshus/DTF_comments/runs/fr5hfq6g?nw=nwusera_okshus)

[GitHub: TODO]()