---
license: cc-by-nc-4.0
---
## DAVinCI-42dot_LLM-PLM-1.3B-v1.5.3

This model is a fine-tuned version of [42dot/42dot_LLM-PLM-1.3B](https://huggingface.co/42dot/42dot_LLM-PLM-1.3B) on a custom dataset.
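
As a quick usage sketch (hedged: the repo id below is assumed from this card's title; substitute the actual Hub path if it differs):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repo id, taken from this card's title -- adjust if the model
# is published under a different Hub path.
repo_id = "DAVinCI-42dot_LLM-PLM-1.3B-v1.5.3"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

# Generate a short continuation for a sample prompt.
inputs = tokenizer("Hello, my name is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```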

### Model description
More information needed

### Intended uses & limitations
More information needed

### Training and evaluation data
More information needed

### Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
* learning_rate: 2e-05
* train_batch_size: 24
* eval_batch_size: 8
* seed: 42
* gradient_accumulation_steps: 4
* total_train_batch_size: 96
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr_scheduler_type: linear
* num_epochs: 3.0
* mixed_precision_training: Native AMP
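
For reference, a minimal sketch of how these settings map onto `transformers.TrainingArguments` (the `output_dir` is a placeholder; the Adam betas and epsilon listed above are the library defaults):

```python
from transformers import TrainingArguments

# Sketch only: reproduces the hyperparameters listed above. output_dir is a
# placeholder, and betas=(0.9, 0.999) / epsilon=1e-08 are the Adam defaults.
training_args = TrainingArguments(
    output_dir="./davinci-42dot-plm-1.3b-v1.5.3",  # placeholder
    learning_rate=2e-05,
    per_device_train_batch_size=24,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=4,  # 24 x 4 = 96 total train batch size
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
    fp16=True,  # Native AMP mixed precision
)
```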

### Training results
More information needed

### Framework versions
* Transformers 4.36.2
* PyTorch 2.1.2+cu121
* Datasets 2.0.0
* Tokenizers 0.15.0