---
license: cc-by-nc-4.0
---

## DAVinCI-42dot_LLM-PLM-1.3B-v1.12
|
|
|
This model is a fine-tuned version of [42dot/42dot_LLM-PLM-1.3B](https://huggingface.co/42dot/42dot_LLM-PLM-1.3B) on a custom dataset.
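Since the card does not include a usage snippet, here is a minimal loading sketch with `transformers`. The repository path is a placeholder (this card does not state the hosting namespace), and it assumes the fine-tune loads as a causal LM like its base model:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id: replace <namespace> with the actual owner of this model.
model_id = "<namespace>/DAVinCI-42dot_LLM-PLM-1.3B-v1.12"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16)
model.eval()

inputs = tokenizer("The quick brown fox", return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```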
|
|
|
### Model description

More information needed

### Intended uses & limitations

More information needed

### Training and evaluation data

More information needed
|
|
|
### Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
|
* learning_rate: 2e-05
* train_batch_size: 24
* eval_batch_size: 8
* seed: 42
* gradient_accumulation_steps: 4
* total_train_batch_size: 96
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr_scheduler_type: linear
* num_epochs: 1.0
* mixed_precision_training: Native AMP
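
For reference, the settings above map onto `transformers.TrainingArguments` roughly as follows. This is a hedged sketch, not the card's actual training script; the output path is an assumption, and the optimizer corresponds to the Trainer's default AdamW with the listed betas and epsilon:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="DAVinCI-42dot_LLM-PLM-1.3B-v1.12",  # assumed output path
    learning_rate=2e-5,
    per_device_train_batch_size=24,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=4,  # effective train batch size: 24 * 4 = 96
    num_train_epochs=1.0,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    fp16=True,  # "Native AMP" mixed-precision training
)
```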
|
|
|
### Training results
|
|
|
### Framework versions

* Transformers 4.36.2
* Pytorch 2.1.2+cu121
* Datasets 2.0.0
* Tokenizers 0.15.0
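
To check that a local environment matches the pins above, a small version check (the expected versions come from this card):

```python
import datasets
import tokenizers
import torch
import transformers

# Expected versions, as listed in this card.
print("Transformers:", transformers.__version__)  # 4.36.2
print("PyTorch:", torch.__version__)              # 2.1.2+cu121
print("Datasets:", datasets.__version__)          # 2.0.0
print("Tokenizers:", tokenizers.__version__)      # 0.15.0
```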
|