---
library_name: peft
license: cc-by-nc-4.0
base_model: facebook/musicgen-small
tags:
- text-to-audio
- taufiqsyed/salami_cleaned_sampled
- generated_from_trainer
model-index:
- name: salami_truncsplit_legit1_model
results: []
---
# salami_truncsplit_legit1_model
This model is a fine-tuned version of [facebook/musicgen-small](https://huggingface.co/facebook/musicgen-small) on the taufiqsyed/salami_cleaned_sampled dataset (default configuration).
It achieves the following results on the evaluation set:
- Loss: 6.5173
- CLAP: 0.1031
## Model description
This repository contains a PEFT adapter for [facebook/musicgen-small](https://huggingface.co/facebook/musicgen-small), fine-tuned for text-to-audio generation on the taufiqsyed/salami_cleaned_sampled dataset.
## Intended uses & limitations
The model inherits the CC-BY-NC-4.0 license of the base checkpoint and is therefore intended for non-commercial use only. No further usage guidance was provided; a hedged inference sketch is shown below.
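As a sketch (not an official snippet from the author), the adapter can be loaded on top of the base checkpoint with PEFT. The repository id `taufiqsyed/salami_truncsplit_legit1_model` is an assumption inferred from the model name; substitute the actual adapter path.
```python
import scipy.io.wavfile
import torch
from peft import PeftModel
from transformers import AutoProcessor, MusicgenForConditionalGeneration

# Load the frozen base model, then attach this adapter on top of it.
processor = AutoProcessor.from_pretrained("facebook/musicgen-small")
base = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small")
# Assumed repo id, inferred from the model name above; replace with the real adapter path.
model = PeftModel.from_pretrained(base, "taufiqsyed/salami_truncsplit_legit1_model")

inputs = processor(text=["a slow blues track with guitar and piano"], padding=True, return_tensors="pt")
with torch.no_grad():
    audio = model.generate(**inputs, max_new_tokens=256)

# MusicGen returns (batch, channels, samples); musicgen-small decodes at 32 kHz.
rate = base.config.audio_encoder.sampling_rate
scipy.io.wavfile.write("sample.wav", rate=rate, data=audio[0, 0].cpu().numpy())
```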
## Training and evaluation data
Training and evaluation used the taufiqsyed/salami_cleaned_sampled dataset; no further details about the data were provided.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 456
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: adamw_torch with betas=(0.9, 0.99) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: constant
- num_epochs: 10.0
- mixed_precision_training: Native AMP
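For orientation, these hyperparameters map onto a 🤗 Trainer configuration roughly as sketched below. This is a reconstruction under stated assumptions: the LoRA settings (rank, alpha, dropout, target modules) are not recorded in this card and the values shown are placeholders.
```python
from peft import LoraConfig, get_peft_model
from transformers import MusicgenForConditionalGeneration, TrainingArguments

base = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small")

# Illustrative LoRA settings only; the values actually used are not recorded in this card.
lora = LoraConfig(r=16, lora_alpha=16, lora_dropout=0.05, target_modules=["q_proj", "v_proj"])
model = get_peft_model(base, lora)

# These arguments mirror the hyperparameters listed above.
args = TrainingArguments(
    output_dir="salami_truncsplit_legit1_model",
    learning_rate=2e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=16,  # effective train batch size of 16
    num_train_epochs=10.0,
    lr_scheduler_type="constant",
    seed=456,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.99,
    adam_epsilon=1e-8,
    fp16=True,  # "Native AMP" mixed precision
)
```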
### Training results
| Training Loss | Epoch | Step | Validation Loss | CLAP |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 8.305 | 0.4813 | 25 | 3.7163 | 0.2508 |
| 6.9842 | 0.9627 | 50 | 4.6081 | 0.0551 |
| 6.9013 | 1.4428 | 75 | 5.6498 | 0.0582 |
| 6.0195 | 1.9242 | 100 | 5.7631 | 0.1450 |
| 6.447 | 2.4043 | 125 | 5.7947 | 0.1187 |
| 6.215 | 2.8857 | 150 | 5.8396 | 0.1164 |
| 6.4075 | 3.3658 | 175 | 5.8361 | 0.1174 |
| 5.797 | 3.8472 | 200 | 5.8150 | 0.1093 |
| 6.347 | 4.3273 | 225 | 5.7935 | 0.1029 |
| 5.3935 | 4.8087 | 250 | 5.7605 | 0.1040 |
| 6.0328 | 5.2888 | 275 | 5.7012 | 0.1006 |
| 6.1476 | 5.7702 | 300 | 5.6703 | 0.0880 |
| 5.9049 | 6.2503 | 325 | 5.8333 | 0.1260 |
| 5.8547 | 6.7316 | 350 | 5.9171 | 0.1415 |
| 5.5151 | 7.2118 | 375 | 6.0307 | 0.1488 |
| 6.0651 | 7.6931 | 400 | 6.0683 | 0.1520 |
| 6.0339 | 8.1733 | 425 | 6.1653 | 0.1496 |
| 5.5607 | 8.6546 | 450 | 6.3588 | 0.1344 |
| 6.0312 | 9.1348 | 475 | 6.4503 | 0.1358 |
| 5.9084 | 9.6161 | 500 | 6.5205 | 0.1081 |
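The CLAP column reports text-audio similarity scored by a CLAP model. Below is a minimal sketch of how such a score is typically computed, assuming the `laion/clap-htsat-unfused` checkpoint (the checkpoint actually used for evaluation is not recorded here) and a 1-D NumPy waveform resampled to 48 kHz.
```python
import torch
from transformers import ClapModel, ClapProcessor

# Assumed checkpoint; the card does not record which CLAP model scored the evaluation.
clap = ClapModel.from_pretrained("laion/clap-htsat-unfused")
clap_processor = ClapProcessor.from_pretrained("laion/clap-htsat-unfused")

def clap_score(text: str, audio, sampling_rate: int = 48_000) -> float:
    """Cosine similarity between CLAP text and audio embeddings (CLAP expects 48 kHz audio)."""
    inputs = clap_processor(text=[text], audios=[audio], sampling_rate=sampling_rate,
                            return_tensors="pt", padding=True)
    with torch.no_grad():
        text_emb = clap.get_text_features(input_ids=inputs["input_ids"],
                                          attention_mask=inputs["attention_mask"])
        audio_emb = clap.get_audio_features(input_features=inputs["input_features"])
    return torch.nn.functional.cosine_similarity(text_emb, audio_emb).item()
```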
### Framework versions
- PEFT 0.13.2
- Transformers 4.47.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3