# open-hermes-erotic-story-finetune

This model is a fine-tuned version of teknium/OpenHermes-2.5-Mistral-7B on an unknown dataset. It achieves the following results on the evaluation set:

- Loss: 0.1465
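
The checkpoint can be loaded with PEFT on top of the base model for inference. A minimal sketch, not part of the original card: it assumes the checkpoint is a PEFT adapter (as the framework versions below suggest) and uses the base model's ChatML prompt format.

```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Loads teknium/OpenHermes-2.5-Mistral-7B and applies this adapter on top.
model = AutoPeftModelForCausalLM.from_pretrained(
    "vaitech/open-hermes-erotic-story-finetune",
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("teknium/OpenHermes-2.5-Mistral-7B")

# OpenHermes-2.5 expects ChatML-formatted prompts.
prompt = "<|im_start|>user\nWrite a short story.<|im_end|>\n<|im_start|>assistant\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```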

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 2.5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- training_steps: 1500
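
These settings map onto transformers' `TrainingArguments` roughly as follows. This is a reconstruction sketch, not the original training script (the dataset and PEFT/LoRA configuration are not published in this card):

```python
from transformers import TrainingArguments

# Sketch: the listed hyperparameters expressed as TrainingArguments.
# output_dir is a hypothetical path.
args = TrainingArguments(
    output_dir="open-hermes-erotic-story-finetune",
    learning_rate=2.5e-5,
    per_device_train_batch_size=2,   # train_batch_size: 2
    per_device_eval_batch_size=8,    # eval_batch_size: 8
    gradient_accumulation_steps=4,   # effective train batch size: 2 * 4 = 8
    seed=42,
    lr_scheduler_type="linear",      # linear decay after warmup
    warmup_steps=5,
    max_steps=1500,                  # training_steps: 1500
    adam_beta1=0.9,                  # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```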

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.1158        | 0.1056 | 50   | 0.2470          |
| 0.2078        | 0.2112 | 100  | 0.1934          |
| 0.1866        | 0.3168 | 150  | 0.1867          |
| 0.1804        | 0.4224 | 200  | 0.1719          |
| 0.158         | 0.5280 | 250  | 0.1515          |
| 0.15          | 0.6336 | 300  | 0.1507          |
| 0.1489        | 0.7392 | 350  | 0.1499          |
| 0.1464        | 0.8448 | 400  | 0.1494          |
| 0.1485        | 0.9504 | 450  | 0.1487          |
| 0.1485        | 1.0560 | 500  | 0.1485          |
| 0.1459        | 1.1616 | 550  | 0.1484          |
| 0.1444        | 1.2672 | 600  | 0.1481          |
| 0.1469        | 1.3728 | 650  | 0.1477          |
| 0.148         | 1.4784 | 700  | 0.1477          |
| 0.1438        | 1.5839 | 750  | 0.1475          |
| 0.1454        | 1.6895 | 800  | 0.1472          |
| 0.149         | 1.7951 | 850  | 0.1471          |
| 0.1395        | 1.9007 | 900  | 0.1469          |
| 0.1416        | 2.0063 | 950  | 0.1468          |
| 0.1443        | 2.1119 | 1000 | 0.1468          |
| 0.1445        | 2.2175 | 1050 | 0.1467          |
| 0.143         | 2.3231 | 1100 | 0.1468          |
| 0.1416        | 2.4287 | 1150 | 0.1468          |
| 0.1418        | 2.5343 | 1200 | 0.1466          |
| 0.1384        | 2.6399 | 1250 | 0.1466          |
| 0.1382        | 2.7455 | 1300 | 0.1466          |
| 0.1423        | 2.8511 | 1350 | 0.1466          |
| 0.1414        | 2.9567 | 1400 | 0.1465          |
| 0.1398        | 3.0623 | 1450 | 0.1465          |
| 0.136         | 3.1679 | 1500 | 0.1465          |

### Framework versions

- PEFT 0.11.2.dev0
- Transformers 4.42.4
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
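
Because the checkpoint is a PEFT adapter rather than a full model, it can be folded into the base weights for standalone deployment. A sketch assuming a LoRA-style adapter, with a hypothetical output path:

```python
from peft import AutoPeftModelForCausalLM

model = AutoPeftModelForCausalLM.from_pretrained("vaitech/open-hermes-erotic-story-finetune")
merged = model.merge_and_unload()  # folds the LoRA deltas into the base weights
merged.save_pretrained("merged-open-hermes-erotic-story")  # hypothetical output path
```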