This model is a fine-tuned version of teknium/OpenHermes-2.5-Mistral-7B on a private dataset. It achieves the following results on the evaluation set:
- Loss: 1.4546
## Model description
Eileithyia-7B is an unaligned, roleplay-oriented model created by merging teknium/OpenHermes-2.5-Mistral-7B with a bespoke LoRA trained directly on OpenHermes.
Eileithyia, as is the current trend, is named after a Greek goddess; in this case, the goddess of childbirth and pregnancy.
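As a rough sketch of this construction (not the author's actual merge script), the adapter can be applied to the base model and folded in with peft's `merge_and_unload`; the adapter repo id below comes from the model tree at the end of this card, and the output directory is hypothetical:

```python
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load the OpenHermes base weights in half precision.
base = AutoModelForCausalLM.from_pretrained(
    "teknium/OpenHermes-2.5-Mistral-7B", torch_dtype=torch.float16
)

# Apply the bespoke LoRA adapter, then fold its weights into the base model.
merged = PeftModel.from_pretrained(base, "athirdpath/Eileithyia-7B-LORA").merge_and_unload()

# Save standalone merged weights (hypothetical output directory).
merged.save_pretrained("Eileithyia-7B")
```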
## Training and evaluation data
The private ~400k-token dataset used to train the LoRA was Alpaca-formatted and focused on the following primary categories (a sketch of the layout follows this list):
- Medical texts (on pregnancy, reproductive organs, and impregnation). These are formatted so that the model, in character as a doctor, answers a patient's question in short-to-medium form.
- Excerpts from short stories and novellas (erotic, romantic, and platonic) centered on both realistic and fantastical pregnancy. These are sliced into ~2,048-token chunks, and these long-form responses are all tied to the command “Enter narrator mode.” in the instruction field.
- A selection from PIPPA, gathered with a wide keyword search for related terms and then human-curated (...the things I’ve seen…). These are converted to Alpaca format with “Enter RP mode.” in all the instruction fields.
- ~42k tokens of GPT-4-generated data on pregnancy from various characters’ perspectives, focusing on different responses and stages. Also includes a synopsis for each week in various styles.
- ~18k tokens of GPT-4-generated data on non-maternal roleplay from various characters’ perspectives, focusing on different situations and emotions. Includes many multi-turn conversations.
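Here is a minimal sketch of the Alpaca layout described above; the helper and field contents are illustrative placeholders, not samples from the private dataset:

```python
def to_alpaca_prompt(instruction: str, inp: str, output: str) -> str:
    """Render one record in the standard Alpaca prompt template."""
    prompt = f"### Instruction:\n{instruction}\n\n"
    if inp:
        prompt += f"### Input:\n{inp}\n\n"
    return prompt + f"### Response:\n{output}"

# Long-form story chunks are keyed to a fixed command in the instruction field.
story_record = to_alpaca_prompt("Enter narrator mode.", "", "<~2048-token excerpt>")

# Converted PIPPA logs use a different fixed command.
rp_record = to_alpaca_prompt("Enter RP mode.", "<conversation context>", "<character reply>")
```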
## Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 5
- total_train_batch_size: 40
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 5
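For reference, here is how these settings would map onto transformers' `TrainingArguments` (a sketch only; the actual training script is not published, and the output path is hypothetical):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="eileithyia-7b-lora",   # hypothetical path
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=5,     # 8 * 5 = total train batch size of 40
    lr_scheduler_type="cosine",        # Adam betas/epsilon above are the library defaults
    warmup_steps=10,
    num_train_epochs=5,
)
```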
## Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.5629 | 0.75 | 25 | 1.6511 |
| 1.5253 | 1.5 | 50 | 1.5730 |
| 1.3363 | 2.25 | 75 | 1.5014 |
| 1.4017 | 2.99 | 100 | 1.4690 |
| 1.2677 | 3.74 | 125 | 1.4593 |
| 1.351 | 4.49 | 150 | 1.4546 |
## Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
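A quick, illustrative way to confirm a matching environment:

```python
import transformers, torch, datasets, tokenizers

# Expected: 4.34.1, 2.1.0+cu118, 2.14.6, 0.14.1
print(transformers.__version__, torch.__version__,
      datasets.__version__, tokenizers.__version__)
```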
## Model tree for athirdpath/Eileithyia-7B-LORA
- Base model: mistralai/Mistral-7B-v0.1
- Finetuned from: teknium/OpenHermes-2.5-Mistral-7B