---
base_model: mistralai/Mistral-7B-v0.1
---
# Project-Frankenstein

## Model Overview

- **Model Name:** Project-Frankenstein
- **Model Type:** Text Generation
- **Base Model:** Mistral-7B-v0.1
- **Fine-tuned by:** Jack Mander

**Description:**
Project-Frankenstein is a text generation model fine-tuned to write fan fiction in the style of Mary Shelley's "Frankenstein." It was trained on the complete text of the novel and aims to produce coherent, stylistically consistent fan fiction.
## Model Details

**Model Architecture:**
- Base Model: Mistral-7B-v0.1
- Tokenizer: `AutoTokenizer` from Hugging Face Transformers
- Training Framework: Hugging Face Transformers, PEFT, and Accelerate
**Training Data:**
- The model was fine-tuned on the full text of "Frankenstein" by Mary Shelley.
- The text was split into training and test sets using an 80/20 split.
- The resulting Pandas DataFrames were converted to Hugging Face Datasets (see the sketch below).
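A minimal sketch of that preparation step, assuming the novel is available as a plain-text file; the chunk length, file name, and column name are illustrative, since the card does not specify how the text was segmented:

```python
import pandas as pd
from datasets import Dataset
from sklearn.model_selection import train_test_split

# Read the novel and cut it into fixed-size passages.
# The 1024-character chunk length is an assumption, not from the card.
with open("frankenstein.txt", encoding="utf-8") as f:
    text = f.read()
chunks = [text[i:i + 1024] for i in range(0, len(text), 1024)]

# 80/20 train/test split on a DataFrame, then convert to Hugging Face Datasets.
df = pd.DataFrame({"text": chunks})
train_df, test_df = train_test_split(df, test_size=0.2, random_state=42)
train_ds = Dataset.from_pandas(train_df)
test_ds = Dataset.from_pandas(test_df)
```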
**Hyperparameters** (expressed as `TrainingArguments` in the sketch below):
- Learning Rate: 2e-5
- Epochs: 2
- Optimizer: Paged AdamW 8-bit
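These settings map directly onto `transformers.TrainingArguments`; a minimal sketch, where the output path, batch size, and fp16 flag are assumptions not stated on this card:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="project-frankenstein",  # assumed path
    learning_rate=2e-5,                 # from this card
    num_train_epochs=2,                 # from this card
    optim="paged_adamw_8bit",           # Paged AdamW 8-bit (requires bitsandbytes)
    per_device_train_batch_size=1,      # assumed; not stated on the card
    fp16=True,                          # assumed; common on a Tesla T4
)
```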
## Training Procedure

The model was trained on a Tesla T4 GPU in Google Colab. Training involved the following steps:

1. **Data Preparation:** The text of "Frankenstein" was preprocessed and split into training and test datasets.
2. **Model Training:** The model was trained for 2 epochs with a learning rate of 2e-5 using the Paged AdamW 8-bit optimizer (a matching PEFT setup is sketched below).
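The card lists PEFT among the training libraries, and a 7B model on a 16 GB T4 is typically fine-tuned with a parameter-efficient adapter such as LoRA over a quantized base. A sketch of that setup; the 4-bit quantization, LoRA rank, and target modules are assumptions, since the card only names the libraries:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# Load the base model in 4-bit to fit a 16 GB T4 (assumed configuration).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",
    quantization_config=bnb_config,
    device_map="auto",
)

# Attach a LoRA adapter; rank and target modules are illustrative choices.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```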
## Example Generations

**Base Model Generation:**

> I'm afraid I've created a 2000-level problem with a 100-level solution.
> I'm a 2000-level problem.
> I'm a 2000-level problem.
> I'm a 2000-level problem.
> I'm a 2000-level problem.
> I'm a 2000-level problem.
> I'm a 2

**Fine-tuned Model Generation:**

> I'm afraid I've created a monster, one which will be the means of my own destruction. What shall I do? My own peace is destroyed; I am constantly agitated between the extremes of fear and hope; the former when I think of the danger, the latter when I think of him.
>
> “I have been occupied in making a man, and he is perfect. I have given him the utmost extent of my own faculties, and more. He
## Limitations and Biases

- This model is trained specifically on the text of "Frankenstein" and may not generalize well to other texts or styles.
- Potential biases present in the original text of "Frankenstein" will be reflected in the generated outputs.

## Acknowledgments

This project was completed as a fine-tuning practice project. Special thanks to the Hugging Face community for their tools and resources.
## Usage

To use this model, log in to Hugging Face, make sure you have access to the gated base repo, and load the model:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from huggingface_hub import login

# Log in to Hugging Face
login("your-hugging-face-token")

# Ensure you have access to the gated base repo:
# visit https://huggingface.co/mistralai/Mistral-7B-v0.1 to request access if you haven't already.

tokenizer = AutoTokenizer.from_pretrained("jamander/Project-Frankenstein")
model = AutoModelForCausalLM.from_pretrained("jamander/Project-Frankenstein")

input_text = "I am afraid I have created a "
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)  # the default length is very short
generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(generated_text)
```
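Greedy decoding (the default) can loop on repetitive phrases, as the base-model sample above shows; enabling sampling is a common mitigation. The parameter values here are illustrative, not from the card:

```python
outputs = model.generate(
    **inputs,
    max_new_tokens=100,
    do_sample=True,    # sample instead of greedy decoding
    temperature=0.8,   # illustrative value
    top_p=0.95,        # illustrative value
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```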