
MaryGPT Model Card

MaryGPT is a text generation model, fine-tuned from GPT-J 6B.

This model is fine-tuned exclusively on text from Mary Shelley's 1818 novel "Frankenstein; or, The Modern Prometheus".

The model serves as a base model for the activities of AI artist Yuma Kishi, including art creation and exhibition curation.

[Image: Portrait of Mary Shelley (1840, by Richard Rothwell, in the collection of the National Portrait Gallery)]

Training Data Sources

All data was obtained ethically and in compliance with the source sites' terms and conditions. No copyrighted texts were used to train this model without permission.

  • GPT-J 6B was trained on the Pile, a large-scale curated dataset created by EleutherAI.
  • Frankenstein; or, The Modern Prometheus, Mary Shelley, 1818 (Public domain)

Training procedure

The base model, GPT-J 6B, was trained for 402 billion tokens over 383,500 steps on a TPU v3-256 pod. It was trained as an autoregressive language model, using cross-entropy loss to maximize the likelihood of predicting the next token correctly. MaryGPT was then fine-tuned from those weights on the Frankenstein text with the same objective.
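In other words, training minimizes the standard autoregressive cross-entropy loss; the formula below is a conventional restatement of that objective, not notation taken from the original card:

$$\mathcal{L}(\theta) = -\sum_{t=1}^{T} \log p_\theta\left(x_t \mid x_{<t}\right)$$

where $x_1, \dots, x_T$ is the token sequence and $p_\theta$ is the model's next-token distribution.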

How to use

This model can be loaded using the AutoModelForCausalLM class:

from transformers import AutoTokenizer, AutoModelForCausalLM

# Download the tokenizer and the fine-tuned weights from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("obake2ai/MaryGPT")
model = AutoModelForCausalLM.from_pretrained("obake2ai/MaryGPT")
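
Once loaded, text can be generated with the standard generate API. The sketch below is illustrative: the prompt is a line from the novel, and the sampling parameters (temperature, top_p, max_new_tokens) are example values rather than settings recommended by this card.

prompt = "I beheld the wretch -- the miserable monster whom I had created."
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

# Sample a continuation in the style of the fine-tuning text
output_ids = model.generate(
    input_ids,
    do_sample=True,
    temperature=0.9,
    top_p=0.95,
    max_new_tokens=100,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))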

Developed by

MaryGPT was developed by AI artist Yuma Kishi (obake2ai). The base model, GPT-J, was developed by EleutherAI.
