Model Card for DistilGutenMystery

A fine-tuned version of DistilGPT2 trained on a corpus of 20 mystery/detective novels collected from Project Gutenberg.

Model Details

Model Description

A fine-tuned version of DistilGPT2 trained on a corpus of 20 mystery/detective novels collected from Project Gutenberg.

  • Developed by: More information needed
  • Shared by [Optional]: More information needed
  • Model type: Language model
  • Language(s) (NLP): en
  • License: apache-2.0
  • Parent Model: More information needed
  • Resources for more information: More information needed

Uses

Direct Use

Aiding story writing and brainstorming for mystery/detective novels. The model may also be used to generate deliberately nonsensical or absurd text.

Downstream Use

More information needed
Out-of-Scope Use

This model does not distinguish fact from fiction, so it is not intended for use cases that require the generated text to be true.

Bias, Risks, and Limitations

Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. The training corpus may also contain outdated language that reflects historical biases; if the model is ever deployed, further bias-related fine-tuning and testing is strongly recommended.

Recommendations

More information needed
Training Details

Training Data

The corpus was created from 20 mystery and detective books collected from Project Gutenberg (gutenberg.org, retrieved 2/20/23) for the purpose of aiding in story writing for mystery/detective novels. In total the corpus contains 1,048,519 tokens drawn from the following 20 books:

  • The Extraordinary Adventures of Arsène Lupin, Gentleman-Burglar by Maurice Leblanc: 55,726 tokens
  • The Crimson Cryptogram: A Detective Story by Fergus Hume: 60,179 tokens
  • The House of a Thousand Candles by Meredith Nicholson: 83,133 tokens
  • Tracked by Wireless by William Le Queux: 76,236 tokens
  • Behind the Green Door by Mildred A. Wirt: 43,705 tokens
  • The House on the Cliff by Franklin W. Dixon: 41,721 tokens
  • Tales of Secret Egypt by Sax Rohmer: 76,892 tokens
  • The Haunted Bookshop by Christopher Morley: 63,269 tokens
  • Whispering Walls by Mildred A. Wirt: 42,388 tokens
  • The Clock Struck One by Fergus Hume: 61,614 tokens
  • McAllister and His Double by Arthur Cheney Train: 65,583 tokens
  • The Three Eyes by Maurice Leblanc: 62,887 tokens
  • Ghost Beyond the Gate by Mildred A. Wirt: 41,172 tokens
  • The Motor Rangers Through the Sierras by John Henry Goldfrap: 49,285 tokens
  • Peggy Finds the Theatre by Virginia Hughes: 41,575 tokens
  • The Puzzle in the Pond by Margaret Sutton: 36,485 tokens
  • Jack the Runaway; or, On the Road with a Circus by Frank V. Webster: 42,814 tokens
  • The Camp Fire Girls Solve a Mystery; Or, The Christmas Adventure at Carver House: 50,286 tokens
  • Danger at the Drawbridge by Mildred A. Wirt: 42,075 tokens
  • Voice from the Cave by Mildred A. Wirt: 39,064 tokens

Training Procedure

Preprocessing

Each story was downloaded from Project Gutenberg, and the Gutenberg-specific boilerplate text and chapter headings were removed from each document. The stories were then combined into a single text file, loaded as a dataset, and sampled by paragraph. Hyperparameters for training: num_train_epochs=30, per_device_train_batch_size=32; all other trainer values were left at their defaults. Additionally, the tokenizer was set with padding_side='left', the model's pad_token_id was set to tokenizer.eos_token_id, and num_labels=0.
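The card does not include the preprocessing script itself; a minimal sketch of the paragraph-level sampling described above might look like the following (split_paragraphs is a hypothetical helper, not taken from the original training code):

```python
import re

def split_paragraphs(text):
    """Split a combined corpus file into paragraph-level samples.
    Paragraphs are assumed to be separated by one or more blank lines."""
    paragraphs = [p.strip() for p in re.split(r"\n\s*\n", text)]
    return [p for p in paragraphs if p]

corpus = (
    "It was a dark night.\n"
    "The detective waited.\n"
    "\n"
    "A door creaked open."
)
samples = split_paragraphs(corpus)
# Two samples: the two-line opening paragraph and the single-line one.
```

Each resulting paragraph would then be tokenized and fed to the trainer as one example.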

Evaluation

Testing Data, Factors & Metrics

Testing Data

More information needed

Factors

More information needed

Metrics

The fine-tuned model was evaluated using the sacrebleu metric.

Results

  • score: 0.2458566059729917
  • counts: [56008, 5821, 552, 181]
  • totals: [1014368, 985984, 957908, 930569]
  • precisions: [5.52146755418152, 0.5903746916785668, 0.057625575733786544, 0.019450465252979627]
  • bp: 1.0
  • sys_len: 1014368
  • ref_len: 212162
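The sacrebleu output above packs several statistics into one report. As a pure-Python sketch of how those fields relate for a single reference (clipped n-gram match counts, candidate totals, per-order precisions in percent, and the brevity penalty; in practice sacrebleu itself should be used):

```python
import math
from collections import Counter

def bleu_stats(sys_tokens, ref_tokens, max_n=4):
    """Simplified single-reference BLEU statistics: for each n-gram
    order, count candidate n-grams clipped by their reference counts,
    then combine precisions with a brevity penalty."""
    counts, totals, precisions = [], [], []
    for n in range(1, max_n + 1):
        sys_ngrams = Counter(tuple(sys_tokens[i:i + n])
                             for i in range(len(sys_tokens) - n + 1))
        ref_ngrams = Counter(tuple(ref_tokens[i:i + n])
                             for i in range(len(ref_tokens) - n + 1))
        # Clip each candidate n-gram count by its count in the reference.
        match = sum(min(c, ref_ngrams[g]) for g, c in sys_ngrams.items())
        total = max(len(sys_tokens) - n + 1, 0)
        counts.append(match)
        totals.append(total)
        precisions.append(100.0 * match / total if total else 0.0)
    # bp is 1.0 when the output is at least as long as the reference,
    # as in the results above (sys_len 1,014,368 vs ref_len 212,162).
    if len(sys_tokens) >= len(ref_tokens):
        bp = 1.0
    else:
        bp = math.exp(1 - len(ref_tokens) / len(sys_tokens))
    if all(precisions):
        score = bp * math.exp(sum(math.log(p) for p in precisions) / max_n)
    else:
        score = 0.0
    return counts, totals, precisions, bp, score
```

The geometric mean of the four precisions, scaled by the brevity penalty, yields the overall score.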

Model Card Authors [optional]

Hugging Face, Jack Quigley

Model Card Contact

More information needed

How to Get Started with the Model

Use the code below to get started with the model.

from transformers import pipeline, AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained('jquigl/DistilGutenMystery')
model = AutoModelForCausalLM.from_pretrained('jquigl/DistilGutenMystery')

generator = pipeline('text-generation', model=model, tokenizer=tokenizer)
gen = generator("It was a strange ending to a", min_length=100, max_length=150, num_return_sequences=3)