
magic-the-gathering-flan-t5-xl

A text generation model finetuned on Magic: The Gathering cards up to the Phyrexia: All Will Be One set (February 10, 2023). The base flan-t5-xl was finetuned for 1 "epoch" of card data using the LoRA technique in the PEFT library; this repo contains about 9M parameters for the LoRA adapters.

This model has strong support for out-of-domain inputs, such as generating a card with the name "San Francisco" (all generations will be Lands) or generating cards based on real-life people and objects (the flavor text will often take the subtext into account; be ethical!).

Usage

It is very strongly recommended to use this Colab Notebook to generate from the model, as it handles the preprocessing and postprocessing needed both to get the inputs/outputs into the correct format and to extract the generated results, in addition to edge cases such as various Unicode dashes.

Example input:

Write a Magic: The Gathering card with these characteristics: manaCost : [2][R]|text : Menace
Players can't gain life.
Whenever another creature enters the battlefield, Rampaging Ferocidon deals 1 damage to that creature's controller.

Example output:

name : Rampaging Ferocidon|manaCost : [2][R]|type : Creature — Dinosaur|rarity : rare|text : Menace
 Players can't gain life.
 Whenever another creature enters the battlefield, Rampaging Ferocidon deals 1 damage to that creature's controller.|flavorText : All raptors are aggressive, but ferocidons seem to enjoy their prey's pain.|ptl : 3/3

The card encoding is somewhat unusual due to the constraints of the T5 SentencePiece tokenizer, which apparently does not contain the characters {}~ commonly used to encode Magic cards for training AI.
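
For reference, here is a minimal sketch of how a card might be encoded into this pipe-delimited format and how a generation might be parsed back into fields. The helper names and field dictionary are illustrative assumptions; the Colab Notebook remains the reference implementation.

# Illustrative sketch only: build the prompt format and parse a generation back into fields.
def encode_card(fields: dict) -> str:
    # Mana symbols use square brackets ([2][R]) because the T5 SentencePiece
    # vocabulary lacks the usual curly braces.
    parts = [f"{key} : {value}" for key, value in fields.items()]
    return "Write a Magic: The Gathering card with these characteristics: " + "|".join(parts)

def parse_card(generation: str) -> dict:
    # Fields are pipe-delimited; each field is "key : value" (values may span multiple lines).
    fields = {}
    for part in generation.split("|"):
        key, _, value = part.partition(" : ")
        fields[key.strip()] = value
    return fields

prompt = encode_card({"manaCost": "[2][R]", "text": "Menace\nPlayers can't gain life."})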

The model also assumes a \n token has been added to the default T5 tokenizer, as that is necessary for proper detokenization and the model was trained that way. You can do this by adding tokenizer.add_special_tokens({"additional_special_tokens": [AddedToken("\n")]})
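
As a rough sketch (not the exact code from the notebook), loading the base model with the adapter and registering the token might look like the following; the adapter path is a placeholder and the generation settings are only examples.

# Sketch: load flan-t5-xl, add the \n special token, attach the LoRA adapter, and generate.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, AddedToken
from peft import PeftModel

base_model_id = "google/flan-t5-xl"
adapter_id = "path-or-repo-id-of-this-adapter"  # placeholder: point this at the LoRA adapter

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
# Register \n so newlines inside card text survive tokenization and detokenization.
tokenizer.add_special_tokens({"additional_special_tokens": [AddedToken("\n")]})

model = AutoModelForSeq2SeqLM.from_pretrained(base_model_id)
model = PeftModel.from_pretrained(model, adapter_id)

prompt = "Write a Magic: The Gathering card with these characteristics: manaCost : [2][R]"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=1.0)
# Decode without skipping special tokens so the \n tokens are kept; stripping <pad>/</s>
# (and handling Unicode dashes) is part of the postprocessing the notebook performs.
print(tokenizer.decode(outputs[0], skip_special_tokens=False))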

Training Techniques

The model was also trained with two techniques to increase the diversity and coherence of the output: hierarchical sampling and subset sampling.

Hierarchical Sampling

There are many more Creature cards in Magic than cards of any other type, so any model trained on the raw corpus will be biased toward Creatures. To work around this, a train-time data processor selects cards of the types Creature, Instant, Enchantment, Sorcery, Artifact, Land, and Planeswalker with equal probability.
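
A minimal sketch of what such a train-time sampler could look like; the cards_by_type structure is an assumed illustration, not the actual training code.

import random

# Sketch: pick a card type uniformly first, then a card of that type,
# so Creatures no longer dominate the training stream.
CARD_TYPES = ["Creature", "Instant", "Enchantment", "Sorcery",
              "Artifact", "Land", "Planeswalker"]

def sample_card(cards_by_type: dict) -> dict:
    # cards_by_type maps each type to a list of card dicts (assumed structure).
    card_type = random.choice(CARD_TYPES)           # every type equally likely...
    return random.choice(cards_by_type[card_type])  # ...regardless of how many cards it has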

Two caveats here: a) we can no longer guarantee the model will ever see all of the input data, and b) it will likely see redundant cards from underrepresented groups and thus risks memorization. The latter can be fixed with...

Subset Sampling

Also at train time, the model receives a random subset of the fields in the input card (including zero information, to generate a card from scratch). This approach also models how users would use the model in practice. It makes it extremely unlikely for the model to see the same input twice, even when it is trying to predict the same card multiple times, and it encourages the model to learn from near-infinite combinations of semantic inputs, which works well with T5's encoder-decoder structure.

This technique also creates intentional data leakage between input and output, which is desirable for this use case to ensure that the selected inputs are present in the output.
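
A minimal sketch of subset sampling under the same assumptions as above (field names are illustrative):

import random

# Sketch: keep a random subset of the card's fields as the model input,
# while the target is always the full card, so the kept fields intentionally
# "leak" from the input into the output.
FIELDS = ["name", "manaCost", "type", "rarity", "text", "flavorText", "ptl"]

def sample_input_fields(card: dict) -> dict:
    k = random.randint(0, len(FIELDS))   # zero fields means "generate a card from scratch"
    kept = random.sample(FIELDS, k)
    return {field: card[field] for field in kept if field in card}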

Notes

  • Like other similar models, this model is still very bad at generating cards that follow the Magic "color pie"
  • Card generations remain coherent even at high temperatures (e.g. 1.5), which is new.

License

MIT
