---
license: llama3
---
8B FP16 weights.
The prompt format is the same as Llama 3's: https://llama.meta.com/docs/model-cards-and-prompt-formats/meta-llama-3/
Standard context length of 8192 tokens.
This model was trained on 100MB of long-form stories for 8 epochs. It was designed for two tasks: continuing a story given a summary of the previous events, and writing 3k-8k-length stories from a single prompt.
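As a hedged illustration, the sketch below drives the continuation task through the standard Llama 3 chat template in `transformers`. The repo id is assumed from this page's path, and the summary text and prompt wording are purely illustrative, not a prescribed format.

```python
# Minimal sketch: continue a story from a summary, using the Llama 3 chat template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Blackroot/Llama-3-LongStory"  # assumed from this repo's path
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Task 1: continue a story given a summary of the previous events.
summary = "Mara fled the burning archive carrying the last copy of the star charts..."
messages = [
    {
        "role": "user",
        "content": f"Summary of the story so far: {summary}\n\nContinue the story.",
    }
]

# apply_chat_template inserts the Llama 3 special tokens around the turn.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=4096, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

For the single-prompt task, replace the message content with a plain story request and leave the rest unchanged.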
The dataset was constructed from cleaned long-form dialogue that was restructured, summarized with Llama-70B, and temporally stacked so that each sample begins with the summary of the preceding dialogue. Almost all samples were between 7,500 and 8,192 tokens long.
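A minimal sketch of that temporal stacking, assuming a simple chunk-then-summarize pipeline; the function names and the prefix wording are hypothetical, not the author's actual tooling:

```python
# Hypothetical sketch of temporal stacking: each sample after the first is
# prefixed with a summary of the chunk that came before it, so the model
# learns to continue a story from a summary of prior events.
def build_samples(chunks, summarize):
    """chunks: ordered story segments; summarize: e.g. a Llama-70B call."""
    samples = [chunks[0]]  # the first segment has no prior dialogue to summarize
    for prev, curr in zip(chunks, chunks[1:]):
        samples.append(f"Summary of previous events: {summarize(prev)}\n\n{curr}")
    return samples
```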