Modifying Large Language Model Post-Training for Diverse Creative Writing
Abstract
As creative writing tasks do not have singular correct answers, large language models (LLMs) trained to perform these tasks should be able to generate diverse valid outputs. However, LLM post-training often focuses on improving generation quality while neglecting output diversity. Hence, we investigate post-training approaches that promote both output diversity and quality in creative writing generation. Our core idea is to include deviation -- the degree of difference between a training sample and all other samples with the same prompt -- in the training objective to facilitate learning from rare high-quality instances. By applying our approach to direct preference optimization (DPO) and odds ratio preference optimization (ORPO), we demonstrate that we can promote the output diversity of trained models while minimally decreasing quality. Our best model with 8B parameters achieves diversity on par with a human-created dataset while maintaining output quality similar to that of the best instruction-tuned models we examined, GPT-4o and DeepSeek-R1. We further validate our approaches with a human evaluation, an ablation study, and a comparison to an existing diversification approach, DivPO.
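To make the core idea concrete, below is a minimal sketch of how a deviation score could be computed and folded into a preference-optimization objective. It assumes deviation is measured as the mean pairwise embedding dissimilarity among responses to the same prompt and that it acts as a per-example weight on a standard DPO loss; the function names, the cosine-distance measure, and the weighting scheme are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch of deviation-weighted DPO (illustrative; not the paper's exact method).
import torch
import torch.nn.functional as F


def deviation_scores(embeddings: torch.Tensor) -> torch.Tensor:
    """Deviation of each response: mean cosine distance to the other responses
    generated for the same prompt. embeddings: (num_samples, dim)."""
    normed = F.normalize(embeddings, dim=-1)
    sims = normed @ normed.T                    # pairwise cosine similarities
    n = sims.size(0)
    off_diag = sims.sum(dim=-1) - sims.diag()   # drop self-similarity
    return 1.0 - off_diag / (n - 1)             # higher = more distinct from siblings


def deviation_weighted_dpo_loss(
    policy_chosen_logps: torch.Tensor,    # log p_theta(chosen | prompt), shape (batch,)
    policy_rejected_logps: torch.Tensor,
    ref_chosen_logps: torch.Tensor,       # same quantities under the frozen reference model
    ref_rejected_logps: torch.Tensor,
    chosen_deviation: torch.Tensor,       # deviation score of each chosen response
    beta: float = 0.1,
) -> torch.Tensor:
    """Standard DPO loss, scaled per example by the chosen response's deviation so
    that rare (high-deviation) high-quality samples contribute more to training."""
    chosen_ratio = policy_chosen_logps - ref_chosen_logps
    rejected_ratio = policy_rejected_logps - ref_rejected_logps
    dpo_loss = -F.logsigmoid(beta * (chosen_ratio - rejected_ratio))
    return (chosen_deviation * dpo_loss).mean()


if __name__ == "__main__":
    # Toy example: four responses to one prompt, 8-dimensional embeddings.
    emb = torch.randn(4, 8)
    dev = deviation_scores(emb)
    loss = deviation_weighted_dpo_loss(
        policy_chosen_logps=torch.tensor([-10.0, -12.0]),
        policy_rejected_logps=torch.tensor([-11.0, -11.5]),
        ref_chosen_logps=torch.tensor([-10.5, -12.2]),
        ref_rejected_logps=torch.tensor([-10.8, -11.0]),
        chosen_deviation=dev[:2],
    )
    print(dev, loss)
```

The same per-example weighting could in principle be attached to an ORPO-style objective; the sketch only illustrates the general pattern of letting deviation modulate how strongly each preference pair is learned.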