LMSYS-Chat-1M: A Large-Scale Real-World LLM Conversation Dataset
Abstract
Studying how people interact with large language models (LLMs) in real-world scenarios is increasingly important due to their widespread use in various applications. In this paper, we introduce LMSYS-Chat-1M, a large-scale dataset containing one million real-world conversations with 25 state-of-the-art LLMs. This dataset is collected from 210K unique IP addresses in the wild on our Vicuna demo and Chatbot Arena website. We offer an overview of the dataset's content, including its curation process, basic statistics, and topic distribution, highlighting its diversity, originality, and scale. We demonstrate its versatility through four use cases: developing content moderation models that perform similarly to GPT-4, building a safety benchmark, training instruction-following models that perform similarly to Vicuna, and creating challenging benchmark questions. We believe that this dataset will serve as a valuable resource for understanding and advancing LLM capabilities. The dataset is publicly available at https://huggingface.co/datasets/lmsys/lmsys-chat-1m.
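Since the dataset is distributed on Hugging Face, working with it typically means iterating over conversation records. The sketch below is a minimal illustration, not the authors' code: the record fields (`model`, `language`, `conversation` with `role`/`content` turns) are assumptions based on the dataset card's general shape, and the sample rows are hypothetical.

```python
# Minimal sketch of inspecting LMSYS-Chat-1M-style records.
# Field names (`model`, `language`, `conversation`) are assumed, not
# taken from this paper.
from collections import Counter

def count_by_model(records):
    """Tally how many conversations each LLM produced."""
    return Counter(r["model"] for r in records)

# Hypothetical records shaped like LMSYS-Chat-1M rows.
sample = [
    {"model": "vicuna-13b", "language": "English",
     "conversation": [{"role": "user", "content": "Hi"},
                      {"role": "assistant", "content": "Hello!"}]},
    {"model": "vicuna-13b", "language": "English",
     "conversation": [{"role": "user", "content": "Explain RLHF"},
                      {"role": "assistant", "content": "..."}]},
    {"model": "gpt-4", "language": "Portuguese",
     "conversation": [{"role": "user", "content": "Olá"},
                      {"role": "assistant", "content": "Olá!"}]},
]

print(count_by_model(sample))

# Against the real data, one would stream it with the `datasets` library
# (access requires accepting the dataset's terms on Hugging Face):
#   from datasets import load_dataset
#   ds = load_dataset("lmsys/lmsys-chat-1m", split="train", streaming=True)
```

Streaming mode avoids downloading all one million conversations at once, which matters at this scale.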
Community
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- Large Language Model as a User Simulator (2023)
- MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback (2023)
- Efficient Finetuning Large Language Models For Vietnamese Chatbot (2023)
- Better Zero-Shot Reasoning with Role-Play Prompting (2023)
- In-Context Alignment: Chat with Vanilla Language Models Before Fine-Tuning (2023)