arXiv:2309.16609

Qwen Technical Report

Published on Sep 28, 2023 · Featured in Daily Papers on Sep 29, 2023

Abstract

Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series: a comprehensive family that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for building agent applications, showing impressive performance even against larger models on complex tasks such as using a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, all built upon the base language models. These models significantly outperform comparable open-source models while falling only slightly behind proprietary ones.
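The chat models are distributed through the Hugging Face Hub. Below is a minimal usage sketch, assuming the `Qwen/Qwen-7B-Chat` checkpoint and the `chat()` convenience method exposed by the repo's custom modeling code (loaded via `trust_remote_code=True`); check the current model card before relying on this exact interface.

```python
# Hedged sketch: querying Qwen-7B-Chat through the transformers API.
# Assumes the Hub checkpoint "Qwen/Qwen-7B-Chat" and its custom modeling
# code, which exposes a chat() helper -- verify against the model card.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-7B-Chat", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen-7B-Chat",
    device_map="auto",        # spread weights across available devices
    trust_remote_code=True,   # load the repo's custom model class
).eval()

# chat() applies the chat template and returns the reply plus updated history.
response, history = model.chat(tokenizer, "What is the capital of France?", history=None)
print(response)
```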

Community

This looks like it just blew away the previous open-source SOTA evaluations, so why doesn't it have more upvotes? Am I missing something?

Does Qwen-7B support flash_attention?
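For readers with the same question: at release, the Qwen-7B custom modeling code reportedly picked up flash-attn automatically when the package was installed. A hedged sketch under that assumption (verify against the current model card, since the detection logic and package requirements may have changed):

```python
# Hedged sketch: loading Qwen-7B in half precision so flash-attn can be used.
# Assumption: the repo's custom code auto-detects an installed flash-attn
# package -- confirm on the model card before relying on this behavior.
#   pip install flash-attn   # optional speedup; requires a CUDA toolchain
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-7B", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen-7B",
    torch_dtype=torch.bfloat16,  # flash attention only runs in fp16/bf16
    device_map="auto",
    trust_remote_code=True,
).eval()
```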


Models citing this paper: 36


Datasets citing this paper: 0

No datasets link to this paper yet.

Cite arxiv.org/abs/2309.16609 in a dataset README.md to link it from this page.

Spaces citing this paper: 192

Collections including this paper: 27