
The development of large language models (LLMs) has seen rapid progress in recent years. One of the most widely used LLM families is the Generative Pre-trained Transformer (GPT) series, which has been applied in various fields, including the media industry. In practical applications, however, the gap between the media industry's use cases and the general-purpose applications of LLMs has become increasingly apparent, especially for Chinese-language media. As a result, there is a growing need to develop LLMs specifically tailored to the unique requirements of the media industry.

In this work, we present MediaGPT, a large language model trained on a variety of media data to address the practical needs of Chinese media.

We have designed a diverse set of task instruction types to cater to the specific requirements of the industry. To validate the effectiveness of our proposed model, we have constructed datasets tailored to the media industry and developed verification methods designed specifically for generative tasks. In doing so, we aim to bridge the gap between general-purpose LLMs and the requirements of the media industry, and to pave the way for more effective and efficient use of LLMs in this field.

MediaGPT aims to explore the challenges and opportunities of developing LLMs for media applications and to propose potential solutions for addressing these challenges.
