arxiv:2303.17564

BloombergGPT: A Large Language Model for Finance

Published on Mar 30, 2023

Abstract

The use of NLP in the realm of financial technology is broad and complex, with applications ranging from sentiment analysis and named entity recognition to question answering. Large Language Models (LLMs) have been shown to be effective on a variety of tasks; however, no LLM specialized for the financial domain has been reported in the literature. In this work, we present BloombergGPT, a 50 billion parameter language model that is trained on a wide range of financial data. We construct a 363 billion token dataset based on Bloomberg's extensive data sources, perhaps the largest domain-specific dataset yet, augmented with 345 billion tokens from general purpose datasets. We validate BloombergGPT on standard LLM benchmarks, open financial benchmarks, and a suite of internal benchmarks that most accurately reflect our intended usage. Our mixed dataset training leads to a model that outperforms existing models on financial tasks by significant margins without sacrificing performance on general LLM benchmarks. Additionally, we explain our modeling choices, training process, and evaluation methodology. As a next step, we plan to release training logs (Chronicles) detailing our experience in training BloombergGPT.
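The abstract's central training choice is a mixed corpus: roughly 363 billion tokens of financial text combined with 345 billion tokens of general-purpose text. As a minimal sketch of what sampling from such a mixture in proportion to token counts might look like, here is a short Python example; only the 363B/345B figures come from the abstract, while the corpus contents, the `mixed_batches` helper, and the iterator interface are hypothetical illustrations rather than the paper's actual pipeline.

```python
import random

# Token counts quoted in the abstract: ~363B financial tokens and ~345B
# general-purpose tokens. Only these two figures come from the paper; the
# corpus names, the `mixed_batches` helper, and the iterator interface
# below are hypothetical and purely illustrative.
FINANCIAL_TOKENS = 363e9
GENERAL_TOKENS = 345e9
TOTAL_TOKENS = FINANCIAL_TOKENS + GENERAL_TOKENS

# Sampling weights proportional to token counts (~51.3% financial, ~48.7% general).
WEIGHTS = {
    "financial": FINANCIAL_TOKENS / TOTAL_TOKENS,
    "general": GENERAL_TOKENS / TOTAL_TOKENS,
}


def mixed_batches(financial_docs, general_docs, n_draws, seed=0):
    """Yield (source, document) pairs, drawing each document from one of the
    two corpora with probability proportional to its token count."""
    rng = random.Random(seed)
    sources = {"financial": iter(financial_docs), "general": iter(general_docs)}
    names = list(WEIGHTS)
    probs = [WEIGHTS[name] for name in names]
    for _ in range(n_draws):
        name = rng.choices(names, weights=probs)[0]
        yield name, next(sources[name])


# Toy usage with small in-memory corpora standing in for the real data.
if __name__ == "__main__":
    fin = (f"financial doc {i}" for i in range(100))
    gen = (f"general doc {i}" for i in range(100))
    for source, doc in mixed_batches(fin, gen, n_draws=5):
        print(source, doc)
```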

Community

appendix C is newly added, no?

Paper author:

Yes, v2 includes the Training Chronicles (Appendix C).

How can I get access to the model?

AFAIK this is not a public model

Hi @shijie-wu, may I know if your "public financial benchmark" mentioned in Sec. 5.3.1 of the paper is available for public benchmarking? Thank you.


Models citing this paper: 1

Datasets citing this paper: 0

Spaces citing this paper: 0

Collections including this paper: 8