This repository contains the DISC-FinLLM model built on Baichuan-13B-Chat as the base model.

Please note that due to the ongoing development of the project, the model weights in this repository may differ from those in our currently deployed demo.

DISC-FinLLM is a large language model for the financial domain, designed to provide users with professional, intelligent, and comprehensive financial consulting services. It is developed and open-sourced by the Fudan University Data Intelligence and Social Computing Laboratory (Fudan-DISC). It is a multi-expert financial system composed of four modules for different financial scenarios: financial consulting, financial text analysis, financial calculation, and financial knowledge retrieval and question answering. These modules showed clear advantages in four evaluations covering financial NLP tasks, human exam questions, data analysis, and current-affairs analysis, demonstrating that DISC-FinLLM can provide strong support for a wide range of financial applications. DISC-FinLLM can help in different application scenarios and can be used to implement different functions:

  • Financial Consultation: This module can hold multi-turn dialogues with users on financial topics in the Chinese financial context and explain professional financial knowledge. It is trained on the financial consulting instructions portion of the dataset.
  • Financial Text Analysis: This module can help users complete NLP tasks on financial texts, such as information extraction, sentiment analysis, text classification, and text generation. It is trained on the financial task instructions in the dataset.
  • Financial Calculation: This module can help users complete tasks involving mathematical calculation. In addition to basic calculations such as interest rates and growth rates, it also supports statistical analysis and financial model calculations, including the Black-Scholes option pricing model and the EDF (Expected Default Frequency) model (a standalone sketch of such a calculation follows this list). This module is partially trained on the financial computing instructions in the dataset.
  • Financial Knowledge Retrieval Q&A: This module can provide users with investment advice, current-affairs analysis, and policy interpretation based on financial news, research reports, and related policy documents. It is partially trained on the retrieval-enhanced instructions in the dataset.
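
To make concrete the kind of computation the Financial Calculation module targets, here is a small, self-contained Python sketch of Black-Scholes pricing for a European call option. This is standard textbook math for illustration only; it is not code from DISC-FinLLM, and the input numbers are arbitrary.

# Black-Scholes price of a European call option, for illustration only;
# not part of the DISC-FinLLM codebase.
from math import exp, log, sqrt
from statistics import NormalDist

def black_scholes_call(S, K, T, r, sigma):
    """S: spot price, K: strike, T: maturity in years,
    r: annual risk-free rate, sigma: annual volatility."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    N = NormalDist().cdf  # standard normal CDF
    return S * N(d1) - K * exp(-r * T) * N(d2)

print(round(black_scholes_call(S=100, K=105, T=0.5, r=0.03, sigma=0.2), 2))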

Check our HOME for more information.

DISC-Fin-SFT Dataset

DISC-FinLLM is a large financial model built on the high-quality financial dataset DISC-Fin-SFT. We constructed DISC-Fin-SFT and performed LoRA instruction fine-tuning on the general-domain Chinese large model Baichuan-13B-Chat. DISC-Fin-SFT contains about 250,000 samples in total, divided into four sub-datasets: financial consulting instructions, financial task instructions, financial computing instructions, and retrieval-enhanced instructions.

Dataset                             Samples   Input Length   Output Length
Financial Consulting Instructions   63k       26             369
Financial Task Instructions         110k      676            35
Financial Computing Instructions    57k       73             190
Retrieval-enhanced Instructions     20k       1031           521
DISC-Fin-SFT                        246k      351            198
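
For readers curious how the LoRA instruction fine-tuning mentioned above is typically wired up, the sketch below uses the Hugging Face peft library on top of Baichuan-13B-Chat. The rank, scaling factor, dropout, and target modules are illustrative assumptions, not the actual DISC-FinLLM training configuration.

# Minimal LoRA setup sketch with peft; hyperparameters are assumptions,
# not the DISC-FinLLM training configuration.
import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained(
    "baichuan-inc/Baichuan-13B-Chat",
    torch_dtype=torch.float16,
    trust_remote_code=True,
)
lora_config = LoraConfig(
    r=16,                       # assumed LoRA rank
    lora_alpha=32,              # assumed scaling factor
    lora_dropout=0.05,          # assumed dropout
    target_modules=["W_pack"],  # Baichuan's fused QKV projection (assumed target)
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the low-rank adapter weights are trained

Training then proceeds with a standard causal-LM objective over the instruction data, with only the adapter weights updated.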

Using through Hugging Face Transformers

>>> import torch
>>> from transformers import AutoModelForCausalLM, AutoTokenizer
>>> from transformers.generation.utils import GenerationConfig
>>> tokenizer = AutoTokenizer.from_pretrained("Go4miii/DISC-FinLLM", use_fast=False, trust_remote_code=True)
>>> model = AutoModelForCausalLM.from_pretrained("Go4miii/DISC-FinLLM", device_map="auto", torch_dtype=torch.float16, trust_remote_code=True)
>>> model.generation_config = GenerationConfig.from_pretrained("Go4miii/DISC-FinLLM")
>>> messages = []
>>> messages.append({"role": "user", "content": "请解释一下什么是银行不良资产?"})
>>> response = model.chat(tokenizer, messages)
>>> print(response)
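
Multi-round consultation works through the same chat interface: append the model's reply to messages before asking a follow-up. The continuation below is a minimal sketch, and the follow-up question is an arbitrary example.

>>> messages.append({"role": "assistant", "content": response})
>>> messages.append({"role": "user", "content": "银行通常如何处置不良资产?"})  # "How do banks usually dispose of non-performing assets?"
>>> response = model.chat(tokenizer, messages)
>>> print(response)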

Disclaimer

DISC-FinLLM has problems and shortcomings that current large language models cannot yet overcome. Although it can provide services for many financial tasks and scenarios, the model should be used for reference only and cannot replace professional financial analysts or financial experts; we hope users of DISC-FinLLM will evaluate the model critically. We are not responsible for any problems, risks, or adverse consequences arising from the use of DISC-FinLLM.

Citation

If our project has been helpful for your research and work, please kindly cite our work as follows:

@misc{yue2023disclawllm,
    title={DISC-LawLLM: Fine-tuning Large Language Models for Intelligent Legal Services}, 
    author={Shengbin Yue and Wei Chen and Siyuan Wang and Bingxuan Li and Chenchen Shen and Shujun Liu and Yuxuan Zhou and Yao Xiao and Song Yun and Xuanjing Huang and Zhongyu Wei},
    year={2023},
    eprint={2309.11325},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}

License

The use of the source code in this repository is subject to the Apache 2.0 License.
