
CentralBankRoBERTa

A Fine-Tuned Large Language Model for Central Bank Communications

CentralBankRoBERTa is a large language model that combines an economic agent classifier, which distinguishes five basic macroeconomic agents, with a binary sentiment classifier that identifies the emotional content of sentences in central bank communications.

Overview

The AgentClassifier model is designed to classify the target agent of a given text. It can determine whether the text is addressing households, firms, the financial sector, the government, or the central bank itself. The model is based on the RoBERTa architecture and has been fine-tuned on a diverse and extensive dataset to provide accurate predictions.
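
As a text-classification model, it returns a score for each of the five agent classes, so it can be useful to inspect the full distribution rather than only the top label. Below is a minimal sketch using the Transformers pipeline; passing top_k=None to return all scores assumes a reasonably recent Transformers version, and the exact label strings are defined by the model's configuration.

from transformers import pipeline

# Load the AgentClassifier model
agent_classifier = pipeline("text-classification", model="Moritz-Pfeifer/CentralBankRoBERTa-agent-classifier")

# Request scores for all agent classes instead of only the top label
all_scores = agent_classifier("We used our liquidity tools to make funding available to banks that might need it.", top_k=None)

# Print the score the model assigns to each agent class
for entry in all_scores:
    print(f"{entry['label']}: {entry['score']:.3f}")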

Intended Use

The AgentClassifier model is intended for analyzing central bank communications where categorizing content by target agent is essential.

Performance

  • Accuracy: 93%
  • F1 Score: 0.93
  • Precision: 0.93
  • Recall: 0.93
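
As a rough sketch of how such metrics can be computed for any labeled sample with scikit-learn (the file name, column names, and weighted averaging below are placeholders and assumptions, not part of the released model):

import pandas as pd
from sklearn.metrics import accuracy_score, precision_recall_fscore_support
from transformers import pipeline

# Hypothetical evaluation file with "text" and "label" columns;
# gold labels must use the same label strings as the model's outputs
eval_df = pd.read_csv("agent_eval.csv")

agent_classifier = pipeline("text-classification", model="Moritz-Pfeifer/CentralBankRoBERTa-agent-classifier")

# Predict an agent label for every sentence in the evaluation sample
predictions = [result["label"] for result in agent_classifier(eval_df["text"].tolist())]

# Compare predictions against the gold labels
precision, recall, f1, _ = precision_recall_fscore_support(eval_df["label"], predictions, average="weighted")
print("Accuracy: ", accuracy_score(eval_df["label"], predictions))
print("Precision:", precision)
print("Recall:   ", recall)
print("F1 Score: ", f1)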

Usage

You can use these models in your own applications by leveraging the Hugging Face Transformers library. Below is a Python code snippet demonstrating how to load and use the AgentClassifier model:

from transformers import pipeline

# Load the AgentClassifier model
agent_classifier = pipeline("text-classification", model="Moritz-Pfeifer/CentralBankRoBERTa-agent-classifier")

# Perform agent classification
agent_result = agent_classifier("We used our liquidity tools to make funding available to banks that might need it.")
print("Agent Classification:", agent_result[0]['label'])

Please cite this model as: Pfeifer, M. and Marohl, V.P. (2023), "CentralBankRoBERTa: A Fine-Tuned Large Language Model for Central Bank Communications", The Journal of Finance and Data Science, 9, 100114. https://doi.org/10.1016/j.jfds.2023.100114

Moritz Pfeifer
Institute for Economic Policy, University of Leipzig
04109 Leipzig, Germany
pfeifer@wifa.uni-leipzig.de

Vincent P. Marohl
Department of Mathematics, Columbia University
New York, NY 10027, USA
vincent.marohl@columbia.edu

BibTeX entry and citation info

@article{Pfeifer2023,
  title = {CentralBankRoBERTa: A fine-tuned large language model for central bank communications},
  journal = {The Journal of Finance and Data Science},
  volume = {9},
  pages = {100114},
  year = {2023},
  issn = {2405-9188},
  doi = {10.1016/j.jfds.2023.100114},
  url = {https://www.sciencedirect.com/science/article/pii/S2405918823000302},
  author = {Moritz Pfeifer and Vincent P. Marohl},
}