
CentralBankRoBERTa

A Fine-Tuned Large Language Model for Central Bank Communications

CentralBankRoBERTa is a large language model. It combines an economic agent classifier that distinguishes five basic macroeconomic agents with a binary sentiment classifier that identifies the emotional content of sentences in central bank communications.

Overview

The SentimentClassifier model is designed to detect whether a given sentence is positive or negative for households, firms, the financial sector, or the government. The model is based on the RoBERTa architecture and has been fine-tuned on a diverse and extensive dataset to provide accurate predictions.

Intended Use

The SentimentClassifier model is intended for the analysis of central bank communications where sentiment analysis is essential.

Performance

  • Accuracy: 88%
  • F1 Score: 0.88
  • Precision: 0.88
  • Recall: 0.88

Usage

You can use these models in your own applications by leveraging the Hugging Face Transformers library. Below is a Python code snippet demonstrating how to load and use the SentimentClassifier model:

from transformers import pipeline

# Load the SentimentClassifier model
sentiment_classifier = pipeline("text-classification", model="Moritz-Pfeifer/CentralBankRoBERTa-sentiment-classifier")

# Perform sentiment analysis on a single sentence
sentiment_result = sentiment_classifier("The early effects of our policy tightening are also becoming visible, especially in sectors like manufacturing and construction that are more sensitive to interest rate changes.")
print("Sentiment:", sentiment_result[0]['label'])
Please cite this model as Pfeifer, M. and Marohl, V.P. (2023), "CentralBankRoBERTa: A fine-tuned large language model for central bank communications", The Journal of Finance and Data Science, 9, 100114. https://doi.org/10.1016/j.jfds.2023.100114
Moritz Pfeifer
Institute for Economic Policy, University of Leipzig
04109 Leipzig, Germany
pfeifer@wifa.uni-leipzig.de
Vincent P. Marohl
Department of Mathematics, Columbia University
New York NY 10027, USA
vincent.marohl@columbia.edu

BibTeX entry and citation info

@article{Pfeifer2023,
  title = {CentralBankRoBERTa: A fine-tuned large language model for central bank communications},
  journal = {The Journal of Finance and Data Science},
  volume = {9},
  pages = {100114},
  year = {2023},
  issn = {2405-9188},
  doi = {10.1016/j.jfds.2023.100114},
  url = {https://www.sciencedirect.com/science/article/pii/S2405918823000302},
  author = {Moritz Pfeifer and Vincent P. Marohl},
}