---
language: en
tags:
  - bert
  - business
  - finance
license: cc-by-4.0
datasets:
  - CompanyWeb
  - MD&A
  - S2ORC
---

# BusinessBERT

An industry-sensitive language model for business applications, pretrained on business communication corpora. In addition to masked language modeling (MLM), the model incorporates industry classification (IC) as a pretraining objective.

It was introduced in this paper and released in this repository.

## Model description

We introduce BusinessBERT, an industry-sensitive language model for business applications. The model's advantage is a training approach that incorporates industry information relevant to business-related natural language processing (NLP) tasks. We compile three large-scale textual corpora consisting of annual disclosures, company website content, and scientific literature representing business communication. In total, the corpora include 2.23 billion tokens. BusinessBERT builds on the Bidirectional Encoder Representations from Transformers (BERT) architecture and embeds industry information during pretraining in two ways: (1) the business communication corpora contain a variety of industry-specific terminology; (2) we employ industry classification (IC) as an additional pretraining objective for text documents originating from companies.
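The dual-objective pretraining can be sketched as a weighted sum of the MLM loss over masked positions and the IC loss on the document representation. The equal weighting and the helper functions below are illustrative assumptions, not the paper's exact formulation:

```python
import math

def cross_entropy(logits, target):
    """Softmax cross-entropy for a single example (pure-Python sketch)."""
    m = max(logits)
    log_z = m + math.log(sum(math.exp(x - m) for x in logits))
    return log_z - logits[target]

def pretraining_loss(mlm_logits, mlm_targets, ic_logits, ic_target, ic_weight=1.0):
    """Combined objective: mean MLM loss over masked positions plus an
    industry-classification loss. ic_weight=1.0 is an assumed default."""
    mlm_loss = sum(cross_entropy(l, t)
                   for l, t in zip(mlm_logits, mlm_targets)) / len(mlm_targets)
    ic_loss = cross_entropy(ic_logits, ic_target)
    return mlm_loss + ic_weight * ic_loss

# Two masked positions over a toy 4-token vocabulary, 3 industry classes.
loss = pretraining_loss(
    mlm_logits=[[2.0, 0.1, 0.1, 0.1], [0.1, 3.0, 0.1, 0.1]],
    mlm_targets=[0, 1],
    ic_logits=[1.5, 0.2, 0.2],
    ic_target=0,
)
```

In practice both losses would be computed by the BERT heads over real batches; the sketch only shows how the two objectives combine into one scalar loss.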

## Intended uses & limitations

The model is intended to be fine-tuned on business-related NLP tasks, e.g. sequence classification, named entity recognition, sentiment analysis, or question answering.

### How to use

BusinessBERT can be loaded with the 🤗 Transformers library, either for direct masked-language-model inference or as a base model for fine-tuning.
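A minimal usage sketch with the 🤗 Transformers `pipeline` API. The model id `"businessbert"` is a placeholder, not the actual hub checkpoint name:

```python
from transformers import pipeline

def predict_masked(text, model_id="businessbert"):
    """Fill-mask inference; model_id is a placeholder for the real checkpoint."""
    fill_mask = pipeline("fill-mask", model=model_id)
    return fill_mask(text)

# Example call (requires the checkpoint to be available):
# predict_masked("The company reported strong [MASK] growth this quarter.")
```

For fine-tuning, the same checkpoint id can be passed to task-specific classes such as `AutoModelForSequenceClassification`.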

### Limitations and bias

[PLACEHOLDER]

## Training data

BusinessBERT was pretrained on three business communication corpora: CompanyWeb (company website content), MD&A (Management's Discussion and Analysis sections of annual disclosures), and S2ORC (scientific literature), totaling 2.23 billion tokens.

## Evaluation results

[PLACEHOLDER]

## BibTeX entry and citation info

```bibtex
@misc{title_year,
      title={TITLE},
      author={AUTHORS},
      year={YEAR},
}
```