
Model Card for EnvRoBERTa-base

Model Description

Based on this paper, EnvRoBERTa-base is a language model trained to better understand environmental texts in the ESG domain.

Note: We generally recommend the EnvironmentalBERT-base model, since it is faster, less resource-intensive, and only marginally worse in performance.

Using the RoBERTa model as a starting point, EnvRoBERTa-base is additionally pre-trained on a text corpus comprising environment-related annual reports, sustainability reports, and corporate and general news.
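
Below is a minimal usage sketch, assuming the model is hosted on the Hugging Face Hub under the ID ESGBERT/EnvRoBERTa-base and that the transformers library is installed. It loads the pre-trained masked language model and probes it directly; for downstream ESG tasks the model would typically be fine-tuned instead.

# Minimal sketch: load EnvRoBERTa-base and probe its masked-language-model head.
# Assumes the Hub ID "ESGBERT/EnvRoBERTa-base" and an installed `transformers` library.
from transformers import AutoTokenizer, AutoModelForMaskedLM, pipeline

model_name = "ESGBERT/EnvRoBERTa-base"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

# RoBERTa-style models use "<mask>" as the mask token.
fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
print(fill_mask("The company reduced its carbon <mask> by 20% last year."))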

More details can be found in the paper:

@article{Schimanski23ESGBERT,
    title={{Bridging the Gap in ESG Measurement: Using NLP to Quantify Environmental, Social, and Governance Communication}},
    author={Tobias Schimanski and Andrin Reding and Nico Reding and Julia Bingler and Mathias Kraus and Markus Leippold},
    year={2023},
    journal={Available on SSRN: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4622514},
}
