
ParsBERT (v2.0)

A Transformer-based Model for Persian Language Understanding

We reconstructed the vocabulary and fine-tuned ParsBERT v1.1 on new Persian corpora to make ParsBERT usable in additional scopes. Please follow the ParsBERT repo for the latest information about previous and current models.

Persian Sentiment [Digikala, SnappFood, DeepSentiPers]

This task classifies text, such as user comments, according to its emotional bias. We evaluated three well-known datasets for this task: Digikala user comments, SnappFood user comments, and DeepSentiPers in both binary and multi-class forms.

Digikala

The Digikala user comments dataset is provided by the Open Data Mining Program (ODMP). It contains 62,321 user comments with three labels:

| Label            | #     |
|------------------|-------|
| no_idea          | 10394 |
| not_recommended  | 15885 |
| recommended      | 36042 |

Download: You can download the dataset from here.
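
A minimal sketch of preparing the Digikala comments for fine-tuning. The file name "digikala_comments.csv" and the "comment"/"label" column names are assumptions for illustration; adjust them to the actual release.

```python
# Assumed file and column names; adapt to the downloaded dataset.
import pandas as pd

label2id = {"no_idea": 0, "not_recommended": 1, "recommended": 2}

df = pd.read_csv("digikala_comments.csv")
df["label_id"] = df["label"].map(label2id)

# Label distribution should roughly match the counts listed above.
print(df["label"].value_counts())
```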

Results

The following table summarizes the F1 score obtained by ParsBERT as compared to other models and architectures.

| Dataset                | ParsBERT v2 | ParsBERT v1 | mBERT | DeepSentiPers |
|------------------------|-------------|-------------|-------|---------------|
| Digikala User Comments | 81.72       | 81.74*      | 80.74 | -             |

How to use :hugs:

| Task               | Notebook      |
|--------------------|---------------|
| Sentiment Analysis | Open In Colab |
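
A minimal usage sketch with the Transformers pipeline. The model identifier below is an assumption (check the ParsBERT repo or this model page for the exact checkpoint name).

```python
# Assumed Hub id; replace with the identifier of this checkpoint if it differs.
from transformers import pipeline

model_id = "HooshvareLab/bert-fa-base-uncased-sentiment-digikala"

sentiment = pipeline("sentiment-analysis", model=model_id, tokenizer=model_id)

# Classify a Persian product comment; labels follow the Digikala scheme
# (recommended / not_recommended / no_idea).
print(sentiment("این گوشی واقعا عالی بود و از خریدم راضی هستم"))
```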

BibTeX entry and citation info

Please cite this work in publications as follows:

@article{ParsBERT,
    title={ParsBERT: Transformer-based Model for Persian Language Understanding},
    author={Mehrdad Farahani and Mohammad Gharachorloo and Marzieh Farahani and Mohammad Manthouri},
    journal={ArXiv},
    year={2020},
    volume={abs/2005.12515}
}

Questions?

Post a GitHub issue on the ParsBERT Issues repo.
