Contributed by the Hooshvare Research Lab (non-profit)

ParsBERT (v2.0)

A Transformer-based Model for Persian Language Understanding

We reconstructed the vocabulary and fine-tuned ParsBERT v1.1 on new Persian corpora to make ParsBERT usable in additional scopes. Please follow the ParsBERT repo for the latest information about previous and current models.

Persian Text Classification [DigiMag, Persian News]

The goal of this task is to label texts in a supervised manner using two existing datasets, DigiMag and Persian News.


DigiMag: a total of 8,515 articles scraped from the Digikala Online Magazine. This dataset includes seven classes:

  1. Video Games
  2. Shopping Guide
  3. Health Beauty
  4. Science Technology
  5. General
  6. Art Cinema
  7. Books Literature
| Label              | #    |
|--------------------|------|
| Video Games        | 1967 |
| Shopping Guide     | 125  |
| Health Beauty      | 1610 |
| Science Technology | 2772 |
| General            | 120  |
| Art Cinema         | 1667 |
| Books Literature   | 254  |
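The class distribution above is heavily imbalanced (Science Technology has over 23 times as many articles as General), which is worth keeping in mind when training or when choosing an F1 averaging scheme. A quick sanity check of the table in Python:

```python
# Class counts copied from the DigiMag table above.
counts = {
    "Video Games": 1967,
    "Shopping Guide": 125,
    "Health Beauty": 1610,
    "Science Technology": 2772,
    "General": 120,
    "Art Cinema": 1667,
    "Books Literature": 254,
}

total = sum(counts.values())
print(total)  # 8515, matching the article count stated above

# Print labels from most to least frequent with their share of the dataset.
for label, n in sorted(counts.items(), key=lambda kv: -kv[1]):
    print(f"{label:20s} {n:5d}  {100 * n / total:5.1f}%")
```

Science Technology alone accounts for roughly a third of all articles, while General and Shopping Guide together make up under 3%.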

Download: you can download the dataset from here.


The following table summarizes the F1 score obtained by ParsBERT as compared to other models and architectures.

| Dataset           | ParsBERT v2 | ParsBERT v1 | mBERT |
|-------------------|-------------|-------------|-------|
| Digikala Magazine | 93.65*      | 93.59       | 90.72 |
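The F1 scores above are easiest to interpret with the computation in hand. Below is a small self-contained sketch of macro-averaged F1 in pure Python; note this is one common averaging choice (the reported scores may use a different scheme, e.g. weighted F1), and the toy labels are illustrative only:

```python
def per_class_f1(y_true, y_pred, label):
    """F1 for a single label via one-vs-rest true/false positive counts."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == label and p == label)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != label and p == label)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == label and p != label)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def macro_f1(y_true, y_pred):
    """Unweighted mean of per-class F1 over the labels present in y_true."""
    labels = sorted(set(y_true))
    return sum(per_class_f1(y_true, y_pred, l) for l in labels) / len(labels)

# Toy example using three of the DigiMag labels.
y_true = ["Video Games", "Health Beauty", "Video Games", "General"]
y_pred = ["Video Games", "Health Beauty", "General", "General"]
print(round(macro_f1(y_true, y_pred), 3))  # 0.778
```

Because macro averaging weights every class equally, rare classes like General (120 articles) influence the score as much as Science Technology (2,772 articles).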

How to use 🤗

| Task                | Notebook      |
|---------------------|---------------|
| Text Classification | Open In Colab |
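Beyond the Colab notebook, the fine-tuned classifier can be loaded through the 🤗 Transformers pipeline API. This is a minimal sketch: the checkpoint id `HooshvareLab/bert-fa-base-uncased-clf-digimag` is an assumption (verify the exact name on the HooshvareLab hub page), and the import and model download are deferred so nothing is fetched unless the script is actually run:

```python
# NOTE: the checkpoint id below is an assumption -- verify it on the
# HooshvareLab hub page before relying on it.
MODEL_ID = "HooshvareLab/bert-fa-base-uncased-clf-digimag"

def classify(texts):
    """Label Persian texts with the DigiMag classifier.

    Downloads the model on first call; requires `pip install transformers`.
    """
    from transformers import pipeline  # deferred import: heavy dependency
    clf = pipeline("text-classification", model=MODEL_ID)
    return clf(texts)

if __name__ == "__main__":
    # Persian for: "review of the best video games of the year"
    print(classify(["بررسی بهترین بازی‌های ویدیویی سال"]))
```

Each returned dict carries `label` and `score` keys, as with any Transformers text-classification pipeline.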

BibTeX entry and citation info

Please cite in publications as follows:

```bibtex
@article{ParsBERT,
  title={ParsBERT: Transformer-based Model for Persian Language Understanding},
  author={Mehrdad Farahani and Mohammad Gharachorloo and Marzieh Farahani and Mohammad Manthouri},
  journal={ArXiv},
  year={2020},
  volume={abs/2005.12515}
}
```


Post a GitHub issue on the ParsBERT Issues repo.