---
license: apache-2.0
language: fa
widget:
  - text: این بود [MASK] های ما؟
  - text: داداچ داری [MASK] میزنی
  - text: به علی [MASK] میگفتن جادوگر
  - text: آخه محسن [MASK] هم شد خواننده؟
  - text: پسر عجب [MASK] زد
tags:
  - BERTweet
model-index:
  - name: BERTweet-FA
    results: []
---

# BERTweet-FA: A Pre-trained Language Model for Persian (a.k.a. Farsi) Tweets

BERTweet-FA is a transformer-based masked language model trained on 20,665,964 Persian tweets. Although it was trained for only one epoch (322,906 steps), it can already recognize the meaning of most conversational sentences used in Farsi. Note that the architecture of this model follows the original BERT (Devlin et al.).
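Because the checkpoint uses the standard BERT architecture, its shape can be inspected directly from the published configuration. The following is a minimal sketch; the printed attributes are standard `transformers` config fields, not values stated in this README:

```python
from transformers import AutoConfig

# Inspect the checkpoint's configuration to confirm it is a plain BERT encoder
config = AutoConfig.from_pretrained('arm-on/BERTweet-FA')
print(config.model_type)         # expected: 'bert'
print(config.num_hidden_layers)  # depth of the encoder
print(config.hidden_size)        # width of each layer
```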

## How to Use the Model

```python
from transformers import BertForMaskedLM, BertTokenizer, pipeline

# Load the pre-trained model and its tokenizer
model = BertForMaskedLM.from_pretrained('arm-on/BERTweet-FA')
tokenizer = BertTokenizer.from_pretrained('arm-on/BERTweet-FA')

# Build a fill-mask pipeline and pass it a sentence containing [MASK]
fill_sentence = pipeline('fill-mask', model=model, tokenizer=tokenizer)
fill_sentence('اینجا جمله مورد نظر خود را بنویسید و کلمه موردنظر را [MASK] کنید')  # "Write your sentence here and [MASK] the target word"
```
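The pipeline returns a list of candidate completions, each a dict with a `score`, the predicted `token_str`, and the filled-in `sequence`. If you need more control than the pipeline offers, below is a minimal sketch of the equivalent manual forward pass; the example sentence is one of the widget prompts above, and `torch` is assumed to be installed alongside `transformers`:

```python
import torch
from transformers import BertForMaskedLM, BertTokenizer

model = BertForMaskedLM.from_pretrained('arm-on/BERTweet-FA')
tokenizer = BertTokenizer.from_pretrained('arm-on/BERTweet-FA')

text = 'این بود [MASK] های ما؟'  # "Was this our [MASK]?" (a widget example above)
inputs = tokenizer(text, return_tensors='pt')

with torch.no_grad():
    logits = model(**inputs).logits

# Find the position of [MASK] and take the five highest-scoring vocabulary entries
mask_positions = (inputs['input_ids'] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
top_ids = logits[0, mask_positions].topk(5).indices[0].tolist()
print(tokenizer.convert_ids_to_tokens(top_ids))
```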

## The Training Data

The first version of the model was trained on the "Large Scale Colloquial Persian Dataset", which contains more than 20 million Farsi tweets gathered by Khojasteh et al. and published in 2020.

## Evaluation

| Training Loss | Epoch | Step   |
|:-------------:|:-----:|:------:|
| 0.0036        | 1.0   | 322906 |

## Contributors