---
license: cc-by-nc-4.0
language: 
  - ar
tags:
  - Arabic BERT
  - Poetry
  - Masked Language Model

---


**AraPoemBERT** is the first pre-trained large language model focused exclusively on Arabic poetry. It was pretrained on a dataset of more than 2 million verses. The code files and results are available in the [GitHub repository](https://github.com/FaisalQarah/araPoemBERT).
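As a masked language model, AraPoemBERT can be queried through the `transformers` fill-mask pipeline. The sketch below is a minimal, hypothetical example: the model id `faisalq/araPoemBERT` is an assumption and should be checked against the published repository, and the Arabic input verse is purely illustrative.

```python
# Minimal fill-mask sketch. The model id "faisalq/araPoemBERT" is an
# assumption -- verify the actual id on the Hugging Face Hub before use.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="faisalq/araPoemBERT")

# Mask one token in an (illustrative) Arabic verse and ask the model
# to rank candidate completions.
predictions = fill_mask("قفا نبك من ذكرى حبيب [MASK]")
for p in predictions:
    print(p["token_str"], round(p["score"], 4))
```

Each prediction is a dict containing the candidate token (`token_str`), its probability (`score`), and the completed sequence; the pipeline returns the top five candidates by default.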



# BibTex

If you use the AraPoemBERT model in your scientific publication, or if you find the resources in this repository useful, please cite our paper as follows:
```bibtex
@article{qarah2024arapoembert,
  title={AraPoemBERT: A Pretrained Language Model for Arabic Poetry Analysis},
  author={Qarah, Faisal},
  journal={arXiv preprint arXiv:2403.12392},
  year={2024}
}
```