go-inoue committed
Commit a5482ef
1 Parent(s): 6193da3

Update README.md

Files changed (1): README.md (+7 -3)

README.md CHANGED
@@ -9,7 +9,9 @@ widget:
  ## Model description
  **CAMeLBERT-CA SA Model** is a Sentiment Analysis (SA) model that was built by fine-tuning the [CAMeLBERT Classical Arabic (CA)](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-ca/) model.
  For the fine-tuning, we used the [ASTD](https://aclanthology.org/D15-1299.pdf), [ArSAS](http://lrec-conf.org/workshops/lrec2018/W30/pdf/22_W30.pdf), and [SemEval](https://aclanthology.org/S17-2088.pdf) datasets.
- Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
+ Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."*
+ Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
+
  ## Intended uses
  You can use the CAMeLBERT-CA SA model directly as part of our [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) SA component (*recommended*) or as part of the transformers pipeline.
  #### How to use
@@ -25,13 +27,15 @@ You can also use the SA model directly with a transformers pipeline:
  ```python
  >>> from transformers import pipeline
- >>> sa = pipeline('sentiment-analysis', model='CAMeL-Lab/bert-base-arabic-camelbert-ca-sentiment')
+ >>> sa = pipeline('text-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-ca-sentiment')
  >>> sentences = ['أنا بخير', 'أنا لست بخير']
  >>> sa(sentences)
  [{'label': 'positive', 'score': 0.9616648554801941},
  {'label': 'negative', 'score': 0.9779177904129028}]
  ```
- *Note*: to download our models, you would need `transformers>=3.5.0`. Otherwise, you could download the models manually.
+ *Note*: to download our models, you would need `transformers>=3.5.0`.
+ Otherwise, you could download the models manually.
+
  ## Citation
  ```bibtex
  @inproceedings{inoue-etal-2021-interplay,
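The pipeline call in the README returns a list of `{'label', 'score'}` dicts, as the example output shows. A minimal sketch of post-processing that output format (the `summarize` helper and the hard-coded sample results are illustrative, not part of the model's or library's API):

```python
# Post-process pipeline-style sentiment outputs: keep the predicted label
# and round the score for display. The dict format follows the README's
# example output; `summarize` is an illustrative helper, not a library API.
def summarize(results):
    return [(r["label"], round(r["score"], 2)) for r in results]

# Sample results copied from the README's pipeline example.
outputs = [
    {"label": "positive", "score": 0.9616648554801941},
    {"label": "negative", "score": 0.9779177904129028},
]

print(summarize(outputs))  # [('positive', 0.96), ('negative', 0.98)]
```

This keeps model inference (which needs the downloaded checkpoint) separate from any display or aggregation logic operating on its results.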