Update README.md
README.md (changed)
@@ -26,47 +26,50 @@ In order to validate the annotation, we search for an agreement between raters t
## How to use
### For masked-LM model (can be fine-tuned to any downstream task)

```
from transformers import AutoTokenizer, AutoModel

# load the pretrained HeBERT tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("avichr/heBERT")
model = AutoModel.from_pretrained("avichr/heBERT")

# fill-mask pipeline for masked-token prediction
from transformers import pipeline
fill_mask = pipeline(
    "fill-mask",
    model="avichr/heBERT",
    tokenizer="avichr/heBERT"
)
fill_mask("ืืงืืจืื ื ืืงืื ืืช [MASK] ืืื ื ืื ื ืฉืืจ ืืืจ.")
```
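The heading above notes that the masked-LM checkpoint can be fine-tuned to downstream tasks. As a rough sketch of what that looks like with the Hugging Face `Trainer` (not from this repo: the toy texts, the binary label set, the `hebert-finetuned` output directory, and the hyperparameters below are placeholders for illustration only):

```
import torch
from transformers import (
    AutoTokenizer,
    AutoModelForSequenceClassification,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("avichr/heBERT")
# hypothetical downstream task: binary classification, hence num_labels=2
model = AutoModelForSequenceClassification.from_pretrained("avichr/heBERT", num_labels=2)

# toy data for illustration only -- replace with your own labeled corpus
texts = ["ืืฉืืจืืช ืืื ืืฆืืื", "ืืืฆืจ ืืืขืช"]
labels = [1, 0]
encodings = tokenizer(texts, truncation=True, padding=True)

class ToyDataset(torch.utils.data.Dataset):
    def __init__(self, encodings, labels):
        self.encodings = encodings
        self.labels = labels
    def __getitem__(self, idx):
        item = {k: torch.tensor(v[idx]) for k, v in self.encodings.items()}
        item["labels"] = torch.tensor(self.labels[idx])
        return item
    def __len__(self):
        return len(self.labels)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="hebert-finetuned", num_train_epochs=1, per_device_train_batch_size=2),
    train_dataset=ToyDataset(encodings, labels),
)
trainer.train()
```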

### For sentiment classification model (polarity ONLY):

```
from transformers import AutoTokenizer, AutoModel, pipeline

tokenizer = AutoTokenizer.from_pretrained("avichr/heBERT_sentiment_analysis")  # same as 'avichr/heBERT' tokenizer
model = AutoModel.from_pretrained("avichr/heBERT_sentiment_analysis")

# how to use?
sentiment_analysis = pipeline(
    "sentiment-analysis",
    model="avichr/heBERT_sentiment_analysis",
    tokenizer="avichr/heBERT_sentiment_analysis",
    return_all_scores=True
)

sentiment_analysis('ืื ื ืืชืืื ืื ืืืืื ืืืจืืืช ืฆืืจืืื')
>>> [[{'label': 'natural', 'score': 0.9978172183036804},
>>> {'label': 'positive', 'score': 0.0014792329166084528},
>>> {'label': 'negative', 'score': 0.0007035882445052266}]]

sentiment_analysis('ืงืคื ืื ืืขืื')
>>> [[{'label': 'natural', 'score': 0.00047328314394690096},
>>> {'label': 'possitive', 'score': 0.9994067549705505},
>>> {'label': 'negetive', 'score': 0.00011996887042187154}]]

sentiment_analysis('ืื ื ืื ืืืื ืืช ืืขืืื')
>>> [[{'label': 'natural', 'score': 9.214012970915064e-05},
>>> {'label': 'possitive', 'score': 8.876807987689972e-05},
>>> {'label': 'negetive', 'score': 0.9998190999031067}]]
```
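Because `return_all_scores=True` makes the pipeline return a list of per-label score dicts for each input, a tiny follow-up snippet (just a sketch building on the `sentiment_analysis` pipeline defined above, not part of the original example) can pull out the single highest-scoring polarity:

```
# pick the top polarity label from the nested output shown above (illustrative helper)
result = sentiment_analysis('ืงืคื ืื ืืขืื')[0]   # list of {'label': ..., 'score': ...} dicts
top = max(result, key=lambda item: item['score'])
print(top['label'], round(top['score'], 4))
```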
Our model is also available on AWS! For more information, visit [AWS' git](https://github.com/aws-samples/aws-lambda-docker-serverless-inference/tree/main/hebert-sentiment-analysis-inference-docker-lambda)
our git: https://github.com/avichaychriqui/HeBERT

Chriqui, A., & Yahav, I. (2021). HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition. arXiv preprint arXiv:2102.01909.

```
@article{chriqui2021hebert,
  title={HeBERT \& HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition},
  author={Chriqui, Avihay and Yahav, Inbal},
  journal={arXiv preprint arXiv:2102.01909},
  year={2021}
}
```