---
library_name: Transformers PHP
tags:
- onnx
---

https://huggingface.co/avichr/heBERT with ONNX weights to be compatible with Transformers PHP

## HeBERT: Pre-trained BERT for Polarity Analysis and Emotion Recognition
HeBERT is a Hebrew pretrained language model. It is based on Google's BERT architecture and uses the BERT-Base configuration [(Devlin et al. 2018)](https://arxiv.org/abs/1810.04805). <br>

### HeBERT was trained on three datasets:
1. A Hebrew version of OSCAR [(Ortiz, 2019)](https://oscar-corpus.com/): ~9.8 GB of data, including 1 billion words and over 20.8 million sentences.
2. A Hebrew dump of [Wikipedia](https://dumps.wikimedia.org/hewiki/latest/): ~650 MB of data, including over 63 million words and 3.8 million sentences.
3. Emotion UGC data collected for the purpose of this study (described below).

We evaluated the model on emotion recognition and sentiment analysis as downstream tasks.

### Emotion UGC Data Description
Our User Generated Content (UGC) consists of comments written on articles collected from 3 major news sites between January 2020 and August 2020, totaling ~150 MB of data, including over 7 million words and 350K sentences.
4,000 sentences were annotated by crowd members (3-10 annotators per sentence) for 8 emotions (anger, disgust, expectation, fear, happiness, sadness, surprise, and trust) and overall sentiment/polarity.<br>
To validate the annotation, we measured agreement between raters on the emotion in each sentence using Krippendorff's alpha [(Krippendorff, 1970)](https://journals.sagepub.com/doi/pdf/10.1177/001316447003000105), and kept only sentences with alpha > 0.7. Note that while we found general agreement between raters on emotions like happiness, trust, and disgust, a few emotions drew general disagreement, apparently due to the difficulty of identifying them in text (e.g. expectation and surprise).
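
For illustration, this agreement filter can be reproduced with the third-party `krippendorff` Python package; the following is a minimal sketch with made-up ratings, not the study's actual annotation pipeline:

```
import numpy as np
import krippendorff  # pip install krippendorff

# Hypothetical reliability data: one row per annotator, one column per item;
# values are nominal emotion codes, np.nan marks a missing rating.
ratings = np.array([
    [1, 2, 3, 3, 2, 1, 4, 1, 2, np.nan],
    [1, 2, 3, 3, 2, 2, 4, 1, 2, 5],
    [np.nan, 3, 3, 3, 2, 3, 4, 2, 2, 5],
])

alpha = krippendorff.alpha(reliability_data=ratings, level_of_measurement="nominal")
print(alpha)  # sentences were kept only when alpha > 0.7
```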

## How to use
### For masked-LM model (can be fine-tuned to any downstream task)
```
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("avichr/heBERT")
model = AutoModel.from_pretrained("avichr/heBERT")

from transformers import pipeline
fill_mask = pipeline(
    "fill-mask",
    model="avichr/heBERT",
    tokenizer="avichr/heBERT"
)
fill_mask("הקורונה לקחה את [MASK] ולנו לא נשאר דבר.")  # "The coronavirus took the [MASK] and we have nothing left."
```
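
`fill_mask` returns a list of candidate completions; the keys below are the standard `transformers` fill-mask pipeline output, shown as an illustrative sketch rather than verified output of this model:

```
for pred in fill_mask("הקורונה לקחה את [MASK] ולנו לא נשאר דבר."):
    # each candidate carries the completed sequence, the token id/string, and a score
    print(pred["token_str"], round(pred["score"], 4))
```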

### For sentiment classification model (polarity ONLY):
```
from transformers import AutoTokenizer, AutoModel, pipeline
tokenizer = AutoTokenizer.from_pretrained("avichr/heBERT_sentiment_analysis")  # same as the 'avichr/heBERT' tokenizer
model = AutoModel.from_pretrained("avichr/heBERT_sentiment_analysis")

sentiment_analysis = pipeline(
    "sentiment-analysis",
    model="avichr/heBERT_sentiment_analysis",
    tokenizer="avichr/heBERT_sentiment_analysis",
    return_all_scores=True
)

>>> sentiment_analysis('אני מתלבט מה לאכול לארוחת צהריים')  # "I'm debating what to eat for lunch"
[[{'label': 'natural', 'score': 0.9978172183036804},
{'label': 'positive', 'score': 0.0014792329166084528},
{'label': 'negative', 'score': 0.0007035882445052266}]]

>>> sentiment_analysis('קפה זה טעים')  # "Coffee is tasty"
[[{'label': 'natural', 'score': 0.00047328314394690096},
{'label': 'positive', 'score': 0.9994067549705505},
{'label': 'negative', 'score': 0.00011996887042187154}]]

>>> sentiment_analysis('אני לא אוהב את העולם')  # "I don't like the world"
[[{'label': 'natural', 'score': 9.214012970915064e-05},
{'label': 'positive', 'score': 8.876807987689972e-05},
{'label': 'negative', 'score': 0.9998190999031067}]]
```
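
Note that `natural` appears to be this model's label for the neutral class. Also, on recent `transformers` releases `return_all_scores` is deprecated; `top_k=None` is the equivalent. A hedged variant, assuming a current `transformers` version:

```
sentiment_analysis = pipeline(
    "sentiment-analysis",
    model="avichr/heBERT_sentiment_analysis",
    tokenizer="avichr/heBERT_sentiment_analysis",
    top_k=None  # replaces the deprecated return_all_scores=True
)
```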
Our model is also available on AWS! For more information, visit [AWS' git](https://github.com/aws-samples/aws-lambda-docker-serverless-inference/tree/main/hebert-sentiment-analysis-inference-docker-lambda).

### For NER model:

```
from transformers import pipeline

NER = pipeline(
    "token-classification",
    model="avichr/heBERT_NER",
    tokenizer="avichr/heBERT_NER",
)
NER('דוד לומד באוניברסיטה העברית שבירושלים')  # "David studies at the Hebrew University in Jerusalem"
```
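
By default, the token-classification pipeline emits one prediction per sub-word token. If you want whole entity spans instead, recent `transformers` versions accept an `aggregation_strategy` argument; this variant is an assumption layered on the card's example, not part of it:

```
NER = pipeline(
    "token-classification",
    model="avichr/heBERT_NER",
    tokenizer="avichr/heBERT_NER",
    aggregation_strategy="simple",  # merge B-/I- sub-word pieces into full entity spans
)
NER('דוד לומד באוניברסיטה העברית שבירושלים')
```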

## Stay tuned!
We are still working on our model and will edit this page as we progress.<br>
Note that we have released only sentiment analysis (polarity) at this point; emotion detection will be released later on.<br>
Our git: https://github.com/avichaychriqui/HeBERT

## If you use this model please cite us as:
Chriqui, A., & Yahav, I. (2022). HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition. INFORMS Journal on Data Science, forthcoming.
```
@article{chriqui2021hebert,
  title={HeBERT \& HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition},
  author={Chriqui, Avihay and Yahav, Inbal},
  journal={INFORMS Journal on Data Science},
  year={2022}
}
```

---

Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`).
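
For reference, a minimal sketch of that conversion with 🤗 Optimum; the masked-LM class and output paths here are illustrative assumptions, not this repo's actual build script:

```
from optimum.onnxruntime import ORTModelForMaskedLM
from transformers import AutoTokenizer

# Export the original PyTorch checkpoint to ONNX on the fly
model = ORTModelForMaskedLM.from_pretrained("avichr/heBERT", export=True)
tokenizer = AutoTokenizer.from_pretrained("avichr/heBERT")

# Save with the ONNX weights in an `onnx` subfolder, mirroring this repo's layout
model.save_pretrained("heBERT/onnx")
tokenizer.save_pretrained("heBERT")
```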