# ReviewBERT

BERT (post-)trained on review corpora to understand sentiment, opinions, and various e-commerce aspects.
Please visit https://github.com/howardhsu/BERT-for-RRC-ABSA for details.

`BERT-XD_Review` is a cross-domain language model (beyond just the `laptop` and `restaurant` domains) in which each training example comes from a single product or restaurant with the same rating. It is post-trained (fine-tuned) from `bert-base-uncased` for 4 epochs on a combination of 5-core Amazon reviews and all Yelp data, roughly 22 GB in total.
The preprocessing code is available [here](https://github.com/howardhsu/BERT-for-RRC-ABSA/transformers).

## Model Description

The original model is `bert-base-uncased`.
Models are post-trained on the [Amazon Dataset](http://jmcauley.ucsd.edu/data/amazon/) and the [Yelp Dataset](https://www.yelp.com/dataset/challenge/).
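
As a rough illustration of what such post-training looks like, here is a minimal sketch using only the masked-LM objective via the `transformers` `Trainer` API. This is not the authors' pipeline (their actual code, objective, and preprocessing live in the repository linked above); the corpus, hyperparameters, and output path below are placeholders.

```python
from torch.utils.data import Dataset
from transformers import (AutoTokenizer, AutoModelForMaskedLM,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

class ReviewDataset(Dataset):
    """Tiny in-memory stand-in for the Amazon/Yelp review corpus."""
    def __init__(self, texts, tokenizer):
        self.encodings = tokenizer(texts, truncation=True, max_length=128)
    def __len__(self):
        return len(self.encodings["input_ids"])
    def __getitem__(self, idx):
        return {k: v[idx] for k, v in self.encodings.items()}

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

# Placeholder corpus; the real setup streams 5-core Amazon reviews plus Yelp data
reviews = ["Great battery life.", "The pasta was overcooked."]
dataset = ReviewDataset(reviews, tokenizer)

# Randomly masks 15% of tokens per batch, as in standard BERT pre-training
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

args = TrainingArguments(output_dir="bert-xd-post",  # placeholder path
                         num_train_epochs=4,
                         per_device_train_batch_size=8)
Trainer(model=model, args=args, data_collator=collator,
        train_dataset=dataset).train()
```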


## Instructions
Loading the post-trained weights is as simple as:

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("activebus/BERT-XD_Review")
model = AutoModel.from_pretrained("activebus/BERT-XD_Review")
```
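
As a quick sanity check, the sketch below continues from the snippet above and encodes a review (the example sentence is made up; the attribute access assumes a recent `transformers` version that returns `ModelOutput` objects):

```python
# Tokenize an illustrative review sentence (not from any dataset)
inputs = tokenizer("The battery life of this laptop is amazing!",
                   return_tensors="pt")

# Forward pass without gradient tracking, reusing the model loaded above
with torch.no_grad():
    outputs = model(**inputs)

# Contextual token embeddings: (batch_size, sequence_length, hidden_size)
print(outputs.last_hidden_state.shape)  # torch.Size([1, seq_len, 768])
```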


## Evaluation Results

Check our [NAACL paper](https://www.aclweb.org/anthology/N19-1242.pdf) for details.
`BERT_Review` is expected to perform similarly to `BERT-DK` on domain-specific tasks (such as aspect extraction), but much better on general tasks such as aspect sentiment classification, since different domains mostly share similar sentiment words.

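For a downstream task like aspect sentiment classification, the encoder can be fine-tuned with a classification head. The sketch below is hypothetical: the 3-way label set and the sentence/aspect input pairing are illustrative, and the head is randomly initialized until fine-tuned on labeled data.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("activebus/BERT-XD_Review")
# Attach a fresh 3-way head (e.g. negative/neutral/positive) to the encoder;
# it must be fine-tuned before its predictions mean anything.
clf = AutoModelForSequenceClassification.from_pretrained(
    "activebus/BERT-XD_Review", num_labels=3
)

# A common ABSA input format: the review sentence paired with the aspect term
inputs = tokenizer("The battery life is amazing!", "battery life",
                   return_tensors="pt")
with torch.no_grad():
    logits = clf(**inputs).logits
print(logits.shape)  # torch.Size([1, 3])
```
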

## Citation
If you find this work useful, please cite as follows.
```
@inproceedings{xu_bert2019,
    title = "BERT Post-Training for Review Reading Comprehension and Aspect-based Sentiment Analysis",
    author = "Xu, Hu and Liu, Bing and Shu, Lei and Yu, Philip S.",
    booktitle = "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics",
    month = "jun",
    year = "2019",
}
```