model documentation

#2
by nazneen - opened
Files changed (1)
  1. README.md +167 -49
README.md CHANGED
@@ -1,51 +1,169 @@
- This model provides a MobileBERT [(Sun et al., 2020)](https://arxiv.org/abs/2004.02984) fine-tuned on the SST data with three sentiments (0 -- negative, 1 -- neutral, and 2 -- positive).
-
- ## Example Usage
-
- Below, we provide illustrations on how to use this model to make sentiment predictions.

  ```python
- import torch
- from transformers import AutoTokenizer, AutoConfig, MobileBertForSequenceClassification
- # load model
- model_name = r'cambridgeltl/sst_mobilebert-uncased'
- tokenizer = AutoTokenizer.from_pretrained(model_name)
- config = AutoConfig.from_pretrained(model_name)
- model = MobileBertForSequenceClassification.from_pretrained(model_name, config=config)
- model.eval()
- '''
- labels:
- 0 -- negative
- 1 -- neutral
- 2 -- positive
- '''
-
- # prepare exemplar sentences
- batch_sentences = [
-     "in his first stab at the form , jacquot takes a slightly anarchic approach that works only sporadically .",
-     "a valueless kiddie paean to pro basketball underwritten by the nba .",
-     "a very well-made , funny and entertaining picture .",
- ]
-
- # prepare input
- inputs = tokenizer(batch_sentences, max_length=256, truncation=True, padding=True, return_tensors='pt')
- input_ids, attention_mask = inputs.input_ids, inputs.attention_mask
-
- # make predictions
- outputs = model(input_ids=input_ids, attention_mask=attention_mask)
- predictions = torch.argmax(outputs.logits, dim=-1)
- print(predictions)
- # tensor([1, 0, 2])
- ```
-
- ## Citation:
- If you find this model useful, please kindly cite our model as
-
- ```bibtex
- @misc{susstmobilebert,
-   author = {Su, Yixuan},
-   title = {A MobileBERT Fine-tuned on SST},
-   howpublished = {\url{https://huggingface.co/cambridgeltl/sst_mobilebert-uncased}},
-   year = 2022
- }
- ```
 
+ ---
+ tags:
+ - mobilebert
+ ---
+ # Model Card for sst_mobilebert-uncased
+
+ # Model Details
+
+ ## Model Description
+
+ This model is a MobileBERT [(Sun et al., 2020)](https://arxiv.org/abs/2004.02984) fine-tuned on the SST data with three sentiment labels (0 -- negative, 1 -- neutral, 2 -- positive).
+
+ - **Developed by:** Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, Denny Zhou
+ - **Shared by [Optional]:** [Vasily Shamporov](https://huggingface.co/vshampor)
+ - **Model type:** Text Classification
+ - **Language(s) (NLP):** More information needed
+ - **License:** More information needed
+ - **Related Models:** MobileBERT
+ - **Parent Model:** BERT
+ - **Resources for more information:**
+   - [Associated Paper](https://arxiv.org/pdf/2004.02984.pdf)
+
+ # Uses
+
+ ## Direct Use
+
+ This model can be used for the task of sequence classification: it predicts one of the three sentiment labels listed above for an input sentence.
+
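+ For a quick check, the checkpoint can be loaded through the `pipeline` API. This is a minimal sketch; the label names printed depend on the checkpoint's config and may appear as generic `LABEL_0`/`LABEL_1`/`LABEL_2`, following the 0/1/2 mapping above:
+
+ ```python
+ from transformers import pipeline
+
+ # Load the fine-tuned checkpoint as a text-classification pipeline.
+ classifier = pipeline("text-classification", model="cambridgeltl/sst_mobilebert-uncased")
+
+ print(classifier("a very well-made , funny and entertaining picture ."))
+ # e.g. [{'label': 'LABEL_2', 'score': ...}]
+ ```
+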
+ ## Downstream Use [Optional]
+
+ More information needed
+
+ ## Out-of-Scope Use
+
+ The model should not be used to intentionally create hostile or alienating environments for people.
+
+ # Bias, Risks, and Limitations
+
+ Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
+
+ ## Recommendations
+
+ Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
+
+ MobileBERT is a model with absolute position embeddings, so it is usually advisable to pad inputs on the right rather than the left, as in the sketch below. See the [MobileBERT model documentation](https://huggingface.co/docs/transformers/main/en/model_doc/mobilebert#transformers.MobileBertForSequenceClassification) for more information.
+
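+ A minimal sketch of the padding recommendation (`padding_side` is set explicitly for clarity; the checkpoint's tokenizer is assumed to already default to right padding):
+
+ ```python
+ from transformers import AutoTokenizer
+
+ tokenizer = AutoTokenizer.from_pretrained("cambridgeltl/sst_mobilebert-uncased")
+ # Pad on the right, as recommended for models with absolute position embeddings.
+ tokenizer.padding_side = "right"
+
+ batch = ["a short sentence", "a noticeably longer sentence that forces padding in this batch"]
+ inputs = tokenizer(batch, padding=True, truncation=True, return_tensors="pt")
+ print(inputs.input_ids.shape)  # both sequences padded to the same length
+ ```
+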
+ # Training Details
+
+ ## Training Data
+
+ More information needed
+
+ ## Training Procedure
+
+ ### Preprocessing
+
+ More information needed
+
+ ### Speeds, Sizes, Times
+
+ More information needed
+
+ # Evaluation
+
+ ## Testing Data, Factors & Metrics
+
+ ### Testing Data
+
+ More information needed
+
+ ### Factors
+
+ More information needed
+
+ ### Metrics
+
+ More information needed
+
+ ## Results
+
+ More information needed
+
+ # Model Examination
+
+ More information needed
+
+ # Environmental Impact
+
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
+
+ - **Hardware Type:** More information needed
+ - **Hours used:** More information needed
+ - **Cloud Provider:** More information needed
+ - **Compute Region:** More information needed
+ - **Carbon Emitted:** More information needed
+
+ # Technical Specifications [optional]
+
+ ## Model Architecture and Objective
+
+ More information needed
+
+ ## Compute Infrastructure
+
+ More information needed
+
+ ### Hardware
+
+ More information needed
+
+ ### Software
+
+ More information needed
+
+ # Citation
+
+ **BibTeX:**
+
+ ```bibtex
+ @misc{https://doi.org/10.48550/arxiv.2004.02984,
+   doi = {10.48550/ARXIV.2004.02984},
+   url = {https://arxiv.org/abs/2004.02984},
+   author = {Sun, Zhiqing and Yu, Hongkun and Song, Xiaodan and Liu, Renjie and Yang, Yiming and Zhou, Denny},
+   keywords = {Computation and Language (cs.CL), Machine Learning (cs.LG), FOS: Computer and information sciences},
+   title = {MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices},
+   publisher = {arXiv},
+   year = {2020}
+ }
+ ```
+
+ # Glossary [optional]
+
+ More information needed
+
+ # More Information [optional]
+
+ More information needed
+
+ # Model Card Authors [optional]
+
+ Language Technology Lab @ University of Cambridge, in collaboration with Ezi Ozoani and the Hugging Face team
+
+ # Model Card Contact
+
+ More information needed
+
+ # How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ <details>
+ <summary> Click to expand </summary>

  ```python
+ from transformers import AutoTokenizer, AutoModelForSequenceClassification
+
+ tokenizer = AutoTokenizer.from_pretrained("cambridgeltl/sst_mobilebert-uncased")
+ model = AutoModelForSequenceClassification.from_pretrained("cambridgeltl/sst_mobilebert-uncased")
+ ```
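+
+ Once the model is loaded, inference follows the standard sequence-classification pattern. A minimal sketch, reusing the label mapping from the original card:
+
+ ```python
+ import torch
+
+ # Tokenize a batch and pick the highest-scoring label per sentence.
+ inputs = tokenizer(["a very well-made , funny and entertaining picture ."],
+                    padding=True, truncation=True, return_tensors="pt")
+ with torch.no_grad():
+     logits = model(**inputs).logits
+ print(torch.argmax(logits, dim=-1))  # 0 -- negative, 1 -- neutral, 2 -- positive
+ ```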
+ </details>