hhschu nazneen committed on
Commit 88c7268
1 Parent(s): b60d566

model documentation (#1)


- model documentation (a653578ae8913b6593b9eaedd0d7603e812d5ca1)


Co-authored-by: Nazneen Rajani <nazneen@users.noreply.huggingface.co>

Files changed (1)
  1. README.md +163 -2
README.md CHANGED
@@ -5,10 +5,171 @@ tags:
 - mobilebert
 datasets:
 - multi_nli
+
 metrics:
 - accuracy
 ---

- # MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices

- This model is the Multi-Genre Natural Language Inference (MNLI) fine-turned version of the [uncased MobileBERT model](https://huggingface.co/google/mobilebert-uncased).
+ # Model Card for MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices
+
+ # Model Details
+
+ ## Model Description
+
+ This model is the Multi-Genre Natural Language Inference (MNLI) fine-tuned version of the [uncased MobileBERT model](https://huggingface.co/google/mobilebert-uncased).
+
+ - **Developed by:** Typeform
+ - **Shared by [Optional]:** Typeform
+ - **Model type:** Zero-Shot-Classification
+ - **Language(s) (NLP):** English
+ - **License:** More information needed
+ - **Parent Model:** [uncased MobileBERT model](https://huggingface.co/google/mobilebert-uncased)
+ - **Resources for more information:** More information needed
+
+ # Uses
+
+ ## Direct Use
+
+ This model can be used for the task of zero-shot classification.
+
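As a minimal illustration of this use (a sketch, not text from the committed card), the checkpoint can back the `transformers` zero-shot-classification pipeline; the input sentence and candidate labels below are invented:

```python
from transformers import pipeline

# Build a zero-shot classifier backed by the MNLI fine-tuned MobileBERT checkpoint
classifier = pipeline(
    "zero-shot-classification",
    model="typeform/mobilebert-uncased-mnli",
)

# Classify a sentence against candidate labels chosen at inference time;
# both the sentence and the labels are illustrative placeholders
result = classifier(
    "I have a problem with my iphone that needs to be resolved asap!",
    candidate_labels=["urgent", "not urgent", "phone", "tablet", "computer"],
)
print(result["labels"][0], result["scores"][0])
```
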
+ ## Downstream Use [Optional]
+
+ More information needed.
+
+ ## Out-of-Scope Use
+
+ The model should not be used to intentionally create hostile or alienating environments for people.
+
+ # Bias, Risks, and Limitations
+
+ Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
+
+ ## Recommendations
+
+ Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
+
+ # Training Details
+
+ ## Training Data
+
+ See [the multi_nli dataset card](https://huggingface.co/datasets/multi_nli) for more information.
+
+ ## Training Procedure
+
+ ### Preprocessing
+
+ More information needed
+
+ ### Speeds, Sizes, Times
+
+ More information needed
+
+ # Evaluation
+
+ ## Testing Data, Factors & Metrics
+
+ ### Testing Data
+
+ See [the multi_nli dataset card](https://huggingface.co/datasets/multi_nli) for more information.
+
+ ### Factors
+
+ More information needed
+
+ ### Metrics
+
+ More information needed
+
+ ## Results
+
+ More information needed
+
+ # Model Examination
+
+ More information needed
+
+ # Environmental Impact
+
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
+
+ - **Hardware Type:** More information needed
+ - **Hours used:** More information needed
+ - **Cloud Provider:** More information needed
+ - **Compute Region:** More information needed
+ - **Carbon Emitted:** More information needed
+
+ # Technical Specifications [optional]
+
+ ## Model Architecture and Objective
+
+ More information needed
+
+ ## Compute Infrastructure
+
+ More information needed
+
+ ### Hardware
+
+ More information needed
+
+ ### Software
+
+ More information needed.
+
+ # Citation
+
+ **BibTeX:**
+
+ More information needed
+
+ # Glossary [optional]
+
+ More information needed
+
+ # More Information [optional]
+
+ More information needed
+
+ # Model Card Authors [optional]
+
+ Typeform in collaboration with Ezi Ozoani and the Hugging Face team
+
+ # Model Card Contact
+
+ More information needed
+
+ # How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ <details>
+ <summary> Click to expand </summary>
+
+ ```python
+ from transformers import AutoTokenizer, AutoModelForSequenceClassification
+
+ # Load the tokenizer and the MNLI fine-tuned MobileBERT checkpoint from the Hugging Face Hub
+ tokenizer = AutoTokenizer.from_pretrained("typeform/mobilebert-uncased-mnli")
+ model = AutoModelForSequenceClassification.from_pretrained("typeform/mobilebert-uncased-mnli")
+ ```
+ </details>
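
As a follow-up usage sketch (not part of the committed card), the loaded tokenizer and model can score an MNLI-style premise/hypothesis pair; the example texts are invented, and the label names are read from the checkpoint's config rather than assumed:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("typeform/mobilebert-uncased-mnli")
model = AutoModelForSequenceClassification.from_pretrained("typeform/mobilebert-uncased-mnli")

# Invented MNLI-style input: the premise is encoded first, the hypothesis second
premise = "The new coffee shop on Main Street is crowded every morning."
hypothesis = "People visit the coffee shop in the mornings."

inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# Report a probability for each class, using the label names stored in the model config
probs = logits.softmax(dim=-1)[0]
for label_id, prob in enumerate(probs.tolist()):
    print(f"{model.config.id2label[label_id]}: {prob:.3f}")
```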