asahi417 and nazneen committed
Commit cad2e27
1 Parent(s): 122d8bf

model documentation (#2)


- model documentation (8f5a36dbfb5a272c98ae43de0a2b9e8a02924fc5)


Co-authored-by: Nazneen Rajani <nazneen@users.noreply.huggingface.co>

Files changed (1)
  1. README.md +192 -4
README.md CHANGED
@@ -1,11 +1,199 @@
- # XLM-RoBERTa for NER
- XLM-RoBERTa finetuned on NER. Check more detail at [TNER repository](https://github.com/asahi417/tner).
 
- ## Usage
  ```
  from transformers import AutoTokenizer, AutoModelForTokenClassification
 
  tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-base-ontonotes5")
 
  model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-base-ontonotes5")
- ```
+ ---
+ language:
+ - en
+ ---
+ # Model Card for XLM-RoBERTa for NER
+
+ XLM-RoBERTa finetuned on NER.
+
+ # Model Details
+
+ ## Model Description
+
+ XLM-RoBERTa finetuned on NER.
+ - **Developed by:** Asahi Ushio
+ - **Shared by [Optional]:** Hugging Face
+ - **Model type:** Token Classification
+ - **Language(s) (NLP):** en
+ - **License:** More information needed
+ - **Related Models:** XLM-RoBERTa
+ - **Parent Model:** XLM-RoBERTa
+ - **Resources for more information:**
+   - [GitHub Repo](https://github.com/asahi417/tner)
+   - [Associated Paper](https://arxiv.org/abs/2209.12616)
+   - [Space](https://huggingface.co/spaces/akdeniz27/turkish-named-entity-recognition)
+
+ # Uses
+
+
+ ## Direct Use
+ Token Classification
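A minimal sketch of direct use via the generic `transformers` token-classification pipeline (the example sentence is illustrative, and `aggregation_strategy` is optional):

```python
# Minimal sketch using the generic transformers pipeline (not a TNER-specific API).
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="asahi417/tner-xlm-roberta-base-ontonotes5",
    aggregation_strategy="simple",  # merge sub-word pieces into entity spans
)

# Illustrative sentence; any English text works.
print(ner("Jacob Collier won a Grammy in Los Angeles last year."))
```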
+
+
+ ## Downstream Use [Optional]
+
+ This model can be used in conjunction with the [tner library](https://github.com/asahi417/tner).
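A rough sketch of such downstream use, assuming the `TransformersNER` interface described in the TNER repository (class and method names may differ between tner versions):

```python
# Sketch based on the TNER repository README; the TransformersNER class and
# predict() signature are assumptions and may vary across tner versions.
from tner import TransformersNER

model = TransformersNER("asahi417/tner-xlm-roberta-base-ontonotes5")
# predict() takes a list of input sentences (illustrative example below).
print(model.predict(["Jacob Collier is a Grammy awarded artist from London"]))
```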
+
+ ## Out-of-Scope Use
+
+
+ The model should not be used to intentionally create hostile or alienating environments for people.
+
+ # Bias, Risks, and Limitations
+
+
+ Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
+
+
+ ## Recommendations
+
+
+ Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
+
+
+ # Training Details
+
+ ## Training Data
+
+ An NER dataset contains a sequence of tokens and tags for each split (usually `train`/`validation`/`test`),
+ ```python
+ {
+     'train': {
+         'tokens': [
+             ['@paulwalk', 'It', "'s", 'the', 'view', 'from', 'where', 'I', "'m", 'living', 'for', 'two', 'weeks', '.', 'Empire', 'State', 'Building', '=', 'ESB', '.', 'Pretty', 'bad', 'storm', 'here', 'last', 'evening', '.'],
+             ['From', 'Green', 'Newsfeed', ':', 'AHFA', 'extends', 'deadline', 'for', 'Sage', 'Award', 'to', 'Nov', '.', '5', 'http://tinyurl.com/24agj38'], ...
+         ],
+         'tags': [
+             [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 2, 2, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0],
+             [0, 0, 0, 0, 3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], ...
+         ]
+     },
+     'validation': ...,
+     'test': ...,
+ }
  ```
+ with a dictionary to map a label to its index (`label2id`) as below.
+ ```python
+ {"O": 0, "B-ORG": 1, "B-MISC": 2, "B-PER": 3, "I-PER": 4, "B-LOC": 5, "I-ORG": 6, "I-MISC": 7, "I-LOC": 8}
+ ```
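To decode the integer tags, the `label2id` mapping can be inverted into an `id2label` lookup, as in this minimal sketch:

```python
# Invert label2id so the integer tags above can be read as label strings.
label2id = {"O": 0, "B-ORG": 1, "B-MISC": 2, "B-PER": 3, "I-PER": 4,
            "B-LOC": 5, "I-ORG": 6, "I-MISC": 7, "I-LOC": 8}
id2label = {index: label for label, index in label2id.items()}

# First tag sequence from the example above.
tags = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 2, 2, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0]
print([id2label[t] for t in tags])
```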
+
+
+
+
+ ## Training Procedure
+
+ ### Preprocessing
+
+ More information needed
+
+ ### Speeds, Sizes, Times
+
+ - **layer_norm_eps:** 1e-05
+ - **num_attention_heads:** 12
+ - **num_hidden_layers:** 12
+ - **vocab_size:** 250002
+
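These values can be read back from the model configuration; a small illustrative check with `transformers`:

```python
# Illustrative check: read the hyperparameters listed above from the model config.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("asahi417/tner-xlm-roberta-base-ontonotes5")
print(config.layer_norm_eps, config.num_attention_heads,
      config.num_hidden_layers, config.vocab_size)
```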
+ # Evaluation
+
+
+ ## Testing Data, Factors & Metrics
+
+ ### Testing Data
+
+ See the [dataset card](https://github.com/asahi417/tner/blob/master/DATASET_CARD.md) for the full list of datasets.
+
+ ### Factors
+ More information needed
+
+ ### Metrics
+
+ More information needed
+
+ ## Results
+
+ More information needed
+
+ # Model Examination
+ More information needed
+
+ # Environmental Impact
+
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
+
+ - **Hardware Type:** More information needed
+ - **Hours used:** More information needed
+ - **Cloud Provider:** More information needed
+ - **Compute Region:** More information needed
+ - **Carbon Emitted:** More information needed
+
+ # Technical Specifications [optional]
+
+ ## Model Architecture and Objective
+
+ More information needed
+
+ ## Compute Infrastructure
+ More information needed
+
+ ### Hardware
+
+ More information needed
+
+ ### Software
+
+ More information needed
+
+ # Citation
+
+
+ **BibTeX:**
+
+ ```bibtex
+ @inproceedings{ushio-camacho-collados-2021-ner,
+     title = "{T}-{NER}: An All-Round Python Library for Transformer-based Named Entity Recognition",
+     author = "Ushio, Asahi and
+       Camacho-Collados, Jose",
+     booktitle = "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations",
+     month = apr,
+     year = "2021",
+     address = "Online",
+     publisher = "Association for Computational Linguistics",
+     url = "https://www.aclweb.org/anthology/2021.eacl-demos.7",
+     pages = "53--62",
+ }
+ ```
+
+
+ # Glossary [optional]
+
+ More information needed
+
+ # More Information [optional]
+ More information needed
+
+ # Model Card Authors [optional]
+
+ Asahi Ushio in collaboration with Ezi Ozoani and the Hugging Face team.
+
+ # Model Card Contact
+
+ More information needed
+
+ # How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ <details>
+ <summary> Click to expand </summary>
+
+ ```python
  from transformers import AutoTokenizer, AutoModelForTokenClassification
 
  tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-base-ontonotes5")
 
  model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-base-ontonotes5")
+
+ ```
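A short, illustrative continuation with the tokenizer and model loaded above (assuming a PyTorch backend and that the config carries an `id2label` mapping) to tag a sentence:

```python
# Illustrative sketch: tag one sentence and map predicted ids to labels via the
# config's id2label (assumed to hold the NER tag set for this checkpoint).
import torch

text = "Jacob Collier lives in London."   # made-up example sentence
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

pred_ids = logits.argmax(dim=-1)[0].tolist()
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, pred in zip(tokens, pred_ids):
    print(token, model.config.id2label[pred])
```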
+
+ </details>