---
license: apache-2.0
base_model: numind/NuNER-v1.0
tags:
- token-classification
- ner
- named-entity-recognition
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: nuner-v1_ontonotes5
  results:
  - task:
      type: token-classification
      name: Named Entity Recognition
    dataset:
      name: OntoNotes5
      type: tner/ontonotes5
      split: eval
    metrics:
    - type: f1
      value: 0.890930568316052
      name: F1
    - type: precision
      value: 0.8777586206896552
      name: Precision
    - type: recall
      value: 0.9045038642622368
      name: Recall
    - type: accuracy
      value: 0.9818887790313181
      name: Accuracy
datasets:
- tner/ontonotes5
language:
- en
library_name: transformers
pipeline_tag: token-classification
widget:
- text: Concern and scepticism surround Niger uranium mining waste storage plans. Towering mounds dot the desert landscape in northern Niger's Arlit region, but they are heaps of partially radioactive waste left from four decades of operations at one of the world's biggest uranium mines. An ambitious 10-year scheme costing $160 million is underway to secure the waste and avoid risks to health and the environment, but many local people are worried or sceptical. France's nuclear giant Areva, now called Orano, worked the area under a subsidiary, the Akouta Mining Company (Cominak). Cominak closed the site in 2021 after extracting 75,000 tonnes of uranium, much of which went to fuelling the scores of nuclear reactors that provide the backbone of France's electricity supply. Cominak's director general Mahaman Sani Abdoulaye showcased the rehabilitation project to the first French journalists to visit the site since 2010, when seven Areva employees were kidnapped by jihadists.
- text: SE Michigan counties allege insulin gouging; Localities file lawsuit against pharmaceutical makers. Four metro Detroit counties filed federal lawsuits Wednesday against some of the nation's biggest pharmaceutical manufacturers and pharmacy benefit managers alleging illegal price fixing for insulin products. Macomb, Monroe, Wayne and Washtenaw counties filed the lawsuits in U.S. District Court in New Jersey against more than a dozen companies, including Lilly, Sanofi Aventis, Novo Nordisk, Express Scripts, Optum Rx and CVS Caremark, per their attorneys. "These are the first such lawsuits that have been filed in the state of Michigan and probably more to come," said attorney Melvin Butch Hollowell of the Miller Law Firm. He described the allegations during a news conference, saying that nationally "the pharmacies and manufacturers get together. They control about 90% of the market each, of the insulin market. They talk to each other secretly. And they jack up the prices through anticompetitive means. And what we've seen is over the past 20 years, when we talk about jacking up the prices, they jack them up 1,500% in the last 20 years. 1,500%."
- text: Foreign governments may be spying on your smartphone notifications, senator says. Washington (CNN)  Foreign governments have reportedly attempted to spy on iPhone and Android users through the mobile app notifications they receive on their smartphones - and the US government has forced Apple and Google to keep quiet about it, according to a top US senator. Through legal demands sent to the tech giants, governments have allegedly tried to force Apple and Google to turn over sensitive information that could include the contents of a notification - such as previews of a text message displayed on a lock screen, or an update about app activity, Oregon Democratic Sen. Ron Wyden said in a new report. Wyden's report reflects the latest example of long-running tensions between tech companies and governments over law enforcement demands, which have stretched on for more than a decade. Governments around the world have particularly battled with tech companies over encryption, which provides critical protections to users and businesses while in some cases preventing law enforcement from pursuing investigations into messages sent over the internet.
- text: Tech giants ‘could severely disable UK spooks from stopping online harms’. Silicon Valley tech giants’ actions could “severely disable” UK spooks from preventing harm caused by online paedophiles and fraudsters, Suella Braverman  has suggested. The Conservative former home secretary named Facebook owner Meta , and Apple, and their use of technologies such as end-to-end encryption as a threat to attempts to tackle digital crimes. She claimed the choice to back these technologies without “safeguards” could “enable and indeed facilitate some of the worst atrocities that our brave men and women in law enforcement agencies deal with every day”, as MPs  began considering changes to investigatory powers laws. The Investigatory Powers (Amendment) Bill  includes measures to make it easier for agencies to examine and retain bulk datasets, such as publicly available online telephone records, and would allow intelligence agencies to use internet connection records to aid detection of their targets. We know that the terrorists, the serious organised criminals, and fraudsters, and the online paedophiles, all take advantage of the dark web and encrypted spaces
- text: Camargo Corrêa asks Toffoli to suspend the fine agreed with Lava Jato. The Camargo Corrêa group has asked Justice Dias Toffoli to suspend the R$1.4 billion fine it agreed to pay in its leniency agreement under Operation Car Wash. The company asked for an extension of the minister's decisions that benefited J&F and Odebrecht. Like the other companies, it claimed that it suffered undue pressure from members of the Federal Public Prosecutor's Office (MPF) to close the deal. Much of the request is based on messages exchanged between prosecutors from the Curitiba task force and former judge Sergio Moro - Camargo Corrêa requested full access to the material, seized in Operation Spoofing, which arrested the hackers who broke into cell phones. The dialogues, according to the group's defense, indicate that the executives did not freely agree to the deal, since they were the targets of lawsuits and pre-trial detentions.
---

# numind/NuNER-v1.0 fine-tuned on OntoNotes5

This is a [NuNER](https://arxiv.org/abs/2402.15343) model fine-tuned on the [OntoNotes5](https://huggingface.co/datasets/tner/ontonotes5) dataset for Named Entity Recognition. NuNER uses [RoBERTa-base](https://huggingface.co/FacebookAI/roberta-base) as its backbone encoder and was pre-trained on the [NuNER dataset](https://huggingface.co/datasets/numind/NuNER), a large and diverse collection of 1M sentences synthetically labeled by gpt-3.5-turbo-0301. This further pre-training phase produces high-quality token embeddings, which make a good starting point for fine-tuning on more specialized datasets.

## Model Details

The model was fine-tuned as a regular BERT-style token-classification model for the NER task using the Hugging Face `Trainer` class.

## Model labels

Entity Types: CARDINAL, DATE, PERSON, NORP, GPE, LAW, PERCENT, ORDINAL, MONEY, WORK_OF_ART, FAC, TIME, QUANTITY, PRODUCT, LANGUAGE, ORG, LOC, EVENT
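Assuming the standard BIO tagging scheme used by the `tner/ontonotes5` label set, the 18 entity types above expand into 37 token-level labels. A minimal sketch of that expansion:

```python
# The 18 OntoNotes5 entity types listed above
entity_types = [
    "CARDINAL", "DATE", "PERSON", "NORP", "GPE", "LAW", "PERCENT",
    "ORDINAL", "MONEY", "WORK_OF_ART", "FAC", "TIME", "QUANTITY",
    "PRODUCT", "LANGUAGE", "ORG", "LOC", "EVENT",
]

# Under BIO, each type gets a B- (begin) and I- (inside) tag, plus a
# single O tag for non-entity tokens: 2 * 18 + 1 = 37 labels.
labels = ["O"] + [f"{prefix}-{t}" for t in entity_types for prefix in ("B", "I")]
print(len(labels))  # 37
```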

## Uses

### Direct Use for Inference

```python
>>> from transformers import pipeline

>>> text = """Foreign governments may be spying on your smartphone notifications, senator says. Washington (CNN) — Foreign governments have reportedly attempted to spy on iPhone and Android users through the mobile app notifications they receive on their smartphones - and the US government has forced Apple and Google to keep quiet about it, according to a top US senator. Through legal demands sent to the tech giants, governments have allegedly tried to force Apple and Google to turn over sensitive information that could include the contents of a notification - such as previews of a text message displayed on a lock screen, or an update about app activity, Oregon Democratic Sen. Ron Wyden said in a new report. Wyden's report reflects the latest example of long-running tensions between tech companies and governments over law enforcement demands, which have stretched on for more than a decade. Governments around the world have particularly battled with tech companies over encryption, which provides critical protections to users and businesses while in some cases preventing law enforcement from pursuing investigations into messages sent over the internet."""

>>> classifier = pipeline(
...     "ner",
...     model="guishe/nuner-v1_ontonotes5",
...     aggregation_strategy="simple",
... )
>>> classifier(text)

[{'entity_group': 'GPE',
  'score': 0.99179757,
  'word': ' Washington',
  'start': 82,
  'end': 92},
 {'entity_group': 'ORG',
  'score': 0.9535868,
  'word': 'CNN',
  'start': 94,
  'end': 97},
 {'entity_group': 'PRODUCT',
  'score': 0.6833637,
  'word': ' iPhone',
  'start': 157,
  'end': 163},
 {'entity_group': 'PRODUCT',
  'score': 0.5540275,
  'word': ' Android',
  'start': 168,
  'end': 175},
 {'entity_group': 'GPE',
  'score': 0.98848885,
  'word': ' US',
  'start': 263,
  'end': 265},
 {'entity_group': 'ORG',
  'score': 0.9939406,
  'word': ' Apple',
  'start': 288,
  'end': 293},
 {'entity_group': 'ORG',
  'score': 0.9933014,
  'word': ' Google',
  'start': 298,
  'end': 304},
 {'entity_group': 'GPE',
  'score': 0.99083686,
  'word': ' US',
  'start': 348,
  'end': 350},
 {'entity_group': 'ORG',
  'score': 0.99349517,
  'word': ' Apple',
  'start': 449,
  'end': 454},
 {'entity_group': 'ORG',
  'score': 0.99239254,
  'word': ' Google',
  'start': 459,
  'end': 465},
 {'entity_group': 'GPE',
  'score': 0.99598336,
  'word': ' Oregon',
  'start': 649,
  'end': 655},
 {'entity_group': 'NORP',
  'score': 0.99030787,
  'word': ' Democratic',
  'start': 656,
  'end': 666},
 {'entity_group': 'PERSON',
  'score': 0.9957912,
  'word': ' Ron Wyden',
  'start': 672,
  'end': 681},
 {'entity_group': 'PERSON',
  'score': 0.83941424,
  'word': ' Wyden',
  'start': 704,
  'end': 709},
 {'entity_group': 'DATE',
  'score': 0.87746465,
  'word': ' more than a decade',
  'start': 869,
  'end': 887}]
```


## Training Details

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 4
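
The hyperparameters above map onto `TrainingArguments` roughly as sketched below. This is a hedged reconstruction, not the exact training script, and the `output_dir` is a placeholder:

```python
from transformers import TrainingArguments

# Approximate reconstruction of the hyperparameters listed above.
args = TrainingArguments(
    output_dir="nuner-v1_ontonotes5",   # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    gradient_accumulation_steps=2,       # effective train batch size: 64
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=4,
)
```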

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0781        | 1.0   | 936  | 0.0754          | 0.8392    | 0.8843 | 0.8612 | 0.9778   |
| 0.049         | 2.0   | 1873 | 0.0685          | 0.8597    | 0.8935 | 0.8763 | 0.9794   |
| 0.0357        | 3.0   | 2809 | 0.0714          | 0.8608    | 0.9016 | 0.8807 | 0.9806   |
| 0.027         | 4.0   | 3744 | 0.0728          | 0.8712    | 0.9000 | 0.8853 | 0.9811   |


### Framework versions

- Transformers 4.36.0
- Pytorch 2.0.0+cu117
- Datasets 2.18.0
- Tokenizers 0.15.2

## Citation

### BibTeX
```bibtex
@misc{bogdanov2024nuner,
      title={NuNER: Entity Recognition Encoder Pre-training via LLM-Annotated Data}, 
      author={Sergei Bogdanov and Alexandre Constantin and Timothée Bernard and Benoit Crabbé and Etienne Bernard},
      year={2024},
      eprint={2402.15343},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```