---
language:
- en
license: mit
tags:
- token-classification
- entity-recognition
- foundation-model
- feature-extraction
- BERT
- generic
datasets:
- numind/NuNER
pipeline_tag: token-classification
inference: false
---

# SOTA Entity Recognition English Foundation Model by NuMind 🔥

This is the **BERT** model from our [**paper**](https://arxiv.org/abs/2402.15343): **NuNER: Entity Recognition Encoder Pre-training via LLM-Annotated Data**.

<u>**This is the model used in Section 4.2 when comparing against TadNER.**</u>

For the other sections, [NuNER v1.0](https://huggingface.co/numind/NuNER-v1.0) is used.

**Check out other models by NuMind:**
* SOTA Multilingual Entity Recognition Foundation Model: [link](https://huggingface.co/numind/entity-recognition-multilingual-general-sota-v1)
* SOTA Sentiment Analysis Foundation Model: [English](https://huggingface.co/numind/generic-sentiment-v1), [Multilingual](https://huggingface.co/numind/generic-sentiment-multi-v1)

## About

This model is [bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) fine-tuned on the [NuNER dataset](https://huggingface.co/datasets/numind/NuNER).
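
To take a look at the pre-training data, the dataset can be loaded with the 🤗 `datasets` library (a minimal sketch; the split name is an assumption, see the dataset card for details):

```python
from datasets import load_dataset

# NuNER annotation data used for pre-training (split name assumed).
dataset = load_dataset("numind/NuNER", split="train")
print(dataset[0])  # inspect one LLM-annotated example
```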

**Metrics:**

Read more about the evaluation protocol and datasets in Section 4.2 of our [paper](https://arxiv.org/abs/2402.15343).

## Usage

Embeddings can be used out of the box or fine-tuned on specific datasets; a fine-tuning sketch follows the snippet below.

Get embeddings:

```python
import torch
import transformers


model = transformers.AutoModel.from_pretrained(
    'numind/NuNER-BERT-v1.0',
    output_hidden_states=True
)
tokenizer = transformers.AutoTokenizer.from_pretrained(
    'numind/NuNER-BERT-v1.0'
)

text = [
    "NuMind is an AI company based in Paris and USA.",
    "See other models from us on https://huggingface.co/numind"
]
encoded_input = tokenizer(
    text,
    return_tensors='pt',
    padding=True,
    truncation=True
)
# Run inference without tracking gradients.
model.eval()
with torch.no_grad():
    output = model(**encoded_input)

# For better quality: concatenate the last hidden layer with an
# intermediate layer (-7), giving embeddings of shape
# (batch, seq_len, 2 * hidden_size).
emb = torch.cat(
    (output.hidden_states[-1], output.hidden_states[-7]),
    dim=2
)

# For better speed: use the last hidden layer alone.
# emb = output.hidden_states[-1]
```
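
Fine-tuning typically amounts to training a small token-classification head on top of these embeddings. A minimal sketch (the label set and the `head` module are illustrative assumptions, not part of this repository):

```python
import torch

# Hypothetical linear head over the concatenated embeddings from above.
# The label inventory below is an assumed example, not a released head.
num_labels = 5  # e.g. O, B-PER, I-PER, B-ORG, I-ORG
hidden_size = model.config.hidden_size  # 768 for bert-base-uncased

# `emb` has shape (batch, seq_len, 2 * hidden_size) because two hidden
# layers were concatenated along the feature dimension.
head = torch.nn.Linear(2 * hidden_size, num_labels)

logits = head(emb)                   # (batch, seq_len, num_labels)
predictions = logits.argmax(dim=-1)  # per-token label ids
# (For actual training, compute `emb` with gradients enabled, i.e.
# outside the torch.no_grad() block above.)
```

In practice the head (and optionally the encoder) would be trained with a standard cross-entropy loss over token labels; freezing the encoder is the cheapest option and still benefits from the NuNER pre-training.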

## Citation

```
@misc{bogdanov2024nuner,
      title={NuNER: Entity Recognition Encoder Pre-training via LLM-Annotated Data}, 
      author={Sergei Bogdanov and Alexandre Constantin and Timothée Bernard and Benoit Crabbé and Etienne Bernard},
      year={2024},
      eprint={2402.15343},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```