---
language:
- en
license: mit
tags:
- token-classification
- entity-recognition
- foundation-model
- feature-extraction
- BERT
- generic
datasets:
- numind/NuNER
pipeline_tag: token-classification
inference: false
---

# SOTA Entity Recognition English Foundation Model by NuMind 🔥

This is the **BERT** model from our [**Paper**](https://arxiv.org/abs/2402.15343): **NuNER: Entity Recognition Encoder Pre-training via LLM-Annotated Data**

**This is the model used in Section 4.2 when comparing against TadNER.** For the other sections, [NuNER v1.0](https://huggingface.co/numind/NuNER-v1.0) is used.

**Check out other models by NuMind:**

* SOTA Multilingual Entity Recognition Foundation Model: [link](https://huggingface.co/numind/entity-recognition-multilingual-general-sota-v1)
* SOTA Sentiment Analysis Foundation Model: [English](https://huggingface.co/numind/generic-sentiment-v1), [Multilingual](https://huggingface.co/numind/generic-sentiment-multi-v1)

## About

[bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) fine-tuned on [NuNER data](https://huggingface.co/datasets/numind/NuNER).

**Metrics:**

Read more about the evaluation protocol and datasets in Section 4.2 of our [paper](https://arxiv.org/abs/2402.15343).

## Usage

Embeddings can be used out of the box or fine-tuned on specific datasets; a fine-tuning sketch is given at the end of this card.

Get embeddings:

```python
import torch
import transformers

# Expose all hidden states so intermediate layers can be combined below
model = transformers.AutoModel.from_pretrained(
    'numind/NuNER-BERT-v1.0',
    output_hidden_states=True
)
tokenizer = transformers.AutoTokenizer.from_pretrained(
    'numind/NuNER-BERT-v1.0'
)

text = [
    "NuMind is an AI company based in Paris and USA.",
    "See other models from us on https://huggingface.co/numind"
]

encoded_input = tokenizer(
    text,
    return_tensors='pt',
    padding=True,
    truncation=True
)
output = model(**encoded_input)

# for better quality: concatenate the last layer with an intermediate layer
emb = torch.cat(
    (output.hidden_states[-1], output.hidden_states[-7]),
    dim=2
)

# for better speed
# emb = output.hidden_states[-1]
```

## Citation

```
@misc{bogdanov2024nuner,
      title={NuNER: Entity Recognition Encoder Pre-training via LLM-Annotated Data},
      author={Sergei Bogdanov and Alexandre Constantin and Timothée Bernard and Benoit Crabbé and Etienne Bernard},
      year={2024},
      eprint={2402.15343},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
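
Fine-tuning sketch: the encoder can also be loaded with a token-classification head and trained on a labelled NER dataset. The sketch below is illustrative only and is not part of the original card; the label set, toy example, and optimiser settings are hypothetical placeholders, and in practice you would use your own dataset with a full training loop (for example `transformers.Trainer`).

```python
import torch
import transformers

# Hypothetical label set; replace with the labels of your own dataset
labels = ["O", "B-ORG", "I-ORG", "B-LOC", "I-LOC"]

tokenizer = transformers.AutoTokenizer.from_pretrained('numind/NuNER-BERT-v1.0')
model = transformers.AutoModelForTokenClassification.from_pretrained(
    'numind/NuNER-BERT-v1.0',
    num_labels=len(labels),
    id2label=dict(enumerate(labels)),
    label2id={l: i for i, l in enumerate(labels)},
)

# One toy example: word-level labels aligned to word pieces, with -100
# (the default ignore index) on special and continuation tokens
words = ["NuMind", "is", "based", "in", "Paris", "."]
word_labels = ["B-ORG", "O", "O", "O", "B-LOC", "O"]

encoding = tokenizer(words, is_split_into_words=True, return_tensors="pt")
aligned = []
previous_word = None
for word_id in encoding.word_ids(batch_index=0):
    if word_id is None or word_id == previous_word:
        aligned.append(-100)  # special token or continuation word piece
    else:
        aligned.append(labels.index(word_labels[word_id]))
    previous_word = word_id
encoding["labels"] = torch.tensor([aligned])

# A single optimisation step, just to show the training interface
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)
model.train()
loss = model(**encoding).loss
loss.backward()
optimizer.step()
print(f"loss: {loss.item():.4f}")
```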