Commit ed920d8 by Ihor (1 parent: 2fb11aa)

Update README.md

---
license: apache-2.0
language:
- multilingual
library_name: gliner
datasets:
- urchade/pile-mistral-v0.1
- numind/NuNER
- knowledgator/GLINER-multi-task-synthetic-data
pipeline_tag: token-classification
tags:
- NER
- GLiNER
- information extraction
- entity extraction
- encoder
---

# About

GLiNER is a Named Entity Recognition (NER) model capable of identifying any entity type using bidirectional transformer encoders (BERT-like). It provides a practical alternative to traditional NER models, which are limited to predefined entity types, and to Large Language Models (LLMs), which, despite their flexibility, are too costly and large for resource-constrained scenarios.

This particular version utilizes a bi-encoder architecture with post-fusion, where the textual encoder is [DeBERTa v3 base](https://huggingface.co/microsoft/deberta-v3-base) and the entity label encoder is a sentence transformer, [BGE-small-en](https://huggingface.co/BAAI/bge-small-en-v1.5).

Such an architecture brings several advantages over uni-encoder GLiNER:
* An unlimited number of entity types can be recognized at a single time;
* Faster inference when entity embeddings are precomputed;
* Better generalization to unseen entities.

The post-fusion strategy improves on a classical bi-encoder by enabling better inter-label understanding.
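
The bi-encoder idea above can be sketched in a few lines of NumPy (a toy illustration, not the actual GLiNER implementation): text spans and entity labels are encoded independently, so label embeddings can be cached, and a post-fusion interaction step scores every span against every label.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 8  # toy embedding size

def encode_spans(n_spans: int) -> np.ndarray:
    """Stand-in for the text encoder (DeBERTa in GLiNER): one vector per candidate span."""
    return rng.normal(size=(n_spans, DIM))

def encode_labels(n_labels: int) -> np.ndarray:
    """Stand-in for the label encoder (BGE in GLiNER): one vector per entity type, cacheable."""
    return rng.normal(size=(n_labels, DIM))

def post_fusion_scores(spans: np.ndarray, labels: np.ndarray) -> np.ndarray:
    # Post-fusion: spans and labels interact only after both are encoded.
    # Here the interaction is cosine similarity; the real model uses a
    # learned scoring layer instead.
    spans = spans / np.linalg.norm(spans, axis=1, keepdims=True)
    labels = labels / np.linalg.norm(labels, axis=1, keepdims=True)
    return spans @ labels.T  # shape: (n_spans, n_labels)

scores = post_fusion_scores(encode_spans(5), encode_labels(3))
print(scores.shape)  # (5, 3): every span scored against every label
```

Because the label side is computed independently, adding more entity types only grows the cached label matrix; the text encoder does not need to be re-run.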

### Usage
Once you've installed the GLiNER library, you can import the GLiNER class, load this model with `GLiNER.from_pretrained`, and predict entities with `predict_entities`.

```python
from gliner import GLiNER

model = GLiNER.from_pretrained("knowledgator/gliner-poly-base-v1.0")

text = """
Cristiano Ronaldo dos Santos Aveiro (Portuguese pronunciation: [kɾiʃˈtjɐnu ʁɔˈnaldu]; born 5 February 1985) is a Portuguese professional footballer who plays as a forward for and captains both Saudi Pro League club Al Nassr and the Portugal national team. Widely regarded as one of the greatest players of all time, Ronaldo has won five Ballon d'Or awards,[note 3] a record three UEFA Men's Player of the Year Awards, and four European Golden Shoes, the most by a European player. He has won 33 trophies in his career, including seven league titles, five UEFA Champions Leagues, the UEFA European Championship and the UEFA Nations League. Ronaldo holds the records for most appearances (183), goals (140) and assists (42) in the Champions League, goals in the European Championship (14), international goals (128) and international appearances (205). He is one of the few players to have made over 1,200 professional career appearances, the most by an outfield player, and has scored over 850 official senior career goals for club and country, making him the top goalscorer of all time.
"""

labels = ["person", "award", "date", "competitions", "teams"]

entities = model.predict_entities(text, labels, threshold=0.25)

for entity in entities:
    print(entity["text"], "=>", entity["label"])
```

```
Cristiano Ronaldo dos Santos Aveiro => person
5 February 1985 => date
Al Nassr => teams
Portugal national team => teams
Ballon d'Or => award
UEFA Men's Player of the Year Awards => award
European Golden Shoes => award
UEFA Champions Leagues => competitions
UEFA European Championship => competitions
UEFA Nations League => competitions
Champions League => competitions
European Championship => competitions
```
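
Each prediction is a dict with at least `"text"` and `"label"` keys, as the output above shows. If you want the results grouped per entity type, a small helper like this works (a sketch on sample data; in practice you would feed it the list returned by `predict_entities`):

```python
from collections import defaultdict

# Sample predictions shaped like the output above; in practice this list
# comes from model.predict_entities(...).
entities = [
    {"text": "Cristiano Ronaldo dos Santos Aveiro", "label": "person"},
    {"text": "5 February 1985", "label": "date"},
    {"text": "Al Nassr", "label": "teams"},
    {"text": "Portugal national team", "label": "teams"},
]

def group_by_label(entities):
    """Collect entity texts under their predicted label."""
    grouped = defaultdict(list)
    for entity in entities:
        grouped[entity["label"]].append(entity["text"])
    return dict(grouped)

grouped = group_by_label(entities)
print(grouped["teams"])  # ['Al Nassr', 'Portugal national team']
```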

If you have a large number of entity types and want to pre-embed them, refer to the following code snippet:

```python
labels = ["your entities"]
texts = ["your texts"]

entity_embeddings = model.encode_labels(labels, batch_size=8)

outputs = model.batch_predict_with_embeds(texts, entity_embeddings, labels)
```
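
Since the label embeddings do not depend on the input texts, you can compute them once and persist them between runs. Below is a minimal, generic disk-cache sketch; it assumes the object returned by `encode_labels` is picklable, and the file name is arbitrary:

```python
import pickle
from pathlib import Path

def cached(path, compute):
    """Load a pickled object from `path`, computing and saving it on a miss.

    Useful when `compute` is expensive, e.g. a call to
    model.encode_labels(labels) over thousands of entity types.
    """
    path = Path(path)
    if path.exists():
        return pickle.loads(path.read_bytes())
    value = compute()
    path.write_bytes(pickle.dumps(value))
    return value

# Toy usage with a cheap stand-in for encode_labels:
embeddings = cached("label_embeddings.pkl", lambda: {"person": [0.1, 0.2]})
```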

### Benchmarks
Below is a table with benchmarking results on various named entity recognition datasets:

| Dataset | Score |
|---------|-------|
| ACE 2004 | 25.4% |
| ACE 2005 | 27.2% |
| AnatEM | 17.7% |
| Broad Tweet Corpus | 70.2% |
| CoNLL 2003 | 67.8% |
| FabNER | 22.9% |
| FindVehicle | 40.2% |
| GENIA_NER | 47.7% |
| HarveyNER | 15.5% |
| MultiNERD | 64.5% |
| Ontonotes | 28.7% |
| PolyglotNER | 47.5% |
| TweetNER7 | 39.3% |
| WikiANN en | 56.7% |
| WikiNeural | 80.0% |
| bc2gm | 56.2% |
| bc4chemd | 48.7% |
| bc5cdr | 60.5% |
| ncbi | 53.5% |
| **Average** | **45.8%** |
|||
| CrossNER_AI | 48.9% |
| CrossNER_literature | 64.0% |
| CrossNER_music | 68.7% |
| CrossNER_politics | 69.0% |
| CrossNER_science | 62.7% |
| mit-movie | 40.3% |
| mit-restaurant | 36.2% |
| **Average (zero-shot benchmark)** | **55.7%** |
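
As a sanity check, the two reported averages follow directly from the per-dataset scores:

```python
# Scores from the first block of the table above.
supervised = [25.4, 27.2, 17.7, 70.2, 67.8, 22.9, 40.2, 47.7, 15.5,
              64.5, 28.7, 47.5, 39.3, 56.7, 80.0, 56.2, 48.7, 60.5, 53.5]
# Scores from the zero-shot block.
zero_shot = [48.9, 64.0, 68.7, 69.0, 62.7, 40.3, 36.2]

print(round(sum(supervised) / len(supervised), 1))  # 45.8
print(round(sum(zero_shot) / len(zero_shot), 1))    # 55.7
```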

### Join Our Discord

Connect with our community on Discord for news, support, and discussion about our models. Join [Discord](https://discord.gg/dkyeAgs9DG).