---
tags:
- spacy
- token-classification
language:
- en
license: mit
model-index:
- name: en_core_web_lg
  results:
  - task:
      name: NER
      type: token-classification
    metrics:
    - name: NER Precision
      type: precision
      value: 0.8574246409
    - name: NER Recall
      type: recall
      value: 0.8490084135
    - name: NER F Score
      type: f_score
      value: 0.8531957725
  - task:
      name: POS
      type: token-classification
    metrics:
    - name: POS Accuracy
      type: accuracy
      value: 0.9741780493
  - task:
      name: SENTER
      type: token-classification
    metrics:
    - name: SENTER Precision
      type: precision
      value: 0.9179358172
    - name: SENTER Recall
      type: recall
      value: 0.8906260307
    - name: SENTER F Score
      type: f_score
      value: 0.9040747313
  - task:
      name: UNLABELED_DEPENDENCIES
      type: token-classification
    metrics:
    - name: Unlabeled Dependencies Accuracy
      type: accuracy
      value: 0.9200593914
  - task:
      name: LABELED_DEPENDENCIES
      type: token-classification
    metrics:
    - name: Labeled Dependencies Accuracy
      type: accuracy
      value: 0.9200593914
---
Details: https://spacy.io/models/en#en_core_web_lg
English pipeline optimized for CPU. Components: tok2vec, tagger, parser, senter, ner, attribute_ruler, lemmatizer.
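As a minimal usage sketch (assuming the package is installed, e.g. via `python -m spacy download en_core_web_lg`; the example sentence is chosen here only for illustration):

```python
import spacy

# Load the installed pipeline package.
nlp = spacy.load("en_core_web_lg")

doc = nlp("Apple is looking at buying U.K. startup for $1 billion.")

# Named entities predicted by the `ner` component.
for ent in doc.ents:
    print(ent.text, ent.label_)

# Per-token annotations from the tagger, parser and lemmatizer.
for token in doc:
    print(token.text, token.pos_, token.tag_, token.dep_, token.lemma_)
```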
Feature | Description |
---|---|
Name | en_core_web_lg |
Version | 3.2.0 |
spaCy | >=3.2.0,<3.3.0 |
Default Pipeline | `tok2vec`, `tagger`, `parser`, `attribute_ruler`, `lemmatizer`, `ner` |
Components | `tok2vec`, `tagger`, `parser`, `senter`, `attribute_ruler`, `lemmatizer`, `ner` |
Vectors | 684830 keys, 684830 unique vectors (300 dimensions) |
Sources | OntoNotes 5 (Ralph Weischedel, Martha Palmer, Mitchell Marcus, Eduard Hovy, Sameer Pradhan, Lance Ramshaw, Nianwen Xue, Ann Taylor, Jeff Kaufman, Michelle Franchini, Mohammed El-Bachouti, Robert Belvin, Ann Houston); ClearNLP Constituent-to-Dependency Conversion (Emory University); WordNet 3.0 (Princeton University); GloVe Common Crawl (Jeffrey Pennington, Richard Socher, and Christopher D. Manning) |
License | MIT |
Author | Explosion |
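Since the package ships 300-dimensional static word vectors (see the Vectors row above), vector attributes and similarity methods work without extra setup. A short sketch, with example words chosen only for illustration:

```python
import spacy

nlp = spacy.load("en_core_web_lg")
doc = nlp("cats and dogs")

# In-vocabulary tokens carry a 300-dimensional GloVe vector.
print(doc[0].has_vector)          # True
print(doc[0].vector.shape)        # (300,)

# Cosine similarity computed from the static vectors;
# higher for semantically related words.
print(doc[0].similarity(doc[2]))
```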
Label Scheme (114 labels for 4 components)
Component | Labels |
---|---|
tagger | `$`, `''`, `,`, `-LRB-`, `-RRB-`, `.`, `:`, `ADD`, `AFX`, `CC`, `CD`, `DT`, `EX`, `FW`, `HYPH`, `IN`, `JJ`, `JJR`, `JJS`, `LS`, `MD`, `NFP`, `NN`, `NNP`, `NNPS`, `NNS`, `PDT`, `POS`, `PRP`, `PRP$`, `RB`, `RBR`, `RBS`, `RP`, `SYM`, `TO`, `UH`, `VB`, `VBD`, `VBG`, `VBN`, `VBP`, `VBZ`, `WDT`, `WP`, `WP$`, `WRB`, `XX`, ``` `` ``` |
parser | `ROOT`, `acl`, `acomp`, `advcl`, `advmod`, `agent`, `amod`, `appos`, `attr`, `aux`, `auxpass`, `case`, `cc`, `ccomp`, `compound`, `conj`, `csubj`, `csubjpass`, `dative`, `dep`, `det`, `dobj`, `expl`, `intj`, `mark`, `meta`, `neg`, `nmod`, `npadvmod`, `nsubj`, `nsubjpass`, `nummod`, `oprd`, `parataxis`, `pcomp`, `pobj`, `poss`, `preconj`, `predet`, `prep`, `prt`, `punct`, `quantmod`, `relcl`, `xcomp` |
senter | `I`, `S` |
ner | `CARDINAL`, `DATE`, `EVENT`, `FAC`, `GPE`, `LANGUAGE`, `LAW`, `LOC`, `MONEY`, `NORP`, `ORDINAL`, `ORG`, `PERCENT`, `PERSON`, `PRODUCT`, `QUANTITY`, `TIME`, `WORK_OF_ART` |
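The label sets above can also be inspected at runtime, and `spacy.explain` returns a short gloss for most labels; for example:

```python
import spacy

nlp = spacy.load("en_core_web_lg")

# Each trained component exposes its labels.
print(nlp.get_pipe("ner").labels)     # ('CARDINAL', 'DATE', ..., 'WORK_OF_ART')
print(nlp.get_pipe("parser").labels)

# Short descriptions for individual labels.
print(spacy.explain("GPE"))           # countries, cities, states
print(spacy.explain("npadvmod"))      # noun phrase as adverbial modifier
```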
Accuracy
Type | Score |
---|---|
`TOKEN_ACC` | 99.93 |
`TOKEN_P` | 99.57 |
`TOKEN_R` | 99.58 |
`TOKEN_F` | 99.57 |
`TAG_ACC` | 97.42 |
`SENTS_P` | 91.79 |
`SENTS_R` | 89.06 |
`SENTS_F` | 90.41 |
`DEP_UAS` | 92.01 |
`DEP_LAS` | 90.22 |
`ENTS_P` | 85.74 |
`ENTS_R` | 84.90 |
`ENTS_F` | 85.32 |