---
tags:
- spacy
- token-classification
language:
- en
license: mit
model-index:
- name: en_statistics
  results:
  - task:
      name: NER
      type: token-classification
    metrics:
    - name: NER Precision
      type: precision
      value: 0.853733758
    - name: NER Recall
      type: recall
      value: 0.8456530449
    - name: NER F Score
      type: f_score
      value: 0.8496741892
  - task:
      name: TAG
      type: token-classification
    metrics:
    - name: TAG (XPOS) Accuracy
      type: accuracy
      value: 0.9727831973
  - task:
      name: UNLABELED_DEPENDENCIES
      type: token-classification
    metrics:
    - name: Unlabeled Attachment Score (UAS)
      type: f_score
      value: 0.9186878782
  - task:
      name: LABELED_DEPENDENCIES
      type: token-classification
    metrics:
    - name: Labeled Attachment Score (LAS)
      type: f_score
      value: 0.9005160534
  - task:
      name: SENTS
      type: token-classification
    metrics:
    - name: Sentences F-Score
      type: f_score
      value: 0.8923519379
---

English pipeline that provides statistics, readability and formality scores.

| Feature | Description |
| --- | --- |
| **Name** | `en_statistics` |
| **Version** | `0.0.1` |
| **spaCy** | `>=3.1.1,<3.2.0` |
| **Default Pipeline** | `tok2vec`, `tagger`, `parser`, `attribute_ruler`, `lemmatizer`, `ner`, `syllables`, `formality`, `readability` |
| **Components** | `tok2vec`, `tagger`, `parser`, `senter`, `attribute_ruler`, `lemmatizer`, `ner`, `syllables`, `formality`, `readability` |
| **Vectors** | 684830 keys, 20000 unique vectors (300 dimensions) |
| **Sources** | [OntoNotes 5](https://catalog.ldc.upenn.edu/LDC2013T19) (Ralph Weischedel, Martha Palmer, Mitchell Marcus, Eduard Hovy, Sameer Pradhan, Lance Ramshaw, Nianwen Xue, Ann Taylor, Jeff Kaufman, Michelle Franchini, Mohammed El-Bachouti, Robert Belvin, Ann Houston)<br />[ClearNLP Constituent-to-Dependency Conversion](https://github.com/clir/clearnlp-guidelines/blob/master/md/components/dependency_conversion.md) (Emory University)<br />[WordNet 3.0](https://wordnet.princeton.edu/) (Princeton University)<br />[GloVe Common Crawl](https://nlp.stanford.edu/projects/glove/) (Jeffrey Pennington, Richard Socher, and Christopher D. Manning) |
| **License** | `MIT` |
| **Author** | [Contentologie]() |
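Once the package is installed, it loads like any other spaCy pipeline. Below is a minimal usage sketch: the standard annotations (tags, dependencies, entities) are accessed as usual, while the exact extension attributes registered by the custom `syllables`, `formality` and `readability` components are not documented here, so the commented-out names are assumptions for illustration only.

```python
import spacy

# Assumes the en_statistics package (version 0.0.1) has been installed,
# e.g. via pip from the distributed wheel or tarball.
nlp = spacy.load("en_statistics")

doc = nlp("The quick brown fox jumps over the lazy dog.")

# Standard annotations from the tagger, parser and NER components.
print([(token.text, token.tag_, token.dep_) for token in doc])
print([(ent.text, ent.label_) for ent in doc.ents])

# The custom components typically expose their scores as extension
# attributes; the names below are assumptions, not confirmed by this card.
# print(doc._.readability)
# print(doc._.formality)
```

### Label Scheme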
<details>

<summary>View label scheme (114 labels for 4 components)</summary>

| Component | Labels |
| --- | --- |
| **`tagger`** | `$`, `''`, `,`, `-LRB-`, `-RRB-`, `.`, `:`, `ADD`, `AFX`, `CC`, `CD`, `DT`, `EX`, `FW`, `HYPH`, `IN`, `JJ`, `JJR`, `JJS`, `LS`, `MD`, `NFP`, `NN`, `NNP`, `NNPS`, `NNS`, `PDT`, `POS`, `PRP`, `PRP$`, `RB`, `RBR`, `RBS`, `RP`, `SYM`, `TO`, `UH`, `VB`, `VBD`, `VBG`, `VBN`, `VBP`, `VBZ`, `WDT`, `WP`, `WP$`, `WRB`, `XX`, ``` `` ``` |
| **`parser`** | `ROOT`, `acl`, `acomp`, `advcl`, `advmod`, `agent`, `amod`, `appos`, `attr`, `aux`, `auxpass`, `case`, `cc`, `ccomp`, `compound`, `conj`, `csubj`, `csubjpass`, `dative`, `dep`, `det`, `dobj`, `expl`, `intj`, `mark`, `meta`, `neg`, `nmod`, `npadvmod`, `nsubj`, `nsubjpass`, `nummod`, `oprd`, `parataxis`, `pcomp`, `pobj`, `poss`, `preconj`, `predet`, `prep`, `prt`, `punct`, `quantmod`, `relcl`, `xcomp` |
| **`senter`** | `I`, `S` |
| **`ner`** | `CARDINAL`, `DATE`, `EVENT`, `FAC`, `GPE`, `LANGUAGE`, `LAW`, `LOC`, `MONEY`, `NORP`, `ORDINAL`, `ORG`, `PERCENT`, `PERSON`, `PRODUCT`, `QUANTITY`, `TIME`, `WORK_OF_ART` |

</details>
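The same label scheme can also be inspected at runtime; a short sketch using spaCy's `Language.pipe_labels` property:

```python
import spacy

nlp = spacy.load("en_statistics")

# pipe_labels maps each trained component name to the labels it predicts,
# mirroring the table above (tagger, parser, senter, ner).
for pipe_name, labels in nlp.pipe_labels.items():
    print(f"{pipe_name}: {len(labels)} labels")
```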
### Accuracy

| Type | Score |
| --- | --- |
| `TOKEN_ACC` | 99.93 |
| `TAG_ACC` | 97.28 |
| `DEP_UAS` | 91.87 |
| `DEP_LAS` | 90.05 |
| `ENTS_P` | 85.37 |
| `ENTS_R` | 84.57 |
| `ENTS_F` | 84.97 |
| `SENTS_P` | 90.49 |
| `SENTS_R` | 88.01 |
| `SENTS_F` | 89.24 |
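The entity and sentence F-scores above are the harmonic mean of the corresponding precision and recall; a quick sanity check against the NER row:

```python
# NER precision and recall as reported in the metadata (ENTS_P, ENTS_R).
precision, recall = 0.853733758, 0.8456530449

# F-score is the harmonic mean of precision and recall.
f_score = 2 * precision * recall / (precision + recall)
print(round(f_score * 100, 2))  # 84.97, matching ENTS_F
```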