---
tags:
- spacy
- token-classification
language:
- en
license: mit
model-index:
- name: en_core_web_sm
  results:
  - task:
      name: NER
      type: token-classification
    metrics:
    - name: NER Precision
      type: precision
      value: 0.8463095057
    - name: NER Recall
      type: recall
      value: 0.8377904647
    - name: NER F Score
      type: f_score
      value: 0.8420284384
  - task:
      name: POS
      type: token-classification
    metrics:
    - name: POS Accuracy
      type: accuracy
      value: 0.9725066923
  - task:
      name: SENTER
      type: token-classification
    metrics:
    - name: SENTER Precision
      type: precision
      value: 0.9205049471
    - name: SENTER Recall
      type: recall
      value: 0.8899003892
    - name: SENTER F Score
      type: f_score
      value: 0.904943986
  - task:
      name: UNLABELED_DEPENDENCIES
      type: token-classification
    metrics:
    - name: Unlabeled Dependencies Accuracy
      type: accuracy
      value: 0.9166876131
  - task:
      name: LABELED_DEPENDENCIES
      type: token-classification
    metrics:
    - name: Labeled Dependencies Accuracy
      type: accuracy
      value: 0.9166876131
---
### Details: https://spacy.io/models/en#en_core_web_sm

English pipeline optimized for CPU. Components: tok2vec, tagger, parser, senter, ner, attribute_ruler, lemmatizer.

| Feature | Description |
| --- | --- |
| **Name** | `en_core_web_sm` |
| **Version** | `3.2.0` |
| **spaCy** | `>=3.2.0,<3.3.0` |
| **Default Pipeline** | `tok2vec`, `tagger`, `parser`, `attribute_ruler`, `lemmatizer`, `ner` |
| **Components** | `tok2vec`, `tagger`, `parser`, `senter`, `attribute_ruler`, `lemmatizer`, `ner` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | [OntoNotes 5](https://catalog.ldc.upenn.edu/LDC2013T19) (Ralph Weischedel, Martha Palmer, Mitchell Marcus, Eduard Hovy, Sameer Pradhan, Lance Ramshaw, Nianwen Xue, Ann Taylor, Jeff Kaufman, Michelle Franchini, Mohammed El-Bachouti, Robert Belvin, Ann Houston)<br />[ClearNLP Constituent-to-Dependency Conversion](https://github.com/clir/clearnlp-guidelines/blob/master/md/components/dependency_conversion.md) (Emory University)<br />[WordNet 3.0](https://wordnet.princeton.edu/) (Princeton University) |
| **License** | `MIT` |
| **Author** | [Explosion](https://explosion.ai) |
### Label Scheme

<details>

<summary>View label scheme (114 labels for 4 components)</summary>

| Component | Labels |
| --- | --- |
| **`tagger`** | `$`, `''`, `,`, `-LRB-`, `-RRB-`, `.`, `:`, `ADD`, `AFX`, `CC`, `CD`, `DT`, `EX`, `FW`, `HYPH`, `IN`, `JJ`, `JJR`, `JJS`, `LS`, `MD`, `NFP`, `NN`, `NNP`, `NNPS`, `NNS`, `PDT`, `POS`, `PRP`, `PRP$`, `RB`, `RBR`, `RBS`, `RP`, `SYM`, `TO`, `UH`, `VB`, `VBD`, `VBG`, `VBN`, `VBP`, `VBZ`, `WDT`, `WP`, `WP$`, `WRB`, `XX`, ```` |
| **`parser`** | `ROOT`, `acl`, `acomp`, `advcl`, `advmod`, `agent`, `amod`, `appos`, `attr`, `aux`, `auxpass`, `case`, `cc`, `ccomp`, `compound`, `conj`, `csubj`, `csubjpass`, `dative`, `dep`, `det`, `dobj`, `expl`, `intj`, `mark`, `meta`, `neg`, `nmod`, `npadvmod`, `nsubj`, `nsubjpass`, `nummod`, `oprd`, `parataxis`, `pcomp`, `pobj`, `poss`, `preconj`, `predet`, `prep`, `prt`, `punct`, `quantmod`, `relcl`, `xcomp` |
| **`senter`** | `I`, `S` |
| **`ner`** | `CARDINAL`, `DATE`, `EVENT`, `FAC`, `GPE`, `LANGUAGE`, `LAW`, `LOC`, `MONEY`, `NORP`, `ORDINAL`, `ORG`, `PERCENT`, `PERSON`, `PRODUCT`, `QUANTITY`, `TIME`, `WORK_OF_ART` |

</details>
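These are the standard OntoNotes (NER), Penn Treebank (tagger), and ClearNLP (parser) label names. spaCy's built-in glossary can turn most of them into a short human-readable description; a small sketch (no model needs to be loaded for this):

```python
import spacy

# spacy.explain looks a label up in spaCy's built-in glossary;
# it returns None for labels it does not know.
for label in ("GPE", "NORP", "nsubj", "VBZ"):
    print(label, "->", spacy.explain(label))
```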
### Accuracy

| Type | Score |
| --- | --- |
| `TOKEN_ACC` | 99.93 |
| `TOKEN_P` | 99.57 |
| `TOKEN_R` | 99.58 |
| `TOKEN_F` | 99.57 |
| `TAG_ACC` | 97.25 |
| `SENTS_P` | 92.05 |
| `SENTS_R` | 88.99 |
| `SENTS_F` | 90.49 |
| `DEP_UAS` | 91.67 |
| `DEP_LAS` | 89.80 |
| `ENTS_P` | 84.63 |
| `ENTS_R` | 83.78 |
| `ENTS_F` | 84.20 |
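These scores are the full-precision values from the metadata above, rendered as percentages. The F-scores are the harmonic mean of the corresponding precision and recall, which can be checked directly; a quick sanity check using the NER values:

```python
# ENTS_F should be the harmonic mean of ENTS_P and ENTS_R,
# using the full-precision NER values from the metadata.
p = 0.8463095057  # NER precision
r = 0.8377904647  # NER recall
f = 2 * p * r / (p + r)
print(round(f * 100, 2))  # matches ENTS_F = 84.20
```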