//- 💫 DOCS > USAGE > WHAT'S NEW IN V2.0 > NEW FEATURES

p
    |  This section contains an overview of the most important
    |  #[strong new features and improvements]. The #[+a("/api") API docs]
    |  include additional deprecation notes. New methods and functions that
    |  were introduced in this version are marked with a
    |  #[span.u-text-tag.u-text-tag--spaced v2.0] tag.

+h(3, "features-models") Convolutional neural network models

+aside-code("Example", "bash")
    for _, lang in MODELS
        if lang != "xx"
            | python -m spacy download #{lang}  # default #{LANGUAGES[lang]} model!{'\n'}
    | python -m spacy download xx_ent_wiki_sm  # multi-language NER

p
    |  spaCy v2.0 features new neural models for tagging,
    |  parsing and entity recognition. The models have
    |  been designed and implemented from scratch specifically for spaCy, to
    |  give you an unmatched balance of speed, size and accuracy. The new
    |  models are #[strong 10&times; smaller], #[strong 20% more accurate],
    |  and #[strong even cheaper to run] than the previous generation.

p
    |  spaCy v2.0's new neural network models bring significant improvements in
    |  accuracy, especially for English Named Entity Recognition. The new
    |  #[+a("/models/en#en_core_web_lg") #[code en_core_web_lg]] model makes
    |  about #[strong 25% fewer mistakes] than the corresponding v1.x model and
    |  is within #[strong 1% of the current state-of-the-art]
    |  (#[+a("https://arxiv.org/pdf/1702.02098.pdf") Strubell et al., 2017]).
    |  The v2.0 models are also cheaper to run at scale, as they require
    |  #[strong under 1 GB of memory] per process.
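
p
    |  For example, once a model has been downloaded, you can load it and
    |  inspect its entity predictions straight away. This is only a minimal
    |  sketch – the predicted entities depend on the model and input text.

+code.
    import spacy

    nlp = spacy.load('en_core_web_lg')
    doc = nlp(u'Apple was founded by Steve Jobs in California.')
    for ent in doc.ents:
        print(ent.text, ent.label_)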

+infobox
    |  #[+label-inline Usage:] #[+a("/models") Models directory],
    |  #[+a("/models/comparison") Models comparison],
    |  #[+a("#benchmarks") Benchmarks]

+h(3, "features-pipelines") Improved processing pipelines

+aside-code("Example").
    # Set custom attributes
    Doc.set_extension('my_attr', default=False)
    Token.set_extension('my_attr', getter=my_token_getter)
    doc._.my_attr = True
    assert doc._.my_attr and token._.my_attr

    # Add components to the pipeline
    my_component = lambda doc: doc
    nlp.add_pipe(my_component)

p
    |  It's now much easier to #[strong customise the pipeline] with your own
    |  components: functions that receive a #[code Doc] object, modify and
    |  return it. Extensions let you attach any
    |  #[strong attributes, properties and methods] to the #[code Doc],
    |  #[code Token] and #[code Span]. You can add data, implement new
    |  features, integrate other libraries with spaCy or plug in your own
    |  machine learning models.
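
p
    |  As a minimal sketch, a single component can combine both features:
    |  process each #[code Doc] and write the result to a custom attribute.
    |  The component and attribute names below are made up for illustration.

+code.
    from spacy.tokens import Doc

    Doc.set_extension('has_number', default=False)

    def number_detector(doc):
        # flag the doc if any token looks like a number
        doc._.has_number = any(token.like_num for token in doc)
        return doc

    nlp.add_pipe(number_detector, last=True)
    doc = nlp(u'This costs 25 dollars.')
    assert doc._.has_number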

+image
    include ../../assets/img/pipeline.svg

+infobox
    |  #[+label-inline API:] #[+api("language") #[code Language]],
    |  #[+api("doc#set_extension") #[code Doc.set_extension]],
    |  #[+api("span#set_extension") #[code Span.set_extension]],
    |  #[+api("token#set_extension") #[code Token.set_extension]]
    |  #[+label-inline Usage:]
    |  #[+a("/usage/processing-pipelines") Processing pipelines]
    |  #[+label-inline Code:]
    |  #[+src("/usage/examples#section-pipeline") Pipeline examples]

+h(3, "features-text-classification") Text classification

+aside-code("Example").
    textcat = nlp.create_pipe('textcat')
    textcat.add_label('POSITIVE')  # register labels before training
    nlp.add_pipe(textcat, last=True)
    optimizer = nlp.begin_training()
    for itn in range(100):
        for doc, gold in train_data:
            nlp.update([doc], [gold], sgd=optimizer)
    doc = nlp(u'This is a text.')
    print(doc.cats)

p
    |  spaCy v2.0 lets you add text categorization models to spaCy pipelines.
    |  The model supports classification with multiple, non-mutually
    |  exclusive labels, so several categories can apply to the same
    |  document at once. You can change the model architecture rather
    |  easily, but by default, the
    |  #[code TextCategorizer] class uses a convolutional neural network to
    |  assign position-sensitive vectors to each word in the document.
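
p
    |  Annotations and predictions use the same simple format: a dictionary
    |  keyed by label, with scores between #[code 0] and #[code 1]. The
    |  labels below are invented for illustration – several of them can
    |  score high on the same document.

+code.
    from spacy.gold import GoldParse

    textcat = nlp.create_pipe('textcat')
    textcat.add_label('URGENT')
    textcat.add_label('BILLING')
    nlp.add_pipe(textcat, last=True)
    # both labels can be "true" for the same text
    doc = nlp.make_doc(u'My payment failed, please help!')
    gold = GoldParse(doc, cats={'URGENT': 1.0, 'BILLING': 1.0})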

+infobox
    |  #[+label-inline API:] #[+api("textcategorizer") #[code TextCategorizer]],
    |  #[+api("doc#attributes") #[code Doc.cats]],
    |  #[+api("goldparse#attributes") #[code GoldParse.cats]]#[br]
    |  #[+label-inline Usage:] #[+a("/usage/training#textcat") Training a text classification model]

+h(3, "features-hash-ids") Hash values instead of integer IDs

+aside-code("Example").
    doc = nlp(u'I love coffee')
    assert doc.vocab.strings[u'coffee'] == 3197928453018144401
    assert doc.vocab.strings[3197928453018144401] == u'coffee'

    beer_hash = doc.vocab.strings.add(u'beer')
    assert doc.vocab.strings[u'beer'] == beer_hash
    assert doc.vocab.strings[beer_hash] == u'beer'

p
    |  The #[+api("stringstore") #[code StringStore]] now resolves all strings
    |  to hash values instead of integer IDs. This means that the string-to-int
    |  mapping #[strong no longer depends on the vocabulary state], making a lot
    |  of workflows much simpler, especially during training. Unlike integer IDs
    |  in spaCy v1.x, hash values will #[strong always match] – even across
    |  models. Strings can now be added explicitly using the new
    |  #[+api("stringstore#add") #[code Stringstore.add]] method. A token's hash
    |  is available via #[code token.orth].
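
p
    |  Because the hashes are computed from the strings themselves, the
    |  mapping works in both directions and stays stable across sessions
    |  and models. A quick sketch:

+code.
    doc = nlp(u'I love coffee')
    for token in doc:
        # token.orth is the hash of token.orth_, the original string
        assert doc.vocab.strings[token.orth_] == token.orth
        assert doc.vocab.strings[token.orth] == token.orth_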

+infobox
    |  #[+label-inline API:] #[+api("stringstore") #[code StringStore]]
    |  #[+label-inline Usage:] #[+a("/usage/spacy-101#vocab") Vocab, hashes and lexemes 101]

+h(3, "features-vectors") Improved word vectors support

+aside-code("Example").
    for word, vector in vector_data:
        nlp.vocab.set_vector(word, vector)
    nlp.vocab.vectors.from_glove('/path/to/vectors')
    # keep 10000 unique vectors and remap the rest
    nlp.vocab.prune_vectors(10000)
    nlp.to_disk('/model')

p
    |  The new #[+api("vectors") #[code Vectors]] class helps the
    |  #[code Vocab] manage the vectors assigned to strings, and lets you
    |  assign vectors individually, or
    |  #[+a("/usage/vectors-similarity#custom-loading-glove") load in GloVe vectors]
    |  from a directory. To help you strike a good balance between coverage
    |  and memory usage, the #[code Vectors] class lets you map
    |  #[strong multiple keys] to the #[strong same row] of the table. If
    |  you're using the #[+api("cli#vocab") #[code spacy vocab]] command to
    |  create a vocabulary, pruning the vectors will be taken care of
    |  automatically. Otherwise, you can use the new
    |  #[+api("vocab#prune_vectors") #[code Vocab.prune_vectors]] method.

+infobox
    |  #[+label-inline API:] #[+api("vectors") #[code Vectors]],
    |  #[+api("vocab") #[code Vocab]]
    |  #[+label-inline Usage:] #[+a("/usage/vectors-similarity") Word vectors and semantic similarity]

+h(3, "features-serializer") Saving, loading and serialization

+aside-code("Example").
    nlp = spacy.load('en') # shortcut link
    nlp = spacy.load('en_core_web_sm') # package
    nlp = spacy.load('/path/to/en') # unicode path
    nlp = spacy.load(Path('/path/to/en')) # pathlib Path

    nlp.to_disk('/path/to/nlp')
    nlp = English().from_disk('/path/to/nlp')

p
    |  spaCy's serialization API has been made consistent across classes and
    |  objects. All container classes, i.e. #[code Language], #[code Doc],
    |  #[code Vocab] and #[code StringStore], now have #[code to_bytes()],
    |  #[code from_bytes()], #[code to_disk()] and #[code from_disk()]
    |  methods and support the Pickle protocol.

p
    |  The improved #[code spacy.load] makes loading models easier and more
    |  transparent. You can load a model by supplying its
    |  #[+a("/usage/models#usage") shortcut link], the name of an installed
    |  #[+a("/models") model package] or a path. The #[code Language] class to
    |  initialise will be determined based on the model's settings. For a
    |  blank language, you can import the class directly, e.g.
    |  #[code.u-break from spacy.lang.en import English] or use
    |  #[+api("spacy#blank") #[code spacy.blank()]].

+infobox
    |  #[+label-inline API:] #[+api("spacy#load") #[code spacy.load]],
    |  #[+api("language#to_disk") #[code Language.to_disk]]
    |  #[+label-inline Usage:] #[+a("/usage/models#usage") Models],
    |  #[+a("/usage/training#saving-loading") Saving and loading]

+h(3, "features-displacy") displaCy visualizer with Jupyter support

+aside-code("Example").
    from spacy import displacy
    doc = nlp(u'This is a sentence about Facebook.')
    displacy.serve(doc, style='dep') # run the web server
    html = displacy.render(doc, style='ent') # generate HTML

p
    |  Our popular dependency and named entity visualizers are now an official
    |  part of the spaCy library. displaCy can run a simple web server, or
    |  generate raw HTML markup or SVG files to be exported. You can pass in one
    |  or more docs, and customise the style. displaCy also auto-detects whether
    |  you're running #[+a("https://jupyter.org") Jupyter] and will render the
    |  visualizations in your notebook.
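
p
    |  #[code displacy.render] also accepts a list of #[code Doc] objects
    |  and a dictionary of display options – for example, the dependency
    |  visualizer's #[code 'compact'] mode. A short sketch:

+code.
    from spacy import displacy

    doc1 = nlp(u'This is a sentence.')
    doc2 = nlp(u'And this is another one.')
    html = displacy.render([doc1, doc2], style='dep', page=True,
                           options={'compact': True})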

+infobox
    |  #[+label-inline API:] #[+api("top-level#displacy") #[code displacy]]
    |  #[+label-inline Usage:] #[+a("/usage/visualizers") Visualizing spaCy]

+h(3, "features-language") Improved language data and lazy loading

p
    |  Language-specific data now lives in its own submodule, #[code spacy.lang].
    |  Languages are lazy-loaded, i.e. only loaded when you import a
    |  #[code Language] class, or load a model that initialises one. This allows
    |  languages to contain more custom data, e.g. lemmatizer lookup tables, or
    |  complex regular expressions. The language data has also been tidied up
    |  and simplified. spaCy now also supports simple lookup-based
    |  lemmatization – and #[strong #{LANG_COUNT} languages] in total!
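
p
    |  For example, if all you need is a blank pipeline with the English
    |  language data, you can import the class directly or create an
    |  instance from the language code:

+code.
    import spacy
    from spacy.lang.en import English

    nlp = English()          # import the class directly
    nlp = spacy.blank('en')  # or create a blank instance by language code
    doc = nlp(u'This only tokenizes – no model required.')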

+infobox
    |  #[+label-inline API:] #[+api("language") #[code Language]]
    |  #[+label-inline Code:] #[+src(gh("spaCy", "spacy/lang")) #[code spacy/lang]]
    |  #[+label-inline Usage:] #[+a("/usage/adding-languages") Adding languages]

+h(3, "features-matcher") Revised matcher API and phrase matcher

+aside-code("Example").
    from spacy.matcher import Matcher, PhraseMatcher

    matcher = Matcher(nlp.vocab)
    matcher.add('HEARTS', None, [{'ORTH': '❤️', 'OP': '+'}])

    phrasematcher = PhraseMatcher(nlp.vocab)
    phrasematcher.add('OBAMA', None, nlp(u"Barack Obama"))

p
    |  Patterns can now be added to the matcher by calling
    |  #[+api("matcher#add") #[code matcher.add()]] with a match ID, an optional
    |  callback function to be invoked on each match, and one or more patterns.
    |  This allows you to write powerful, pattern-specific logic using only one
    |  matcher. For example, you might only want to merge some entity types,
    |  and set custom flags for other matched patterns. The new
    |  #[+api("phrasematcher") #[code PhraseMatcher]] lets you efficiently
    |  match very large terminology lists using #[code Doc] objects as match
    |  patterns.
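
p
    |  The optional callback receives the matcher, the #[code Doc] and the
    |  position of the current match, so each pattern can trigger its own
    |  logic. The callback below just prints the matched span – a stand-in
    |  for merging, flagging or any other processing.

+code.
    from spacy.matcher import Matcher

    def print_match(matcher, doc, i, matches):
        # called once for each match found in the doc
        match_id, start, end = matches[i]
        print(doc[start:end].text)

    matcher = Matcher(nlp.vocab)
    matcher.add('HELLO_WORLD', print_match,
                [{'LOWER': 'hello'}, {'LOWER': 'world'}])
    matches = matcher(nlp(u'Hello world!'))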

+infobox
    |  #[+label-inline API:] #[+api("matcher") #[code Matcher]],
    |  #[+api("phrasematcher") #[code PhraseMatcher]]
    |  #[+label-inline Usage:]
    |  #[+a("/usage/linguistic-features#rule-based-matching") Rule-based matching]
