HerBERT
This model was released on 2020-05-01 and added to Hugging Face Transformers on 2020-11-16.
Overview
The HerBERT model was proposed in KLEJ: Comprehensive Benchmark for Polish Language Understanding by Piotr Rybak, Robert Mroczkowski, Janusz Tracz, and Ireneusz Gawlik. It is a BERT-based language model trained on Polish corpora using only the MLM objective, with dynamic masking of whole words.
The abstract from the paper is the following:
In recent years, a series of Transformer-based models unlocked major improvements in general natural language understanding (NLU) tasks. Such a fast pace of research would not be possible without general NLU benchmarks, which allow for a fair comparison of the proposed methods. However, such benchmarks are available only for a handful of languages. To alleviate this issue, we introduce a comprehensive multi-task benchmark for the Polish language understanding, accompanied by an online leaderboard. It consists of a diverse set of tasks, adopted from existing datasets for named entity recognition, question-answering, textual entailment, and others. We also introduce a new sentiment analysis task for the e-commerce domain, named Allegro Reviews (AR). To ensure a common evaluation scheme and promote models that generalize to different NLU tasks, the benchmark includes datasets from varying domains and applications. Additionally, we release HerBERT, a Transformer-based model trained specifically for the Polish language, which has the best average performance and obtains the best results for three out of nine tasks. Finally, we provide an extensive evaluation, including several standard baselines and recently proposed, multilingual Transformer-based models.
This model was contributed by rmroczkowski. The original code can be found here.
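As a rough illustration of the dynamic whole-word masking mentioned above, the toy sketch below (not the authors' training code; the tokenize callable and the masking probability p are illustrative assumptions) shows the core idea: when a word is split into several subword tokens, either all of its subtokens are masked or none of them.
>>> import random
>>> def whole_word_mask(words, tokenize, mask_token="<mask>", p=0.15):
...     # Toy whole-word masking: if a word is selected, every one of its
...     # subword tokens is replaced by the mask token, never just a part.
...     masked = []
...     for word in words:
...         subtokens = tokenize(word)
...         if random.random() < p:
...             masked.extend([mask_token] * len(subtokens))
...         else:
...             masked.extend(subtokens)
...     return masked
>>> # Toy split into halves stands in for a real subword tokenizer here.
>>> whole_word_mask(["lepszą", "sztukę"], tokenize=lambda w: [w[:3], w[3:]])  # doctest: +SKIP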
Usage example
>>> from transformers import HerbertTokenizer, RobertaModel
>>> tokenizer = HerbertTokenizer.from_pretrained("allegro/herbert-klej-cased-tokenizer-v1")
>>> model = RobertaModel.from_pretrained("allegro/herbert-klej-cased-v1")
>>> encoded_input = tokenizer.encode("Kto ma lepszą sztukę, ma lepszy rząd – to jasne.", return_tensors="pt")
>>> outputs = model(encoded_input)
>>> # HerBERT can also be loaded using AutoTokenizer and AutoModel:
>>> import torch
>>> from transformers import AutoModel, AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("allegro/herbert-klej-cased-tokenizer-v1")
>>> model = AutoModel.from_pretrained("allegro/herbert-klej-cased-v1")
The HerBERT implementation is the same as BERT except for the tokenization method. Refer to the BERT documentation for API reference and examples.
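The forward pass above returns the usual base-model outputs. As a minimal follow-up sketch (an assumption, not part of the original example), one common way to turn them into a single sentence embedding is to mean-pool the final hidden states:
>>> last_hidden_state = outputs.last_hidden_state  # shape: (batch, seq_len, hidden_size)
>>> sentence_embedding = last_hidden_state.mean(dim=1)  # shape: (batch, hidden_size)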
HerbertTokenizer
class transformers.HerbertTokenizer
( vocab: typing.Optional[dict] = None merges: typing.Optional[list] = None cls_token: str = '<s>' unk_token: str = '<unk>' pad_token: str = '<pad>' mask_token: str = '<mask>' sep_token: str = '</s>' vocab_file: typing.Optional[str] = None merges_file: typing.Optional[str] = None **kwargs )
Parameters
- vocab_file (str) — Path to the vocabulary file.
- merges_file (str) — Path to the merges file.
- cls_token (str, optional, defaults to "<s>") — The classifier token.
- unk_token (str, optional, defaults to "<unk>") — The unknown token.
- pad_token (str, optional, defaults to "<pad>") — The padding token.
- mask_token (str, optional, defaults to "<mask>") — The mask token.
- sep_token (str, optional, defaults to "</s>") — The separator token.
- vocab (dict, optional) — Custom vocabulary dictionary.
- merges (list, optional) — Custom merges list.
Construct a BPE tokenizer for HerBERT (backed by HuggingFace’s tokenizers library).
Peculiarities:
- uses BERT’s pre-tokenizer: BertPreTokenizer splits tokens on whitespace and on punctuation, treating each occurrence of a punctuation character as a separate token.
This tokenizer inherits from TokenizersBackend, which contains most of the main methods. Users should refer to that superclass for more information regarding those methods.
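To see the pre-tokenization behaviour described above in practice, the following sketch (reusing the tokenizer checkpoint from the usage example) tokenizes a sentence containing punctuation; each punctuation mark should surface as its own token:
>>> from transformers import HerbertTokenizer
>>> tokenizer = HerbertTokenizer.from_pretrained("allegro/herbert-klej-cased-tokenizer-v1")
>>> tokens = tokenizer.tokenize("Cześć, świecie!")
>>> # The comma and the exclamation mark each appear as separate tokens.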
HerbertTokenizerFast
class transformers.HerbertTokenizerFast
( vocab: typing.Optional[dict] = None merges: typing.Optional[list] = None cls_token: str = '<s>' unk_token: str = '<unk>' pad_token: str = '<pad>' mask_token: str = '<mask>' sep_token: str = '</s>' vocab_file: typing.Optional[str] = None merges_file: typing.Optional[str] = None **kwargs )
Parameters
- vocab_file (str) — Path to the vocabulary file.
- merges_file (str) — Path to the merges file.
- cls_token (str, optional, defaults to "<s>") — The classifier token.
- unk_token (str, optional, defaults to "<unk>") — The unknown token.
- pad_token (str, optional, defaults to "<pad>") — The padding token.
- mask_token (str, optional, defaults to "<mask>") — The mask token.
- sep_token (str, optional, defaults to "</s>") — The separator token.
- vocab (dict, optional) — Custom vocabulary dictionary.
- merges (list, optional) — Custom merges list.
Construct a BPE tokenizer for HerBERT (backed by HuggingFace’s tokenizers library).
Peculiarities:
- uses BERT’s pre-tokenizer: BertPreTokenizer splits tokens on whitespace and on punctuation, treating each occurrence of a punctuation character as a separate token.
This tokenizer inherits from TokenizersBackend, which contains most of the main methods. Users should refer to that superclass for more information regarding those methods.
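A minimal usage sketch for the fast tokenizer, assuming the same checkpoint as in the usage example also ships the files it needs:
>>> from transformers import HerbertTokenizerFast
>>> tokenizer = HerbertTokenizerFast.from_pretrained("allegro/herbert-klej-cased-tokenizer-v1")
>>> batch = tokenizer(["Pierwsze zdanie.", "Drugie, nieco dłuższe zdanie."], padding=True, return_tensors="pt")
>>> # batch["input_ids"] is a padded tensor of shape (2, max_seq_len).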