nielsr (HF staff) committed on
Commit
afe7189
1 Parent(s): 381d37c

Update model card

Files changed (1)
  1. README.md +13 -17
README.md CHANGED
@@ -9,17 +9,26 @@ datasets:

# CANINE-s (CANINE pre-trained with subword loss)

- TODO
+ Pretrained CANINE model on the English language using a masked language modeling (MLM) objective. It was introduced in the paper [CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language Representation](https://arxiv.org/abs/2103.06874) and first released in [this repository](https://github.com/google-research/language/tree/master/language/canine).

Disclaimer: The team releasing CANINE did not write a model card for this model, so this model card has been written by the Hugging Face team.

## Model description

- TODO
+ CANINE is a transformer model pretrained on a large corpus of English data in a self-supervised fashion, similar to BERT. This means it was pretrained on raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data), using an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives:
+
+ * Masked language modeling (MLM): one randomly masks part of the inputs, which the model needs to predict. This model (CANINE-s) is trained with a subword loss, meaning that the model needs to predict the identities of subword tokens while taking characters as input. By reading characters yet predicting subword tokens, the hard token boundary constraint found in other models such as BERT is turned into a soft inductive bias in CANINE.
+ * Next sentence prediction (NSP): the model concatenates two sentences as inputs during pretraining. Sometimes they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to predict whether the two sentences followed each other or not.
+
+ This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences, for instance, you can train a standard classifier using the features produced by the CANINE model as inputs.
+
+ What's special about CANINE is that it doesn't require an explicit tokenizer (such as WordPiece or SentencePiece). Instead, it operates directly at the character level: each character is turned into its [Unicode code point](https://en.wikipedia.org/wiki/Code_point).

## Intended uses & limitations

- TODO
+ You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=canine) to look for fine-tuned versions on a task that interests you.
+
+ Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation, you should look at models like GPT-2.

### How to use

@@ -41,21 +50,8 @@ sequence_output = outputs.last_hidden_state

## Training data

- TODO
-
- ## Training procedure
-
- ### Preprocessing
-
- TODO
-
- ### Pretraining
-
- TODO
-
- ## Evaluation results
-
- TODO
+ The CANINE model was pretrained on the same data as BERT, namely [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books, and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers).

### BibTeX entry and citation info
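The character-level input scheme the model card describes can be sketched in a few lines. This is an illustration only, not the model's actual preprocessing code: the real tokenizer shipped with the model also handles special tokens, truncation and padding.

```python
# Minimal sketch: CANINE's input layer consumes Unicode code points,
# one per character, rather than subword token ids.
def to_code_points(text: str) -> list[int]:
    """Map each character of `text` to its Unicode code point."""
    return [ord(ch) for ch in text]

print(to_code_points("héllo"))  # [104, 233, 108, 108, 111]
```

Because every character maps directly to an integer, no vocabulary lookup (and hence no out-of-vocabulary handling) is needed on the input side.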
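To make the "reads characters, predicts subwords" idea behind the subword loss concrete, here is a toy sketch of how a training example could pair character-level inputs with subword-level MLM targets. The mini-vocabulary and helper below are hypothetical, for illustration only; real pretraining uses a full subword vocabulary and masking.

```python
# Hypothetical mini subword vocabulary (real training uses a full one).
TOY_SUBWORD_VOCAB = {"play": 0, "##ing": 1}

def make_example(subwords):
    """Pair character code-point inputs with subword target ids."""
    text = "".join(s.removeprefix("##") for s in subwords)
    inputs = [ord(c) for c in text]                     # what the model reads
    targets = [TOY_SUBWORD_VOCAB[s] for s in subwords]  # what the loss predicts
    return inputs, targets

inputs, targets = make_example(["play", "##ing"])
print(inputs)   # [112, 108, 97, 121, 105, 110, 103]  ("playing")
print(targets)  # [0, 1]
```

The subword vocabulary exists only on the target side, which is how the hard token boundaries of BERT-style models become a soft inductive bias here.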