Dimitre committed on
Commit
c08af86
1 Parent(s): fccee7b

Adding initial files

Files changed (5)
  1. README.md +55 -0
  2. config.json +30 -0
  3. special_tokens_map.json +1 -0
  4. spiece.model +3 -0
  5. tokenizer_config.json +1 -0
README.md ADDED
@@ -0,0 +1,55 @@
+ ---
+ language: en
+ license: apache-2.0
+ datasets:
+ - bookcorpus
+ - wikipedia
+ - cc_news
+ ---
+
+ # BigBird base model
+
+ BigBird is a sparse-attention-based transformer that extends Transformer-based models, such as BERT, to much longer sequences. Moreover, BigBird comes with a theoretical understanding of the capabilities of a complete transformer that the sparse model can handle.
+
+ It is a model pretrained on English text using a masked language modeling (MLM) objective. It was introduced in this [paper](https://arxiv.org/abs/2007.14062) and first released in this [repository](https://github.com/google-research/bigbird).
+
+ Disclaimer: The team releasing BigBird did not write a model card for this model, so this model card has been written by the Hugging Face team.
+
+ ## Model description
+
+ BigBird relies on **block sparse attention** instead of normal attention (i.e. BERT's attention) and can handle sequences up to a length of 4096 at a much lower compute cost than BERT. It has achieved state-of-the-art results on various tasks involving very long sequences, such as long-document summarization and question answering with long contexts.
+
+ ## How to use `TODO: Update`
+
+ Here is how to use this model to get the features of a given text in Flax:
+
+ ```python
+ from transformers import BigBirdTokenizer, FlaxBigBirdModel
+
+ model_id = "flax-community/bigband"
+
+ # by default the model is in `block_sparse` mode, with num_random_blocks=3 and block_size=64
+ model = FlaxBigBirdModel.from_pretrained(model_id)
+
+ # you can switch `attention_type` to full attention like this:
+ model = FlaxBigBirdModel.from_pretrained(model_id, attention_type="original_full")
+
+ # you can change `block_size` & `num_random_blocks` like this:
+ model = FlaxBigBirdModel.from_pretrained(model_id, block_size=16, num_random_blocks=2)
+
+ tokenizer = BigBirdTokenizer.from_pretrained(model_id)
+
+ text = "Replace me by any text you'd like."
+ inputs = tokenizer(text, return_tensors="jax")
+ output = model(**inputs)
+ ```
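+
+ The forward pass returns standard Flax model outputs. As a minimal sketch of the long-context claim from the model description (a hypothetical input, continuing from the snippet above), a full 4096-token sequence can be encoded in one pass; padding to `max_length` also keeps the sequence length a multiple of `block_size`, which block sparse attention expects:
+
+ ```python
+ long_text = " ".join(["word"] * 4000)  # far beyond BERT's usual 512-token limit
+ inputs = tokenizer(
+     long_text, padding="max_length", truncation=True, max_length=4096, return_tensors="jax"
+ )
+ output = model(**inputs)
+ print(output.last_hidden_state.shape)  # (1, 4096, 768)
+ ```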
+
+ ## Training Data `TODO: Update`
+
+ This model is pretrained on four publicly available datasets: **Books**, **CC-News**, **Stories** and **Wikipedia**. It uses the same SentencePiece vocabulary as RoBERTa (which is in turn borrowed from GPT-2).
+
+ ## Training Procedure `TODO: Update`
+
+ Documents longer than 4096 tokens were split into multiple documents, and documents much shorter than 4096 tokens were joined. Following the original BERT training, 15% of tokens were masked and the model was trained to predict the masked tokens.
+
+ The model is warm-started from RoBERTa's checkpoint.
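+
+ The 15% masking step above can be illustrated with `transformers`' `DataCollatorForLanguageModeling`; this is a minimal sketch for intuition only (the actual pretraining used Google's BigBird codebase, not this collator):
+
+ ```python
+ from transformers import BigBirdTokenizer, DataCollatorForLanguageModeling
+
+ tokenizer = BigBirdTokenizer.from_pretrained("flax-community/bigband")
+
+ # mask 15% of tokens at random, as in the original BERT objective
+ collator = DataCollatorForLanguageModeling(
+     tokenizer, mlm=True, mlm_probability=0.15, return_tensors="np"
+ )
+
+ batch = collator([tokenizer("Some long pretraining document ...")])
+ # masked positions hold [MASK] ids in `input_ids`; `labels` is -100
+ # everywhere except at the positions the model must predict
+ print(batch["input_ids"].shape, batch["labels"].shape)
+ ```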
config.json ADDED
@@ -0,0 +1,30 @@
+ {
+   "architectures": [
+     "BigBirdForPreTraining"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "attention_type": "block_sparse",
+   "block_size": 64,
+   "bos_token_id": 1,
+   "eos_token_id": 2,
+   "gradient_checkpointing": false,
+   "hidden_act": "gelu_new",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 768,
+   "initializer_range": 0.02,
+   "intermediate_size": 3072,
+   "layer_norm_eps": 1e-12,
+   "max_position_embeddings": 4096,
+   "model_type": "big_bird",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 12,
+   "num_random_blocks": 3,
+   "pad_token_id": 0,
+   "position_embedding_type": "absolute",
+   "rescale_embeddings": false,
+   "transformers_version": "4.4.0.dev0",
+   "type_vocab_size": 2,
+   "use_bias": true,
+   "use_cache": true,
+   "vocab_size": 50358
+ }
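
For reference, these values can be read back through the standard `transformers` config API; a minimal sketch, assuming the `flax-community/bigband` repo id used in the README:

```python
from transformers import BigBirdConfig

config = BigBirdConfig.from_pretrained("flax-community/bigband")

# the block sparse attention settings from config.json above
print(config.attention_type)           # "block_sparse"
print(config.block_size)               # 64
print(config.num_random_blocks)        # 3
print(config.max_position_embeddings)  # 4096
```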
special_tokens_map.json ADDED
@@ -0,0 +1 @@
+ {"bos_token": {"content": "</s>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true}, "eos_token": {"content": "<s>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true}, "unk_token": {"content": "<unk>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true}, "sep_token": {"content": "[SEP]", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true}, "pad_token": {"content": "<pad>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true}, "cls_token": {"content": "[CLS]", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true}, "mask_token": {"content": "[MASK]", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true}}
spiece.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fdc81e1fc9d42e0c08b86d5b280d05d7c5e9747c4231c648f2b56b8e1d893c82
+ size 845731
tokenizer_config.json ADDED
@@ -0,0 +1 @@
+ {"bos_token": {"content": "</s>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "eos_token": {"content": "<s>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "unk_token": {"content": "<unk>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "pad_token": {"content": "<pad>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "sep_token": {"content": "[SEP]", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "mask_token": {"content": "[MASK]", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "cls_token": {"content": "[CLS]", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "model_max_length": 4096, "name_or_path": "google/bigbird-roberta-large"}
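
Together, `special_tokens_map.json` and `tokenizer_config.json` define the special-token layout; note that `bos_token` is `</s>` and `eos_token` is `<s>`, following the upstream `google/bigbird-roberta-large` convention, and `model_max_length` matches the model's 4096-token limit. A minimal sketch for verifying this (same repo id as above):

```python
from transformers import BigBirdTokenizer

tokenizer = BigBirdTokenizer.from_pretrained("flax-community/bigband")

print(tokenizer.bos_token, tokenizer.eos_token)   # </s> <s>
print(tokenizer.mask_token, tokenizer.cls_token)  # [MASK] [CLS]
print(tokenizer.model_max_length)                 # 4096
```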