monologg committed
Commit 9419368
Parent: ac105e2

Add logo to README

Files changed (1):
  README.md +2 -0
README.md CHANGED
@@ -13,6 +13,8 @@ Pretrained BigBird Model for Korean (**kobigbird-bert-base**)
 
 ## About
 
+<img style="padding-right: 20px" src="https://user-images.githubusercontent.com/28896432/140442206-e34b02d5-e279-47e5-9c2a-db1278b1c14d.png" width="100px" align="left" />
+
 BigBird is a sparse-attention-based transformer which extends Transformer-based models, such as BERT, to much longer sequences.
 
 BigBird relies on **block sparse attention** instead of normal attention (i.e. BERT's attention) and can handle sequences up to a length of 4096 at a much lower compute cost compared to BERT.
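
For context on the model this README documents, here is a minimal sketch of loading it with Hugging Face Transformers and encoding a long input. The model id `monologg/kobigbird-bert-base` is assumed from the repository name; everything else is standard Transformers usage, not part of this commit.

```python
# A minimal sketch, not part of the commit: loading the KoBigBird checkpoint
# this README describes. The model id "monologg/kobigbird-bert-base" is
# assumed from the repository name.
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("monologg/kobigbird-bert-base")
model = AutoModel.from_pretrained("monologg/kobigbird-bert-base")

# Block sparse attention lets BigBird accept inputs up to 4096 tokens,
# far beyond BERT's usual 512-token limit.
text = "한국어 문서는 매우 길 수 있습니다."  # "Korean documents can be very long."
inputs = tokenizer(text, return_tensors="pt", max_length=4096, truncation=True)
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch, seq_len, hidden_size)
```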