
jinaai/jina-bert-flash-implementation

Transformers
bert
custom_code
🇪🇺 Region: EU
  • 6 contributors
History: 77 commits
Latest commit: clean up embeddings.py (#7) by Markus28 and bwang0911, 7771ce3 (verified), about 1 year ago
  • bert_padding.py (9.78 kB): reference the flash attention GitHub, about 1 year ago
  • block.py (17.4 kB): reference the flash attention GitHub, about 1 year ago
  • configuration_bert.py (5.76 kB): added classifier dropout, about 1 year ago
  • embedding.py (2.26 kB): clean up embeddings.py (#7), about 1 year ago
  • mha.py (35.3 kB): reference the flash attention GitHub, about 1 year ago
  • mlp.py (6.17 kB): reference the flash attention GitHub, about 1 year ago
  • modeling_bert.py (28.7 kB): feat: choose flash attention heuristically if not set explicitly, about 1 year ago
  • modeling_for_glue.py (10.7 kB): feat: assert return_dict, about 1 year ago
  • modeling_lora.py (8.31 kB): feat: return from_bert for from_pretrained, about 1 year ago
  • tokenizer.py (3.95 kB): support-fast-tokenizer (#6), about 1 year ago