jinaai/xlm-roberta-flash-implementation

Transformers · xlm-roberta · Region: EU
10 contributors · History: 51 commits

Latest commit: jupyterjazz — fix: partition adapter mask when batch size is specified (e6e3a6f, verified, 8 months ago)
  • .gitattributes (1.52 kB) — initial commit, about 1 year ago
  • README.md (1.33 kB) — feat: update the readme, 8 months ago
  • block.py (17.8 kB) — refine-codebase (#33), 9 months ago
  • configuration_xlm_roberta.py (6.43 kB) — lora-instructions (#36), 9 months ago
  • convert_roberta_weights_to_flash.py (6.94 kB) — Support for SequenceClassification (#7), about 1 year ago
  • embedding.py (3.88 kB) — refine-codebase (#33), 9 months ago
  • mha.py (34.4 kB) — cpu-inference (#35), 9 months ago
  • mlp.py (7.62 kB) — refine-codebase (#33), 9 months ago
  • modeling_lora.py (14.8 kB) — lora-instructions (#36), 9 months ago
  • modeling_xlm_roberta.py (50 kB) — fix: partition adapter mask when batch size is specified, 8 months ago
  • rotary.py (24.5 kB) — fix: update frequencies when updating the rope base value (#40), 8 months ago
  • stochastic_depth.py (3.76 kB) — refine-codebase (#33), 9 months ago
  • xlm_padding.py (10 kB) — refine-codebase (#33), 9 months ago