sentence-transformers/xlm-r-100langs-bert-base-nli-stsb-mean-tokens

Tags: PyTorch, xlm-roberta

Model card

How to use from the 🤗/transformers library:

    from transformers import AutoTokenizer, AutoModel

    tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/xlm-r-100langs-bert-base-nli-stsb-mean-tokens")
    model = AutoModel.from_pretrained("sentence-transformers/xlm-r-100langs-bert-base-nli-stsb-mean-tokens")
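The "mean-tokens" suffix in the model name indicates that sentence embeddings are obtained by mean-pooling the token embeddings. A minimal sketch of that step under this assumption; the mean_pooling helper and the example sentences below are illustrative and not part of the original model card:

    import torch
    from transformers import AutoTokenizer, AutoModel

    model_name = "sentence-transformers/xlm-r-100langs-bert-base-nli-stsb-mean-tokens"
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModel.from_pretrained(model_name)

    def mean_pooling(token_embeddings, attention_mask):
        # Average the token embeddings, masking out padding positions.
        mask = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
        return (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)

    sentences = ["This is an example sentence.", "Dies ist ein Beispielsatz."]
    encoded = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        output = model(**encoded)

    # output[0] is the last hidden state, shape (batch, seq_len, hidden);
    # pooling reduces it to one vector per sentence, shape (batch, hidden).
    embeddings = mean_pooling(output[0], encoded["attention_mask"])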

Or just clone the model repo:

    git lfs install
    git clone https://huggingface.co/sentence-transformers/xlm-r-100langs-bert-base-nli-stsb-mean-tokens

    # To clone without the large files (just their pointers),
    # prepend the clone command with the GIT_LFS_SKIP_SMUDGE env var:
    GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/sentence-transformers/xlm-r-100langs-bert-base-nli-stsb-mean-tokens
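Because this checkpoint comes from the sentence-transformers project, the companion sentence-transformers package can wrap the same model behind a one-line encode API. A minimal sketch, assuming the package is installed (pip install sentence-transformers) and that your installed version can resolve this checkpoint by its Hub ID:

    from sentence_transformers import SentenceTransformer

    # Hub ID taken from this model page; compatibility depends on the package version.
    model = SentenceTransformer("sentence-transformers/xlm-r-100langs-bert-base-nli-stsb-mean-tokens")
    embeddings = model.encode(["This is an example sentence.", "Dies ist ein Beispielsatz."])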
Files and versions

Branch: main
History: 7 commits
Latest commit: bb82e7e ("Update pytorch_model.bin" by system, 6 months ago)
File                        Size     Last commit                        Age
.gitattributes              345.0 B  initial commit                     6 months ago
config.json                 541.0 B  Update config.json                 6 months ago
pytorch_model.bin           1.0 GB   Update pytorch_model.bin           6 months ago
sentence_bert_config.json   27.0 B   Update sentence_bert_config.json   6 months ago
sentencepiece.bpe.model     4.8 MB   Update sentencepiece.bpe.model     6 months ago
special_tokens_map.json     150.0 B  Update special_tokens_map.json     6 months ago
tokenizer_config.json       152.0 B  Update tokenizer_config.json       6 months ago