    Hate-speech-CNERG/dehatebert-mono-german

    Text Classification · PyTorch · bert · arxiv:2004.06465
    Model card · Files and versions

      How to use from the 🤗/transformers library:

        from transformers import AutoTokenizer, AutoModelForSequenceClassification

        tokenizer = AutoTokenizer.from_pretrained("Hate-speech-CNERG/dehatebert-mono-german")

        model = AutoModelForSequenceClassification.from_pretrained("Hate-speech-CNERG/dehatebert-mono-german")

      Or just clone the model repo

        git lfs install
        git clone https://huggingface.co/Hate-speech-CNERG/dehatebert-mono-german
        
      # If you want to clone without the large files (just their pointers),
      # prepend your git clone command with the following environment variable:
      GIT_LFS_SKIP_SMUDGE=1
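
      Combined, a clone that skips downloading the large weight files looks like this:

        GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/Hate-speech-CNERG/dehatebert-mono-german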
    Branch: main
    Hate-speech-CNERG/dehatebert-mono-german · History: 8 commits
    Latest commit: julien-c · Migrate model card from transformers-repo · 0cf93ad · 2 months ago
    • .gitattributes · 345.0B · initial commit · 8 months ago
    • README.md · 1016.0B · Migrate model card from transformers-repo · 2 months ago
    • config.json · 1.2KB · Update config.json · 7 months ago
    • pytorch_model.bin · 638.5MB · Update pytorch_model.bin · 8 months ago
    • special_tokens_map.json · 112.0B · Update special_tokens_map.json · 8 months ago
    • tokenizer_config.json · 152.0B · Update tokenizer_config.json · 8 months ago
    • vocab.txt · 851.5KB · Update vocab.txt · 8 months ago