CLASSLA/bcms-bertic-ner

Task: Token Classification · Framework: PyTorch · Architecture: Electra · Languages: hr, bs, sr, cnr, hbs · License: apache-2.0

How to use from the 🤗/transformers library:

    from transformers import AutoTokenizer, AutoModelForTokenClassification

    # load the tokenizer and the token-classification (NER) model from the Hub
    tokenizer = AutoTokenizer.from_pretrained("CLASSLA/bcms-bertic-ner")
    model = AutoModelForTokenClassification.from_pretrained("CLASSLA/bcms-bertic-ner")
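For quick inference you can also wrap the model in a token-classification pipeline. A minimal sketch, not part of the original card: the example sentence is invented, and aggregation_strategy="simple" (which merges subword pieces into whole entity spans) assumes a reasonably recent transformers version.

    from transformers import pipeline

    # build a NER pipeline around the model; weights are downloaded on first use
    ner = pipeline(
        "ner",
        model="CLASSLA/bcms-bertic-ner",
        aggregation_strategy="simple",  # assumption: merge subword tokens into entities
    )

    # hypothetical example sentence ("Nikola Ljubešić works in Ljubljana.")
    print(ner("Nikola Ljubešić radi u Ljubljani."))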

Or just clone the model repo:

    git lfs install
    git clone https://huggingface.co/CLASSLA/bcms-bertic-ner

    # if you want to clone without large files – just their pointers –
    # prepend your git clone command with the following env var:
    # GIT_LFS_SKIP_SMUDGE=1
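For example, the same clone fetching only the LFS pointers:

    GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/CLASSLA/bcms-bertic-ner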
Branch: main
CLASSLA/bcms-bertic-ner / History: 14 commits
Latest commit: nljubesi, Update README.md (9e2922f, 12 days ago)
    • .gitattributes (690.0B): initial commit, 26 days ago
    • README.md (2.2KB): Update README.md, 12 days ago
    • config.json (1018.0B): New model without the person derivative class, 25 days ago
    • pytorch_model.bin (419.8MB): New model without the person derivative class, 25 days ago
    • special_tokens_map.json (112.0B): Model added, 26 days ago
    • tokenizer_config.json (352.0B): Update tokenizer_config.json, 25 days ago
    • vocab.txt (225.1KB): Model added, 26 days ago