    indonesian-nlp/wav2vec2-large-xlsr-indonesian

    Automatic Speech Recognition
    PyTorch common_voice id apache-2.0 wav2vec2 audio speech xlsr-fine-tuning-week

    How to use from the 🤗/transformers library

    from transformers import AutoTokenizer, Wav2Vec2ForCTC

    # Load the tokenizer and the fine-tuned CTC model from the Hub
    tokenizer = AutoTokenizer.from_pretrained("indonesian-nlp/wav2vec2-large-xlsr-indonesian")
    model = Wav2Vec2ForCTC.from_pretrained("indonesian-nlp/wav2vec2-large-xlsr-indonesian")
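
    Loading the weights is only half the job: the waveform has to be resampled to the 16 kHz rate the model was trained on, run through the feature extractor, and the argmax of the CTC logits decoded back to text. A minimal inference sketch, assuming torchaudio is available; sample.wav is a placeholder for a local recording, not a file from this repo:

    import torch
    import torchaudio
    from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

    model_id = "indonesian-nlp/wav2vec2-large-xlsr-indonesian"
    processor = Wav2Vec2Processor.from_pretrained(model_id)
    model = Wav2Vec2ForCTC.from_pretrained(model_id)

    # Load a recording and resample it to 16 kHz ("sample.wav" is a placeholder)
    speech, sample_rate = torchaudio.load("sample.wav")
    speech = torchaudio.transforms.Resample(sample_rate, 16_000)(speech).squeeze()

    # Extract input features, take the argmax over the vocabulary, and CTC-decode
    inputs = processor(speech.numpy(), sampling_rate=16_000, return_tensors="pt")
    with torch.no_grad():
        logits = model(inputs.input_values).logits
    predicted_ids = torch.argmax(logits, dim=-1)
    print(processor.batch_decode(predicted_ids)[0])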

    Or just clone the model repo

    git lfs install
    git clone https://huggingface.co/indonesian-nlp/wav2vec2-large-xlsr-indonesian
    # if you want to clone without large files (just their LFS pointers),
    # prepend your git clone with the following env var:
    GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/indonesian-nlp/wav2vec2-large-xlsr-indonesian
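
    A pointer-only clone can fetch individual large files later with standard git-lfs commands; a short sketch (pytorch_model.bin is the weights file from the listing below):

    # fetch only the model weights after a pointer-only clone
    cd wav2vec2-large-xlsr-indonesian
    git lfs pull --include="pytorch_model.bin"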
    Files and versions (branch main, 13 commits; latest commit c0a9750 by Cahya Wirawan, "updated the link to synthetic file", 6 days ago)

      • .gitattributes (690.0 B): initial commit, last month
      • README.md (4.0 KB): updated the link to synthetic file, 6 days ago
      • config.json (1.6 KB): updated the model and its readme file, 6 days ago
      • preprocessor_config.json (158.0 B): added first commit, last month
      • pytorch_model.bin (1.2 GB): updated the model and its readme file, 6 days ago
      • special_tokens_map.json (85.0 B): added first commit, last month
      • tokenizer_config.json (138.0 B): added first commit, last month
      • vocab.json (250.0 B): updated the model and its readme file, 6 days ago
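
    The listed files are everything from_pretrained needs, so a local clone can also be loaded by path instead of by Hub id; a minimal sketch, assuming the clone sits in the current directory:

    from transformers import AutoTokenizer, Wav2Vec2ForCTC

    # Point from_pretrained at the cloned directory; it reads config.json,
    # pytorch_model.bin, vocab.json and the tokenizer files listed above.
    local_dir = "./wav2vec2-large-xlsr-indonesian"
    tokenizer = AutoTokenizer.from_pretrained(local_dir)
    model = Wav2Vec2ForCTC.from_pretrained(local_dir)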