model_type not specified in config.json

#1 opened by anant718

config.json is missing the key-value pair "model_type": "bert", which results in the following error:

  File "...train.py", line 124, in main
    model = AutoModelForTokenClassification.from_pretrained(
  File "/job/.local/lib/python3.8/site-packages/transformers/models/auto/auto_factory.py", line 424, in from_pretrained
    config, kwargs = AutoConfig.from_pretrained(
  File "/job/.local/lib/python3.8/site-packages/transformers/models/auto/configuration_auto.py", line 665, in from_pretrained
    raise ValueError(
ValueError: Unrecognized model in model. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: maskformer, poolformer, convnext, yoso, swin, vilt, vit_mae, realm, nystromformer, xglm, imagegpt, qdqbert, vision-encoder-decoder, trocr, fnet, segformer, vision-text-dual-encoder, perceiver, gptj, layoutlmv2, plbart, beit, rembert, visual_bert, canine, roformer, clip, bigbird_pegasus, deit, luke, detr, gpt_neo, big_bird, speech_to_text_2, speech_to_text, vit, wav2vec2, m2m_100, convbert, led, blenderbot-small, retribert, ibert, mt5, t5, mobilebert, distilbert, albert, bert-generation, camembert, xlm-roberta-xl, xlm-roberta, pegasus, marian, mbart, megatron-bert, mpnet, bart, blenderbot, reformer, longformer, roberta, deberta-v2, deberta, flaubert, fsmt, squeezebert, hubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm-prophetnet, prophetnet, xlm, ctrl, electra, speech-encoder-decoder, encoder-decoder, funnel, lxmert, dpr, layoutlm, rag, tapas, splinter, sew-d, sew, unispeech-sat, unispeech, wavlm, data2vec-audio, data2vec-text

Adding the missing key-value pair to config.json resolves the issue.
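
For anyone hitting the same error, here is a minimal sketch of that fix in Python. It assumes the checkpoint lives in a local directory named "model" (the path shown in the traceback); that path is an assumption, so adjust it to your own setup.

    import json

    # Path to the checkpoint's config file (hypothetical local directory "model").
    config_path = "model/config.json"

    with open(config_path) as f:
        config = json.load(f)

    # Add the key that AutoConfig needs to resolve the architecture,
    # leaving it untouched if it is already present.
    config.setdefault("model_type", "bert")

    with open(config_path, "w") as f:
        json.dump(config, f, indent=2)

With "model_type": "bert" in place, AutoConfig.from_pretrained can map the config to BertConfig, and AutoModelForTokenClassification.from_pretrained loads the checkpoint as expected.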
