Some weights of LayoutLMForSequenceClassification were not initialized from the model checkpoint at microsoft/layoutlm-base-uncased and are newly initialized

#4
by alexneakameni - opened

Hello,

In version 4.38.1 of transformers, I get the following warning while loading the model:
Some weights of LayoutLMForSequenceClassification were not initialized from the model checkpoint at microsoft/layoutlm-base-uncased and are newly initialized: ['classifier.bias', 'classifier.weight', 'layoutlm.embeddings.word_embeddings.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
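
For a more detailed view of what failed to load, from_pretrained accepts output_loading_info=True, which returns a loading report alongside the model. A minimal sketch against the same checkpoint:

from transformers import LayoutLMForSequenceClassification

# Returns (model, loading_info); loading_info lists the checkpoint keys that
# were missing, unexpected, or mismatched during loading.
model, info = LayoutLMForSequenceClassification.from_pretrained(
    "microsoft/layoutlm-base-uncased", output_loading_info=True
)
print(info["missing_keys"])
# Per the warning above, this should print ['classifier.bias',
# 'classifier.weight', 'layoutlm.embeddings.word_embeddings.weight'].

Note that classifier.bias and classifier.weight are expected to be newly initialized, since the base checkpoint ships no classification head; the surprising entry is layoutlm.embeddings.word_embeddings.weight.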

Model structure:

LayoutLMEmbeddings(
  (word_embeddings): Embedding(30522, 768, padding_idx=0)
  (position_embeddings): Embedding(512, 768)
  (x_position_embeddings): Embedding(1024, 768)
  (y_position_embeddings): Embedding(1024, 768)
  (h_position_embeddings): Embedding(1024, 768)
  (w_position_embeddings): Embedding(1024, 768)
  (token_type_embeddings): Embedding(2, 768)
  (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
  (dropout): Dropout(p=0.1, inplace=False)
)

Code to reproduce the issue:

from transformers import LayoutLMForSequenceClassification

model = LayoutLMForSequenceClassification.from_pretrained("microsoft/layoutlm-base-uncased")
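
One way to check whether the safetensors file is the culprit is to inspect the keys it actually stores. A sketch, assuming the repository serves its weights as model.safetensors and that the safetensors and huggingface_hub packages are installed:

from huggingface_hub import hf_hub_download
from safetensors import safe_open

# Download (or reuse from the local cache) the safetensors weight file.
ckpt_path = hf_hub_download("microsoft/layoutlm-base-uncased", "model.safetensors")
with safe_open(ckpt_path, framework="pt") as f:
    ckpt_keys = list(f.keys())

# If no word-embedding tensor shows up here, the file genuinely lacks it,
# which would explain the "newly initialized" warning above.
print([k for k in ckpt_keys if "word_embeddings" in k])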

It seems to be solved by disabling safetensors:

model = LayoutLMForSequenceClassification.from_pretrained("microsoft/layoutlm-base-uncased", use_safetensors=False)
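
A quick way to verify that the workaround actually restores the pretrained embeddings is to load the model twice: randomly initialized weights differ between loads, while weights read from the checkpoint are identical. A sketch along those lines (classifier.weight and classifier.bias stay newly initialized either way, since the base checkpoint has no classification head):

import torch
from transformers import LayoutLMForSequenceClassification

def load():
    # Fall back to the pytorch_model.bin weights instead of safetensors.
    return LayoutLMForSequenceClassification.from_pretrained(
        "microsoft/layoutlm-base-uncased", use_safetensors=False
    )

# Pretrained weights are deterministic across loads; random init is not.
emb1 = load().layoutlm.embeddings.word_embeddings.weight
emb2 = load().layoutlm.embeddings.word_embeddings.weight
print(torch.equal(emb1, emb2))  # True only if the embeddings came from the checkpoint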
