I'm getting an error when trying to use this model.

#1
by sanginoh - opened

Hello. I'm trying to use the model with the code below:

import tensorflow as tf
from transformers import TFAutoModel

MAX_TOKEN_LENGTH = 512

input_ids = tf.keras.Input(shape=(MAX_TOKEN_LENGTH,), dtype=tf.int32)
token_type_ids = tf.keras.Input(shape=(MAX_TOKEN_LENGTH,), dtype=tf.int32)
attention_mask = tf.keras.Input(shape=(MAX_TOKEN_LENGTH,), dtype=tf.int32)

mdeberta_model = TFAutoModel.from_pretrained("lighthouse/mdeberta-v3-base-kor-further", from_pt=True, output_hidden_states=True)
x = mdeberta_model({"input_ids": input_ids, "token_type_ids": token_type_ids, "attention_mask": attention_mask}, training=False)
outputs = x["pooler_output"]

model = tf.keras.Model([input_ids, token_type_ids, attention_mask], outputs)

ValueError: Tried to convert 'shape' to a tensor and failed. Error: Cannot convert a partially known TensorShape (None, 512, 512) to a Tensor.

                Call arguments received by layer "self" (type TFDebertaV2DisentangledSelfAttention):
                  • hidden_states=tf.Tensor(shape=(None, 512, 768), dtype=float32)
                  • attention_mask=tf.Tensor(shape=(None, 1, 512, 512), dtype=uint8)
                  • query_states=None
                  • relative_pos=tf.Tensor(shape=(1, 512, 512), dtype=int64)
                  • rel_embeddings=tf.Tensor(shape=(512, 768), dtype=float32)
                  • output_attentions=False
                  • training=False
            
            
            Call arguments received by layer "attention" (type TFDebertaV2Attention):
              • input_tensor=tf.Tensor(shape=(None, 512, 768), dtype=float32)
              • attention_mask=tf.Tensor(shape=(None, 1, 512, 512), dtype=uint8)
              • query_states=None
              • relative_pos=tf.Tensor(shape=(1, 512, 512), dtype=int64)
              • rel_embeddings=tf.Tensor(shape=(512, 768), dtype=float32)
              • output_attentions=False
              • training=False
        
        
        Call arguments received by layer "layer_._0" (type TFDebertaV2Layer):
          • hidden_states=tf.Tensor(shape=(None, 512, 768), dtype=float32)
          • attention_mask=tf.Tensor(shape=(None, 1, 512, 512), dtype=uint8)
          • query_states=None
          • relative_pos=tf.Tensor(shape=(1, 512, 512), dtype=int64)
          • rel_embeddings=tf.Tensor(shape=(512, 768), dtype=float32)
          • output_attentions=False
          • training=False
    
    
    Call arguments received by layer "encoder" (type TFDebertaV2Encoder):
      • hidden_states=tf.Tensor(shape=(None, 512, 768), dtype=float32)
      • attention_mask=tf.Tensor(shape=(None, 512), dtype=int32)
      • query_states=None
      • relative_pos=None
      • output_attentions=False
      • output_hidden_states=True
      • return_dict=True
      • training=False


Call arguments received by layer "deberta" (type TFDebertaV2MainLayer):
  • self=tf.Tensor(shape=(None, 512), dtype=int32)
  • input_ids=None
  • attention_mask=tf.Tensor(shape=(None, 512), dtype=int32)
  • token_type_ids=tf.Tensor(shape=(None, 512), dtype=int32)
  • position_ids=None
  • inputs_embeds=None
  • output_attentions=False
  • output_hidden_states=True
  • return_dict=True
  • training=False

That's the error I get. Other models work without issue. It looks like the error arises because input_ids ends up being passed as None. Could you share a PyTorch code example, or do you know the cause?

KPMG Lighthouse KR org

We haven't tested this with TensorFlow ourselves, so it may not work there, but the standard Hugging Face usage should be fine. Which version are you using?

Here is a PyTorch example:

tokenized = tokenizer(text=text)
inputs = {key: torch.tensor(tokenized[key]).to(device)
          for key in tokenized.keys()}
model(**inputs)
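The dict comprehension above just converts each tokenizer output field to a tensor and moves it to the target device before unpacking the dict into the model. A self-contained sketch of that pattern with hand-written dummy token ids (hypothetical values, so no model or tokenizer download is needed):

```python
import torch

# Pick a device; "cuda" only if a GPU is actually available.
device = "cuda" if torch.cuda.is_available() else "cpu"

# Stand-in for a tokenizer's output; in real use this would come from
# tokenized = tokenizer(text=text). The id values here are made up.
tokenized = {
    "input_ids": [[2, 9311, 3]],
    "attention_mask": [[1, 1, 1]],
}

# Convert each field to a tensor on the target device, as in the snippet above.
inputs = {key: torch.tensor(tokenized[key]).to(device)
          for key in tokenized.keys()}

# The resulting dict can then be unpacked straight into the model: model(**inputs)
print(inputs["input_ids"].shape)  # torch.Size([1, 3])
```

Loading the real model with AutoModel.from_pretrained("lighthouse/mdeberta-v3-base-kor-further") and calling model(**inputs) works the same way, with tokenized produced by the matching tokenizer.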
