Does the model fail to answer the example inputs correctly?

#2
opened by onurgu

Is it just me, or is the currently deployed model failing on the example input sentence:

Paris is the [MASK] of France.

for the Fill Mask task? I also tried the other example input, and it fails as well. This is the output I get:

[
  {
    "score": 0.0004416577285155654,
    "token": 19221,
    "token_str": "yll",
    "sequence": "Paris is theyll of France."
  },
  {
    "score": 0.0003129251708742231,
    "token": 2347,
    "token_str": " shots",
    "sequence": "Paris is the shots of France."
  },
  {
    "score": 0.00027939782012254,
    "token": 20427,
    "token_str": " Styles",
    "sequence": "Paris is the Styles of France."
  },
  {
    "score": 0.0002659567689988762,
    "token": 17150,
    "token_str": "ells",
    "sequence": "Paris is theells of France."
  },
  {
    "score": 0.00025949286646209657,
    "token": 14345,
    "token_str": "realDonaldTrump",
    "sequence": "Paris is therealDonaldTrump of France."
  }
]
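
For reference, this is roughly how I would reproduce the same query locally with the transformers fill-mask pipeline. The model id below is only a placeholder for the model deployed in this repo, and I query the tokenizer for its mask token rather than hard-coding "[MASK]":

```python
from transformers import pipeline

# Placeholder model id; substitute the model deployed in this repo.
fill_mask = pipeline("fill-mask", model="microsoft/deberta-v3-base")

# Ask the tokenizer for its mask token instead of assuming "[MASK]".
mask = fill_mask.tokenizer.mask_token
print(fill_mask(f"Paris is the {mask} of France."))
```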
Microsoft org

Hi,

This is a known issue. See here for more info: https://github.com/huggingface/transformers/pull/18674
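
In the meantime, a quick sanity check on your side is to look at how the tokenizer handles the mask token. This is only a diagnostic sketch, not the fix from the linked PR, and the model id is again a placeholder:

```python
from transformers import AutoTokenizer

# Placeholder model id; substitute the model deployed in this repo.
tok = AutoTokenizer.from_pretrained("microsoft/deberta-v3-base")

# The mask token string and id the tokenizer actually expects.
print(tok.mask_token, tok.mask_token_id)

# If the literal "[MASK]" string is split into several pieces here,
# the fill-mask pipeline will not see a single mask position.
print(tok.tokenize("Paris is the [MASK] of France."))
print(tok("Paris is the [MASK] of France.")["input_ids"])
```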
