Some warnings

#2
by Superchik - opened

from transformers import AutoTokenizer
from transformers import GenerationConfig

model_answer_name = "IlyaGusev/fred_t5_ru_turbo_alpaca"
generation_config_answer = GenerationConfig.from_pretrained(model_answer_name)

tokenizer_answer = AutoTokenizer.from_pretrained(model_answer_name)

Console output:

...\venv\lib\site-packages\transformers\generation\configuration_utils.py:367: UserWarning: `do_sample` is set to `False`. However, `top_p` is set to `0.9` -- this flag is only used in sample-based generation modes. You should set `do_sample=True` or unset `top_p`. This was detected when initializing the generation config instance, which means the corresponding file may hold incorrect parameterization and should be fixed.
  warnings.warn(
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
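The `do_sample` warning means the generation config shipped with the repo sets `top_p=0.9` while sampling is disabled, so `top_p` would be ignored. A minimal sketch of one way to make the two consistent on the user side (the config values here mirror the warning above; whether you actually want sampling depends on your use case):

from transformers import GenerationConfig

# Reconstructing the conflicting state locally: top_p is a sampling-only
# parameter, so pairing it with do_sample=False triggers the UserWarning.
config = GenerationConfig(top_p=0.9, do_sample=False)

# Enabling sampling makes top_p meaningful and silences the warning.
# (Alternatively, set config.top_p = None to keep greedy decoding.)
config.do_sample = True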
Superchik changed discussion status to closed
