What transformers version should I use to load this model?

#1
by apivovarov

Loading the checkpoint with torch.load fails with the following error:

>>> model = torch.load("pytorch_model_quantized.bin")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.8/dist-packages/torch/serialization.py", line 789, in load
    return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
  File "/usr/local/lib/python3.8/dist-packages/torch/serialization.py", line 1131, in _load
    result = unpickler.load()
  File "/usr/local/lib/python3.8/dist-packages/torch/serialization.py", line 1124, in find_class
    return super().find_class(mod_name, name)
AttributeError: Can't get attribute 'Block' on <module 'transformers.models.gpt2.modeling_gpt2' from '/usr/local/lib/python3.8/dist-packages/transformers/models/gpt2/modeling_gpt2.py'>
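The failed lookup suggests the checkpoint was pickled against an older transformers release in which GPT-2's block class was still named `Block`; newer releases define it as `GPT2Block`. A minimal, untested workaround sketch under that assumption is to alias the old name back onto the module before unpickling:

import torch
import transformers.models.gpt2.modeling_gpt2 as gpt2_modeling

# Assumption: the pickle references the old class name `Block`, which newer
# transformers releases expose as `GPT2Block`. Aliasing the old name lets the
# unpickler resolve it against the current module.
if not hasattr(gpt2_modeling, "Block"):
    gpt2_modeling.Block = gpt2_modeling.GPT2Block

model = torch.load("pytorch_model_quantized.bin")

Alternatively, installing a transformers release that still defines `Block` (one from before the rename) should let the original torch.load call run unchanged.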
