How to run kiri-ai/gpt2-large-quantized?

#2
by apivovarov - opened

The docs say:

# Import generic wrappers
from transformers import AutoModel, AutoTokenizer

# Define the model repo
model_name = "kiri-ai/gpt2-large-quantized"

# Download the PyTorch model
model = AutoModel.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Tokenize the input
inputs = tokenizer("Hello world!", return_tensors="pt")

# Run the model
outputs = model(**inputs)

However, it fails.

Errors:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/transformers/modeling_utils.py", line 2007, in from_pretrained
    resolved_archive_file = cached_path(
  File "/usr/local/lib/python3.8/dist-packages/transformers/utils/hub.py", line 284, in cached_path
    output_path = get_from_cache(
  File "/usr/local/lib/python3.8/dist-packages/transformers/utils/hub.py", line 495, in get_from_cache
    _raise_for_status(r)
  File "/usr/local/lib/python3.8/dist-packages/transformers/utils/hub.py", line 411, in _raise_for_status
    raise EntryNotFoundError(f"404 Client Error: Entry Not Found for url: {response.url}")
transformers.utils.hub.EntryNotFoundError: 404 Client Error: Entry Not Found for url: https://huggingface.co/kiri-ai/gpt2-large-quantized/resolve/main/pytorch_model.bin

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/transformers/modeling_utils.py", line 2041, in from_pretrained
    resolved_archive_file = cached_path(
  File "/usr/local/lib/python3.8/dist-packages/transformers/utils/hub.py", line 284, in cached_path
    output_path = get_from_cache(
  File "/usr/local/lib/python3.8/dist-packages/transformers/utils/hub.py", line 495, in get_from_cache
    _raise_for_status(r)
  File "/usr/local/lib/python3.8/dist-packages/transformers/utils/hub.py", line 411, in _raise_for_status
    raise EntryNotFoundError(f"404 Client Error: Entry Not Found for url: {response.url}")
transformers.utils.hub.EntryNotFoundError: 404 Client Error: Entry Not Found for url: https://huggingface.co/kiri-ai/gpt2-large-quantized/resolve/main/pytorch_model.bin.index.json

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.8/dist-packages/transformers/models/auto/auto_factory.py", line 446, in from_pretrained
    return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/transformers/modeling_utils.py", line 2074, in from_pretrained
    raise EnvironmentError(
OSError: kiri-ai/gpt2-large-quantized does not appear to have a file named pytorch_model.bin, tf_model.h5, model.ckpt or flax_model.msgpack.
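
The tracebacks show that the repo ships none of the standard weight files (pytorch_model.bin, tf_model.h5, model.ckpt, or flax_model.msgpack), so AutoModel.from_pretrained has nothing to load. A quick way to confirm what the repo actually contains is to list its files; a minimal sketch using huggingface_hub:

# Inspect which files the repo actually provides
from huggingface_hub import list_repo_files

print(list_repo_files("kiri-ai/gpt2-large-quantized"))

If no standard weight file shows up, the snippet from the docs cannot work as-is and the checkpoint has to be loaded some other way.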

It still fails, unfortunately, either on the site or locally:

  1. Web: "This model can be loaded on Inference API (serverless).
     Could not load model kiri-ai/gpt2-large-quantized with any of the following classes: (<class 'transformers.models.gpt2.modeling_gpt2.GPT2LMHeadModel'>, <class 'transformers.models.gpt2.modeling_tf_gpt2.TFGPT2LMHeadModel'>). See the original errors:

     while loading with GPT2LMHeadModel, an error is thrown:
     Traceback (most recent call last):
       File "/src/transformers/src/transformers/pipelines/base.py", line 279, in infer_framework_load_model
         model = model_class.from_pretrained(model, **kwargs)
       File "/src/transformers/src/transformers/modeling_utils.py", line 3236, in from_pretrained
         raise EnvironmentError(
     OSError: kiri-ai/gpt2-large-quantized does not appear to have a file named pytorch_model.bin, tf_model.h5, model.ckpt or flax_model.msgpack.

     while loading with TFGPT2LMHeadModel, an error is thrown:
     Traceback (most recent call last):
       File "/src/transformers/src/transformers/pipelines/base.py", line 279, in infer_framework_load_model
         model = model_class.from_pretrained(model, **kwargs)
       File "/src/transformers/src/transformers/modeling_tf_utils.py", line 2805, in from_pretrained
         raise EnvironmentError(
     OSError: kiri-ai/gpt2-large-quantized does not appear to have a file named pytorch_model.bin, tf_model.h5 or model.ckpt"

  2. Locally:

     ... return super().find_class(mod_name, name)
     AttributeError: Can't get attribute 'Block' on <module 'transformers.models.gpt2.modeling_gpt2' from 'C:\Users\....\AppData\Local\Programs\Python\Python39\lib\site-packages\transformers\models\gpt2\modeling_gpt2.py'>
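
The second error is an unpickling failure: the checkpoint appears to be a whole model object saved with torch.save, and unpickling it requires the class definitions that existed when it was saved. modeling_gpt2's Block class was later renamed GPT2Block in a GPT2 refactor, so recent transformers releases can no longer resolve the name. A minimal workaround sketch, assuming an old enough transformers release is installed (one where modeling_gpt2 still defines Block) and that the repo contains a pickled checkpoint; the filename model.bin below is a guess, so check the file listing first:

# Load the pickled checkpoint directly instead of via from_pretrained.
# Requires a transformers version in which modeling_gpt2 still defines 'Block'
# (the class name the pickle references); newer versions renamed it GPT2Block.
from huggingface_hub import hf_hub_download
import torch

path = hf_hub_download("kiri-ai/gpt2-large-quantized", "model.bin")  # filename is an assumption
model = torch.load(path, map_location="cpu")  # unpickles the full module
model.eval()

If pinning an old transformers release is not an option, another route is to dynamically quantize gpt2-large yourself with torch.quantization.quantize_dynamic, which sidesteps the pickle compatibility problem entirely.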
