KeyError: 'in_ch' when loading model from pretrained.
#18 · opened by cfchase
When running the sample from the model card or `example_inference.py`, I receive a `KeyError: 'in_ch'`, and the config appears to be `{}`.
```
Traceback (most recent call last):
  File "/home/cchase/git/huggingface/RMBG-1.4/example_inference.py", line 39, in <module>
    example_inference()
  File "/home/cchase/git/huggingface/RMBG-1.4/example_inference.py", line 14, in example_inference
    net = BriaRMBG.from_pretrained("briaai/RMBG-1.4")
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/cchase/git/huggingface/RMBG-1.4/venv/lib/python3.12/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
    return fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^
  File "/home/cchase/git/huggingface/RMBG-1.4/venv/lib/python3.12/site-packages/huggingface_hub/hub_mixin.py", line 277, in from_pretrained
    instance = cls._from_pretrained(
               ^^^^^^^^^^^^^^^^^^^^^
  File "/home/cchase/git/huggingface/RMBG-1.4/venv/lib/python3.12/site-packages/huggingface_hub/hub_mixin.py", line 485, in _from_pretrained
    model = cls(**model_kwargs)
            ^^^^^^^^^^^^^^^^^^^
  File "/home/cchase/git/huggingface/RMBG-1.4/briarmbg.py", line 352, in __init__
    in_ch=config["in_ch"]
          ~~~~~~^^^^^^^^^
KeyError: 'in_ch'
```
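For context, the failure mode is just an empty dict being subscripted. A minimal standalone sketch (not tied to the repo) reproducing it:

```python
# Minimal reproduction: from_pretrained apparently ends up with an
# empty config dict, and subscripting it raises KeyError.
config = {}  # what the model constructor receives

try:
    in_ch = config["in_ch"]  # same lookup as briarmbg.py line 352
except KeyError as exc:
    print(f"KeyError: {exc}")  # -> KeyError: 'in_ch'
```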
```
$ python --version
Python 3.12.2

$ pip list | grep -E "torch|torchvision|pillow|numpy|typing|scikit-image|huggingface_hub"
numpy              1.26.4
pillow             10.2.0
scikit-image       0.22.0
torch              2.2.1
torchvision        0.17.1
typing             3.7.4.3
typing_extensions  4.10.0
```
Hardcoding `in_ch` and `out_ch` (or manually setting the config) seems to work, however:

```python
in_ch=3  # config["in_ch"]
out_ch=1  # config["out_ch"]
```
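A minimal sketch of that "manually setting the config" workaround, assuming `BriaRMBG` takes a config dict as in `briarmbg.py` (the class here is a stub standing in for the real network, not the actual implementation):

```python
# Hedged sketch: fall back to hardcoded defaults when the loaded
# config is empty, instead of subscripting it directly.
class BriaRMBG:  # stub mirroring the constructor in briarmbg.py
    def __init__(self, config=None):
        config = config or {}
        self.in_ch = config.get("in_ch", 3)    # RGB input
        self.out_ch = config.get("out_ch", 1)  # single-channel mask

net = BriaRMBG(config={})  # an empty config no longer raises
print(net.in_ch, net.out_ch)  # -> 3 1
```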
Am I missing a step, or is there some prerequisite I'm not meeting?
Will be fixed by https://huggingface.co/briaai/RMBG-1.4/discussions/19. Thanks for the report!
Now fixed!
Works for me!
cfchase changed discussion status to closed