An image processor is in charge of preparing input features for vision models and post-processing their outputs. This includes transformations such as resizing, normalization, and conversion to PyTorch, TensorFlow, Flax, and NumPy tensors. It may also include model-specific post-processing such as converting logits to segmentation masks.
This is an image processor mixin used to provide saving/loading functionality for sequential and image feature extractors.
`( pretrained_model_name_or_path: typing.Union[str, os.PathLike], **kwargs )`

Parameters

- **pretrained_model_name_or_path** (`str` or `os.PathLike`) — This can be either:
  - a string, the model id of a pretrained image processor hosted in a model repo on huggingface.co, located either at the root level, like `bert-base-uncased`, or namespaced under a user or organization name, like `dbmdz/bert-base-german-cased`;
  - a path to a directory containing an image processor file saved with save_pretrained(), e.g., `../my_model_directory/`;
  - a path or URL to a saved image processor JSON file, e.g., `../my_model_directory/preprocessor_config.json`.
- **cache_dir** (`str` or `os.PathLike`, *optional*) — Path to a directory in which a downloaded pretrained model image processor should be cached if the standard cache should not be used.
- **force_download** (`bool`, *optional*, defaults to `False`) — Whether or not to force (re-)downloading the image processor files, overriding the cached versions if they exist.
- **resume_download** (`bool`, *optional*, defaults to `False`) — Whether or not to delete an incompletely received file. Attempts to resume the download if such a file exists.
- **proxies** (`Dict[str, str]`, *optional*) — A dictionary of proxy servers to use by protocol or endpoint, e.g., `{'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.
- **token** (`str` or `bool`, *optional*) — The token to use as HTTP bearer authorization for remote files. If `True` or not specified, will use the token generated when running `huggingface-cli login` (stored in `~/.huggingface`).
- **revision** (`str`, *optional*, defaults to `"main"`) — The specific model version to use. It can be a branch name, a tag name, or a commit id. Since huggingface.co uses a git-based system for storing models and other artifacts, `revision` can be any identifier allowed by git.
Instantiate a type of ImageProcessingMixin from an image processor.
Examples:

```python
from transformers import CLIPImageProcessor

# We can't instantiate directly the base class *ImageProcessingMixin*, so let's show the examples on a
# derived class: *CLIPImageProcessor*
image_processor = CLIPImageProcessor.from_pretrained(
    "openai/clip-vit-base-patch32"
)  # Download image_processing_config from huggingface.co and cache.
image_processor = CLIPImageProcessor.from_pretrained(
    "./test/saved_model/"
)  # E.g. image processor (or model) was saved using *save_pretrained('./test/saved_model/')*
image_processor = CLIPImageProcessor.from_pretrained("./test/saved_model/preprocessor_config.json")
image_processor = CLIPImageProcessor.from_pretrained(
    "openai/clip-vit-base-patch32", do_normalize=False, foo=False
)
assert image_processor.do_normalize is False
image_processor, unused_kwargs = CLIPImageProcessor.from_pretrained(
    "openai/clip-vit-base-patch32", do_normalize=False, foo=False, return_unused_kwargs=True
)
assert image_processor.do_normalize is False
assert unused_kwargs == {"foo": False}
```
`( save_directory: typing.Union[str, os.PathLike], push_to_hub: bool = False, **kwargs )`

Parameters

- **save_directory** (`str` or `os.PathLike`) — Directory where the image processor JSON file will be saved (will be created if it does not exist).
- **push_to_hub** (`bool`, *optional*, defaults to `False`) — Whether or not to push your model to the Hugging Face model hub after saving it. You can specify the repository you want to push to with `repo_id` (will default to the name of `save_directory` in your namespace).
- **kwargs** — Additional keyword arguments passed along to the push_to_hub() method.

Save an image processor object to the directory `save_directory`, so that it can be re-loaded using the from_pretrained() class method.
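Conceptually, saving an image processor boils down to serializing its configuration to a `preprocessor_config.json` file inside `save_directory`, which from_pretrained() can later read back. A minimal sketch of that round trip using only the standard library (the configuration keys here are an illustrative subset, not a complete real configuration):

```python
import json
import os
import tempfile

# Illustrative subset of an image processor configuration (assumed keys, not exhaustive).
config = {"do_normalize": True, "do_resize": True, "size": {"shortest_edge": 224}}

save_directory = tempfile.mkdtemp()

# save_pretrained() writes the configuration as preprocessor_config.json in save_directory.
config_path = os.path.join(save_directory, "preprocessor_config.json")
with open(config_path, "w") as f:
    json.dump(config, f, indent=2)

# from_pretrained() can later read the file back to rebuild the processor.
with open(config_path) as f:
    reloaded = json.load(f)

assert reloaded == config
```

The real method may also save extra files (e.g., when pushing to the Hub), but the JSON configuration is the piece from_pretrained() needs to reconstruct the object.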
`( data: typing.Union[typing.Dict[str, typing.Any], NoneType] = None, tensor_type: typing.Union[NoneType, str, transformers.utils.generic.TensorType] = None )`

Parameters

- **data** (`dict`) — Dictionary of lists/arrays/tensors returned by the `__call__`/`pad` methods (`'input_values'`, `'attention_mask'`, etc.).
- **tensor_type** (`Union[None, str, TensorType]`, *optional*) — You can give a `tensor_type` here to convert the lists of integers to PyTorch/TensorFlow/NumPy tensors at initialization.

Holds the output of the pad() and feature extractor-specific `__call__` methods.

This class is derived from a Python dictionary and can be used as a dictionary.
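Since the class derives from `dict`, the usual dictionary operations (indexing, iteration, membership tests) all work on it. A tiny hypothetical stand-in (not the real transformers class, which additionally supports attribute access and tensor conversion) illustrates the pattern:

```python
class DictFeature(dict):
    """Toy dict subclass mimicking how a BatchFeature-style class exposes its data."""

    def __init__(self, data=None):
        # Store the provided mapping directly in the underlying dict.
        super().__init__(data or {})


features = DictFeature({"pixel_values": [[0.1, 0.2]], "attention_mask": [[1, 1]]})

# Dict-style access and iteration work as expected.
assert features["pixel_values"] == [[0.1, 0.2]]
assert "attention_mask" in features
assert sorted(features.keys()) == ["attention_mask", "pixel_values"]
```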
`( tensor_type: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None )`

Parameters

- **tensor_type** (`str` or `TensorType`, *optional*) — The type of tensors to use. If `str`, should be one of the values of the enum `TensorType`. If `None`, no modification is done.

Convert the inner content to tensors.
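In spirit, the conversion maps each list-valued entry to a tensor of the requested framework. A sketch of the NumPy case, assuming NumPy is available (the real method also handles the `"pt"`, `"tf"`, and `"jax"` tensor types and leaves already-converted values alone):

```python
import numpy as np

data = {"pixel_values": [[0.5, 0.5, 0.5]], "attention_mask": [[1, 1, 1]]}

# Roughly what converting with tensor_type="np" does: each list becomes an ndarray.
converted = {key: np.array(value) for key, value in data.items()}

assert converted["pixel_values"].shape == (1, 3)
```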
`( *args, **kwargs )` → `BatchFeature`

Parameters

- **args** (`Tuple`) — Will be passed to the `to(...)` function of the tensors.
- **kwargs** (`Dict`, *optional*) — Will be passed to the `to(...)` function of the tensors.

Returns

`BatchFeature` — The same instance after modification.

Send all values to device by calling `v.to(*args, **kwargs)` (PyTorch only). This should support casting to different `dtype`s and sending the `BatchFeature` to a different `device`.
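The pattern behind this method is simply to forward `.to(*args, **kwargs)` to every tensor-valued entry and return the same instance. A pure-Python sketch with a stand-in tensor class (the real method operates on `torch.Tensor` values and this stand-in only records the call):

```python
class FakeTensor:
    """Stand-in for torch.Tensor that records the last .to() call it received."""

    def __init__(self):
        self.moved_to = None

    def to(self, *args, **kwargs):
        self.moved_to = (args, kwargs)
        return self


class MovableFeature(dict):
    """Toy dict subclass mirroring the delegation done by BatchFeature.to()."""

    def to(self, *args, **kwargs):
        # Delegate to each value's own .to(), then return self.
        for key, value in self.items():
            self[key] = value.to(*args, **kwargs)
        return self  # the same instance after modification


batch = MovableFeature({"pixel_values": FakeTensor()})
result = batch.to("cuda:0")

assert result is batch
assert batch["pixel_values"].moved_to == (("cuda:0",), {})
```

Because the same instance is returned, calls can be chained, e.g. `batch.to("cuda:0").to(some_dtype)`, just as with PyTorch tensors.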