Official implementation of Charm: The Missing Piece in ViT fine-tuning for Image Aesthetic Assessment
We introduce Charm, a novel tokenization approach that preserves Composition, High-resolution, Aspect Ratio, and Multi-scale information simultaneously. By preserving this critical information, Charm works like a charm for image aesthetic and quality assessment.
Quick Inference
- Step 1) Check our GitHub Page and install the requirements.
```bash
pip install -r requirements.txt
```
- Step 2) Install Charm tokenizer.
```bash
pip install Charm-tokenizer
```
- Step 3) Tokenization + Position embedding preparation
Charm_Tokenizer takes the following input arguments:
- patch_selection (str): The method for selecting important patches.
- Options: 'saliency', 'random', 'frequency', 'gradient', 'entropy', 'original'.
- training_dataset (str): Used to set the number of ViT input tokens to match a specific training dataset from the paper.
- Aesthetic assessment datasets: 'aadb', 'tad66k', 'para', 'baid'.
- Quality assessment datasets: 'spaq', 'koniq10k'.
- backbone (str): The ViT backbone model (default: 'facebook/dinov2-small').
- factor (float): The downscaling factor for less important patches (default: 0.5).
- scales (int): The number of scales used for multiscale processing (default: 2).
- random_crop_size (tuple): Used for the 'original' patch selection strategy (default: (224, 224)).
- downscale_shortest_edge (int): Used for the 'original' patch selection strategy (default: 256).
- without_pad_or_dropping (bool): Whether to avoid padding or dropping patches (default: True).
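For reference, here is a minimal sketch of a call that spells out all of the defaults listed above (assuming the constructor accepts each of these as a keyword argument, as the inference example below does for a subset of them):

```python
from Charm_tokenizer.ImageProcessor import Charm_Tokenizer

# All optional arguments set to their documented defaults, spelled out for clarity.
charm_tokenizer = Charm_Tokenizer(
    patch_selection='frequency',        # deterministic strategy; see the note below
    training_dataset='tad66k',          # sets the number of ViT input tokens
    backbone='facebook/dinov2-small',   # default ViT backbone
    factor=0.5,                         # downscaling factor for less important patches
    scales=2,                           # number of scales for multiscale processing
    random_crop_size=(224, 224),        # only used with patch_selection='original'
    downscale_shortest_edge=256,        # only used with patch_selection='original'
    without_pad_or_dropping=True,       # avoid padding or dropping patches
)
```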
Note: While random patch selection during training helps avoid overfitting, a fully deterministic patch selection approach should be used during inference for consistent results (see the sketch after the tokenization example below).
The output is the preprocessed tokens, their corresponding positional embeddings, and a mask token indicating which patches are kept at high resolution and which are downscaled.
```python
from Charm_tokenizer.ImageProcessor import Charm_Tokenizer

img_path = r"img.png"

# Deterministic patch selection ('frequency') with the token budget of the TAD66K dataset.
charm_tokenizer = Charm_Tokenizer(patch_selection='frequency', training_dataset='tad66k', without_pad_or_dropping=True)
tokens, pos_embed, mask_token = charm_tokenizer.preprocess(img_path)
```
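As noted above, 'random' patch selection is mainly a training-time regularizer. A minimal sketch contrasting the two modes (same constructor arguments as in the example above):

```python
from Charm_tokenizer.ImageProcessor import Charm_Tokenizer

# Training: random selection varies the chosen patches, which helps avoid overfitting.
train_tokenizer = Charm_Tokenizer(patch_selection='random', training_dataset='tad66k')

# Inference: a deterministic strategy such as 'frequency' always selects the same
# patches for a given image, so predictions are reproducible.
eval_tokenizer = Charm_Tokenizer(patch_selection='frequency', training_dataset='tad66k')
```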
- Step 4) Predicting the aesthetic/quality score
If training_dataset is set to 'spaq' or 'koniq10k', the model predicts an image quality score; for the other options ('aadb', 'tad66k', 'para', 'baid'), it predicts an image aesthetic score.
Selecting a dataset with image resolutions similar to your input images can improve prediction accuracy.
For more details about the process, please refer to the paper.
```python
from Charm_tokenizer.Backbone import backbone

# Load the model fine-tuned on the same dataset used for tokenization.
model = backbone(training_dataset='tad66k', device='cpu')
prediction = model.predict(tokens, pos_embed, mask_token)
```
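The same two steps cover quality assessment; a minimal end-to-end sketch, assuming only the dataset name changes (here 'koniq10k', so the matching quality model is loaded and predict returns a quality score):

```python
from Charm_tokenizer.ImageProcessor import Charm_Tokenizer
from Charm_tokenizer.Backbone import backbone

# Tokenize with the token budget of the KonIQ-10k quality dataset.
tokenizer = Charm_Tokenizer(patch_selection='frequency', training_dataset='koniq10k')
tokens, pos_embed, mask_token = tokenizer.preprocess(r"img.png")

# Use the matching backbone so the model predicts an image quality score.
model = backbone(training_dataset='koniq10k', device='cpu')
score = model.predict(tokens, pos_embed, mask_token)
```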
Note: For the training code, check our GitHub Page.