|
--- |
|
tags: |
|
- adversarial |
|
- image-classification |
|
- robustness |
|
- deep-learning |
|
- computer-vision |
|
task_categories: |
|
- image-classification |
|
model: |
|
- lens-ai/clip-vit-base-patch32_pcam_finetuned |
|
--- |
|
|
|
# **Adversarial PCAM Dataset** |
|
This dataset contains adversarial examples generated using various attack techniques on **PatchCamelyon (PCAM)** images. The adversarial images were crafted to fool the fine-tuned model: |
|
**[lens-ai/clip-vit-base-patch32_pcam_finetuned](https://huggingface.co/lens-ai/clip-vit-base-patch32_pcam_finetuned)**. |
|
|
|
Researchers and engineers can use this dataset to: |
|
- Evaluate model robustness against adversarial attacks |
|
- Train models with adversarial data for improved resilience |
|
- Benchmark new adversarial defense mechanisms |
|
|
|
--- |
|
|
|
## **Dataset Structure**
|
```
organized_dataset/
├── train/
│   ├── 0/                                          # Negative samples (adversarial images only)
│   │   └── adv_0_labelfalse_pred1_SquareAttack.png
│   └── 1/                                          # Positive samples (adversarial images only)
│       └── adv_1_labeltrue_pred0_SquareAttack.png
├── originals/                                      # Original images
│   ├── orig_0_labelfalse_SquareAttack.png
│   └── orig_1_labeltrue_SquareAttack.png
├── perturbations/                                  # Perturbation masks
│   ├── perturbation_0_SquareAttack.png
│   └── perturbation_1_SquareAttack.png
└── dataset.json
```
|
|
|
Each adversarial example consists of: |
|
- `train/{0,1}/adv_{id}_label{true/false}_pred{pred_label}_{attack_name}.png` → **Adversarial image**, named with the model's prediction
- `originals/orig_{id}_label{true/false}_{attack_name}.png` → **Original image** before perturbation
- `perturbations/perturbation_{id}_{attack_name}.png` → **Perturbation applied** to the original image
- The **attack name** in each filename indicates which method was used (a filename parser is sketched below)
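
The naming convention is regular enough to parse mechanically. Here is a minimal sketch of such a parser; the regular expression is inferred from the example filenames above and is not part of any official tooling:

```python
import re
from pathlib import Path

# Pattern inferred from the example filenames shown above (hypothetical helper)
ADV_PATTERN = re.compile(
    r"adv_(?P<id>\d+)_label(?P<label>true|false)_pred(?P<pred>\d+)_(?P<attack>\w+)\.png"
)

def parse_adversarial_filename(path):
    """Extract sample id, true label, model prediction, and attack name."""
    match = ADV_PATTERN.fullmatch(Path(path).name)
    if match is None:
        raise ValueError(f"Unexpected filename: {path}")
    return {
        "id": int(match["id"]),
        "label": 1 if match["label"] == "true" else 0,
        "prediction": int(match["pred"]),
        "attack": match["attack"],
    }

# Example: parse one of the filenames from the tree above
print(parse_adversarial_filename("train/0/adv_0_labelfalse_pred1_SquareAttack.png"))
# {'id': 0, 'label': 0, 'prediction': 1, 'attack': 'SquareAttack'}
```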
|
|
|
The `dataset.json` file contains detailed metadata for each sample, including: |
|
```json |
|
{ |
|
"attack": "SquareAttack", |
|
"type": "black_box_attacks", |
|
"perturbation": "perturbations/perturbation_1_SquareAttack.png", |
|
"adversarial": "train/0/adv_1_labelfalse_pred1_SquareAttack.png", |
|
"original": "originals/orig_1_labelfalse_SquareAttack.png", |
|
"label": 0, |
|
"prediction": 1 |
|
} |
|
``` |
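
For a quick overview of the metadata, the entries can be tallied by attack and outcome. A minimal sketch, assuming `dataset.json` stores its entries under `["train"]["rows"]` as in the Usage section below:

```python
import json
from collections import Counter

with open("organized_dataset/dataset.json") as f:
    rows = json.load(f)["train"]["rows"]

# Samples per attack, and per (true label, model prediction) pair
print(Counter(row["attack"] for row in rows))
print(Counter((row["label"], row["prediction"]) for row in rows))
```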
|
|
|
--- |
|
|
|
## **Attack Types**
All attacks in this dataset are black-box: none of them reads the model's true gradients. They fall into two families, depending on whether gradient information is estimated from queries or not used at all.
|
|
|
### **1. Gradient-Estimation Attacks**
These attacks approximate gradient information from repeated model queries, without ever reading the model's internals:
|
|
|
#### **HopSkipJump Attack**
- Query-efficient decision-based attack that estimates gradient directions from hard-label outputs
- Based on locating and approximating the decision boundary (a bisection sketch follows)
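
The primitive underlying decision-based attacks like HopSkipJump is a bisection between a clean and an adversarial image to locate the decision boundary. A minimal sketch, where `predict` (a hard-label classifier returning 0 or 1 for an image tensor) is a placeholder:

```python
def bisect_to_boundary(predict, x_orig, x_adv, true_label, steps=25):
    """Binary-search along the line between a clean image and an
    adversarial one for a point just on the adversarial side."""
    lo, hi = 0.0, 1.0  # interpolation weight toward x_adv
    for _ in range(steps):
        mid = (lo + hi) / 2
        x_mid = (1 - mid) * x_orig + mid * x_adv
        if predict(x_mid) == true_label:
            lo = mid  # still classified correctly: move toward x_adv
        else:
            hi = mid  # adversarial: tighten the bracket from above
    return (1 - hi) * x_orig + hi * x_adv
```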
|
|
|
#### **ZOO Attack**
- Zeroth-Order Optimization (ZOO) attack
- Estimates gradients via finite differences over the model's output scores (sketched below)
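
The gradient estimate at the core of ZOO fits in a few lines. A minimal sketch, where `loss_fn` (a scalar attack loss over an image tensor) is a placeholder:

```python
import torch

def fd_gradient(loss_fn, x, h=1e-4, n_coords=128):
    """Estimate the gradient of loss_fn at x with symmetric finite
    differences over a random subset of coordinates."""
    grad = torch.zeros_like(x).view(-1)
    for i in torch.randperm(x.numel())[:n_coords]:
        e = torch.zeros_like(x).view(-1)
        e[i] = h
        e = e.view_as(x)
        grad[i] = (loss_fn(x + e) - loss_fn(x - e)) / (2 * h)
    return grad.view_as(x)
```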
|
|
|
### **2. Gradient-Free Attacks**
These attacks rely only on the model's predicted labels or confidence scores and perform no gradient estimation:
|
|
|
#### **SimBA (Simple Black-box Attack)**
- Perturbs one random direction at a time, keeping steps that lower the true-class confidence (see the sketch below)
- Very low per-query cost, which reduces overall query complexity
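
SimBA's inner loop is simple enough to sketch directly. `prob(x, y)`, the model's probability for class `y`, is a placeholder:

```python
import torch

def simba(prob, x, true_label, eps=0.2, n_iters=1000):
    """Greedy SimBA in the pixel basis: try +/-eps along one random
    coordinate per step, keeping changes that reduce p(true_label)."""
    x = x.clone()
    best = prob(x, true_label)
    for i in torch.randperm(x.numel())[:n_iters]:
        for sign in (eps, -eps):
            delta = torch.zeros_like(x).view(-1)
            delta[i] = sign
            candidate = (x + delta.view_as(x)).clamp(0, 1)
            p = prob(candidate, true_label)
            if p < best:
                x, best = candidate, p
                break
    return x
```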
|
|
|
#### **Boundary Attack**
- Decision-based attack that starts from an adversarial point and walks along the decision boundary (one step is sketched below)
- Progressively minimizes the perturbation size
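
One iteration of the Boundary Attack can be sketched as a random step along the sphere around the original image, followed by a small contraction toward it; the candidate is kept only if it stays misclassified. This is a simplified sketch, with `predict` again a hard-label placeholder:

```python
import torch

def boundary_step(predict, x_orig, x_adv, true_label, delta=0.1, eps=0.01):
    """One simplified Boundary Attack step around x_orig."""
    radius = (x_adv - x_orig).norm()
    # Random move, re-projected onto the current sphere around x_orig
    candidate = x_adv + delta * radius * torch.randn_like(x_adv) / x_adv.numel() ** 0.5
    candidate = x_orig + radius * (candidate - x_orig) / (candidate - x_orig).norm()
    # Contract toward the original image to shrink the perturbation
    candidate = x_orig + (1 - eps) * (candidate - x_orig)
    if predict(candidate) != true_label:
        return candidate  # still adversarial: accept the step
    return x_adv          # otherwise keep the previous iterate
```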
|
|
|
#### **Spatial Transformation Attack**
- Uses rotation, scaling, and translation instead of additive noise (a grid-search sketch follows)
- No pixel-level perturbations required
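
Spatial attacks need nothing beyond standard image operations: search over small rotations and translations for one that flips the prediction. A minimal grid-search sketch using torchvision, with `predict` again a placeholder:

```python
import itertools
import torchvision.transforms.functional as TF

def spatial_attack(predict, x, true_label):
    """Grid-search small rotations and translations for a transform
    that changes the model's prediction; return the first success."""
    for angle, dx, dy in itertools.product(
        range(-30, 31, 5), range(-3, 4), range(-3, 4)
    ):
        candidate = TF.affine(x, angle=angle, translate=[dx, dy],
                              scale=1.0, shear=[0.0])
        if predict(candidate) != true_label:
            return candidate
    return None  # no transform in the grid fooled the model
```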
|
|
|
--- |
|
|
|
## **Usage**
|
|
|
```python
import json
from pathlib import Path

from PIL import Image
from torchvision import transforms

DATASET_ROOT = Path("organized_dataset")

# Load the per-sample metadata (rows of the train split)
with open(DATASET_ROOT / "dataset.json", "r") as f:
    dataset_info = json.load(f)["train"]["rows"]

# Preprocessing applied to every image
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def load_image(image_path):
    """Load an image file and apply the preprocessing transform."""
    img = Image.open(image_path).convert("RGB")
    return transform(img)

# Example: load each related triple (adversarial, original, perturbation).
# Keys match the dataset.json entry shown above; paths are relative to the
# dataset root.
for entry in dataset_info:
    adv_image = load_image(DATASET_ROOT / entry["adversarial"])
    orig_image = load_image(DATASET_ROOT / entry["original"])

    # Load the perturbation mask if one was saved for this sample
    if entry.get("perturbation"):
        pert_image = load_image(DATASET_ROOT / entry["perturbation"])

    # Access metadata
    attack_type = entry["attack"]
    label = entry["label"]
    prediction = entry["prediction"]

    print(f"Attack: {attack_type}")
    print(f"True Label: {label}")
    print(f"Model Prediction: {prediction}")
    print(f"Adversarial image shape: {tuple(adv_image.shape)}")  # (3, 224, 224)
```
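
As a quick sanity check on a loaded pair, the distance between the original and adversarial tensors gives a feel for how visible each attack's perturbation is (how the stored perturbation PNGs are scaled is not documented here, so comparing the two images directly is the safer diagnostic):

```python
# Diagnostic on the last loaded pair: perturbation magnitude
diff = (adv_image - orig_image).abs()
print(f"L-inf distance: {diff.max().item():.4f}")
print(f"L2 distance:    {diff.norm().item():.4f}")
```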
|
|
|
## **Attack Success Rates**
Success rate (in percent) of each attack against the target model:
|
```json |
|
{ |
|
"HopSkipJump": {"success_rate": 14}, |
|
"Zoo_Attack": {"success_rate": 22}, |
|
"SimBA": {"success_rate": 99}, |
|
"Boundary_Attack": {"success_rate": 98}, |
|
"SpatialTransformation_Attack": {"success_rate": 99} |
|
} |
|
``` |
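
If `dataset.json` also records unsuccessful attempts, these rates can be recomputed directly from the metadata, counting a sample as a success when the model's prediction differs from the true label. A sketch under that assumption:

```python
import json
from collections import defaultdict

with open("organized_dataset/dataset.json") as f:
    rows = json.load(f)["train"]["rows"]

totals, successes = defaultdict(int), defaultdict(int)
for row in rows:
    totals[row["attack"]] += 1
    successes[row["attack"]] += int(row["prediction"] != row["label"])

for attack, n in totals.items():
    print(f"{attack}: {100 * successes[attack] / n:.0f}%")
```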
|
|
|
## **Citation**
```bibtex
@misc{lensai2025adversarial,
  title  = {Adversarial PCAM Dataset},
  author = {{LensAI Team}},
  year   = {2025},
  url    = {https://huggingface.co/datasets/lens-ai/adversarial_pcam}
}
```
|
|