---
tags:
  - adversarial
  - image-classification
  - robustness
  - deep-learning
  - computer-vision
task_categories:
  - image-classification
model:
  - lens-ai/clip-vit-base-patch32_pcam_finetuned
---

# Adversarial PCAM Dataset

This dataset contains adversarial examples generated using various attack techniques on PatchCamelyon (PCAM) images. The adversarial images were crafted to fool the fine-tuned model `lens-ai/clip-vit-base-patch32_pcam_finetuned`.

Researchers and engineers can use this dataset to:

- Evaluate model robustness against adversarial attacks
- Train models with adversarial data for improved resilience
- Benchmark new adversarial defense mechanisms

## 📂 Dataset Structure

```
organized_dataset/
├── train/
│   ├── 0/  # Negative samples (adversarial images only)
│   │   └── adv_0_labelfalse_pred1_SquareAttack.png
│   └── 1/  # Positive samples (adversarial images only)
│       └── adv_1_labeltrue_pred0_SquareAttack.png
├── originals/  # Original images
│   ├── orig_0_labelfalse_SquareAttack.png
│   └── orig_1_labeltrue_SquareAttack.png
├── perturbations/  # Perturbation masks
│   ├── perturbation_0_SquareAttack.png
│   └── perturbation_1_SquareAttack.png
└── dataset.json
```
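
Since `train/` follows the one-folder-per-class layout, it can be loaded directly with `torchvision.datasets.ImageFolder`. A minimal sketch (the 224x224 resize matches the Usage example below and is an assumption about the target model's input size):

```python
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),  # assumed model input size, as in the Usage example
    transforms.ToTensor(),
])

# Folder names "0" and "1" are mapped to class indices 0 and 1 automatically
adv_train = datasets.ImageFolder("organized_dataset/train", transform=transform)
loader = DataLoader(adv_train, batch_size=32, shuffle=True)

images, labels = next(iter(loader))
print(images.shape, labels[:8])  # torch.Size([32, 3, 224, 224]) plus a batch of 0/1 labels
```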

Each adversarial example consists of:

- `train/{0,1}/adv_{id}_label{true/false}_pred{pred_label}_{attack_name}.png` → adversarial image, named with the model's prediction
- `originals/orig_{id}_label{true/false}_{attack_name}.png` → original image before perturbation
- `perturbations/perturbation_{id}_{attack_name}.png` → the perturbation applied to the original image
- The attack name in each filename indicates which method was used (see the parsing sketch below)
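
If only the filenames are at hand, the same fields can be recovered by parsing the naming scheme above. A small sketch for adversarial images (the helper name is illustrative):

```python
import re
from pathlib import Path

ADV_PATTERN = re.compile(
    r"adv_(?P<id>\d+)_label(?P<label>true|false)_pred(?P<pred>\d+)_(?P<attack>.+)\.png"
)

def parse_adv_filename(path):
    """Extract sample id, true label, predicted label, and attack name from a filename."""
    m = ADV_PATTERN.fullmatch(Path(path).name)
    if m is None:
        raise ValueError(f"Unexpected filename format: {path}")
    return {
        "id": int(m["id"]),
        "label": 1 if m["label"] == "true" else 0,  # labelfalse -> 0, labeltrue -> 1
        "prediction": int(m["pred"]),
        "attack": m["attack"],
    }

print(parse_adv_filename("train/0/adv_1_labelfalse_pred1_SquareAttack.png"))
# {'id': 1, 'label': 0, 'prediction': 1, 'attack': 'SquareAttack'}
```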

The `dataset.json` file contains detailed metadata for each sample, including:

```json
{
    "attack": "SquareAttack",
    "type": "black_box_attacks",
    "perturbation": "perturbations/perturbation_1_SquareAttack.png",
    "adversarial": "train/0/adv_1_labelfalse_pred1_SquareAttack.png",
    "original": "originals/orig_1_labelfalse_SquareAttack.png",
    "label": 0,
    "prediction": 1
}
```
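
Because each entry stores both the true label and the model's prediction, per-attack statistics can be recomputed from the metadata alone. A sketch, assuming `dataset.json` exposes a flat list of entries like the one above (adjust the access path if it is nested under split keys, as in the Usage section below):

```python
import json
from collections import defaultdict

with open("organized_dataset/dataset.json") as f:
    entries = json.load(f)  # e.g. json.load(f)["train"]["rows"] if nested by split

fooled, total = defaultdict(int), defaultdict(int)
for entry in entries:
    total[entry["attack"]] += 1
    if entry["prediction"] != entry["label"]:
        fooled[entry["attack"]] += 1

for attack in sorted(total):
    print(f"{attack}: {100 * fooled[attack] / total[attack]:.1f}% of stored samples misclassified")
```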

## 🔹 Attack Types

The dataset groups its adversarial attacks into two sets, labelled black-box and non-black-box.

1️⃣ Black-Box Attacks

These attacks do not require access to model gradients:

#### 🔹 HopSkipJump Attack

- Query-efficient, decision-based black-box attack
- Estimates the gradient direction by approximating the decision boundary

#### 🔹 ZOO Attack

- Zeroth-Order Optimization (ZOO) attack
- Estimates gradients via finite-difference queries (see the sketch below)
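
To illustrate the zeroth-order idea, here is a minimal sketch of coordinate-wise finite-difference gradient estimation using only forward queries; `score_fn` is a placeholder for a black-box model score and is not part of this dataset:

```python
import torch

def estimate_gradient(score_fn, x, coords, eps=1e-3):
    """Estimate d(score)/d(x) for selected coordinates with central differences."""
    grad = torch.zeros_like(x)
    flat_grad = grad.view(-1)
    for idx in coords:
        e = torch.zeros_like(x)
        e.view(-1)[idx] = eps
        flat_grad[idx] = (score_fn(x + e) - score_fn(x - e)) / (2 * eps)
    return grad

# Toy example: a stand-in score function and a random "image"
score_fn = lambda x: (x ** 2).sum()
x = torch.rand(3, 8, 8)
coords = torch.randint(0, x.numel(), (16,))  # ZOO-style: only a few coordinates per step
g = estimate_gradient(score_fn, x, coords)
```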

2️⃣ Non-Black-Box Attacks

These attacks require access to model gradients:

#### 🔹 SimBA (Simple Black-box Attack)

- Perturbs the input along randomly chosen directions, keeping changes that lower the model's confidence in the true class (see the sketch below)
- Designed to keep query complexity low
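
A rough sketch of a single SimBA-style step in the pixel basis; `prob_fn` is a placeholder for a function that queries the model and returns class probabilities:

```python
import torch

def simba_step(prob_fn, x, label, eps=0.05):
    """Try a random single-pixel perturbation in both directions and keep the
    one (if any) that lowers the probability assigned to the true label."""
    idx = torch.randint(0, x.numel(), (1,)).item()
    q = torch.zeros_like(x)
    q.view(-1)[idx] = eps
    p = prob_fn(x)[label]
    for candidate in (x + q, x - q):
        if prob_fn(candidate)[label] < p:
            return candidate.clamp(0, 1)
    return x  # neither direction helped; the next step tries another pixel

# Toy usage with a stand-in probability function
prob_fn = lambda x: torch.softmax(torch.stack([x.mean(), -x.mean()]), dim=0)
x_adv = simba_step(prob_fn, torch.rand(3, 8, 8), label=0)
```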

#### 🔹 Boundary Attack

- Decision-based attack that walks along the decision boundary
- Gradually minimizes the perturbation size

#### 🔹 Spatial Transformation Attack

- Uses rotation, scaling, and translation (see the sketch below)
- No pixel-level perturbations required
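
A minimal sketch of this approach: grid-search small rotations and translations and keep the first transform that flips the prediction. `predict_fn` is a placeholder for the model, and the grid values are illustrative rather than the ones used to build this dataset:

```python
import itertools
import torchvision.transforms.functional as TF

def spatial_attack(predict_fn, x, label, angles=(-10, -5, 5, 10), shifts=(-3, 0, 3)):
    """Search rotations (degrees) and pixel translations of image tensor x
    (shape (3, H, W)) for a transform that changes the predicted class."""
    for angle, dx, dy in itertools.product(angles, shifts, shifts):
        x_t = TF.affine(x, angle=angle, translate=[dx, dy], scale=1.0, shear=[0.0])
        if predict_fn(x_t) != label:
            return x_t, (angle, dx, dy)
    return None, None  # no transform in the grid fooled the model
```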

## Usage

```python
import json
from pathlib import Path

import torch
from PIL import Image
from torchvision import transforms

# Load the dataset metadata (entries use the keys shown in the example above)
with open('organized_dataset/dataset.json', 'r') as f:
    dataset_info = json.load(f)["train"]["rows"]  # rows of the train split

# Define the input transformation
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor()
])

def load_image(image_path):
    """Load an image from disk and convert it to a (3, 224, 224) tensor."""
    img = Image.open(image_path).convert("RGB")
    return transform(img)

# Example: load each related triple (adversarial, original, perturbation)
for entry in dataset_info:
    # Adversarial image
    adv_path = Path('organized_dataset') / entry['adversarial']
    adv_image = load_image(adv_path)

    # Original image
    orig_path = Path('organized_dataset') / entry['original']
    orig_image = load_image(orig_path)

    # Perturbation mask, if present
    if entry.get('perturbation'):
        pert_path = Path('organized_dataset') / entry['perturbation']
        pert_image = load_image(pert_path)

    # Metadata
    attack_type = entry['attack']
    label = entry['label']
    prediction = entry['prediction']

    print(f"Attack: {attack_type}")
    print(f"True Label: {label}")
    print(f"Model Prediction: {prediction}")
    print(f"Adversarial image shape: {adv_image.shape}")  # torch.Size([3, 224, 224])
```

## 📊 Attack Success Rates

Success rate for each attack against the target model (values are percentages):

```json
{
    "HopSkipJump": {"success_rate": 14},
    "Zoo_Attack": {"success_rate": 22},
    "SimBA": {"success_rate": 99},
    "Boundary_Attack": {"success_rate": 98},
    "SpatialTransformation_Attack": {"success_rate": 99}
}
```

## Citation

```bibtex
@misc{lensai2025adversarial,
  title  = {Adversarial PCAM Dataset},
  author = {LensAI Team},
  year   = {2025},
  url    = {https://huggingface.co/datasets/lens-ai/adversarial_pcam}
}
```