mever-team/texture-crop

Paper

This repository contains the implementation code for the paper:

TextureCrop: Enhancing Synthetic Image Detection through Texture-based Cropping (available at arXiv:2402.19091)
Despina Konstantinidou, Christos Koutlis, Symeon Papadopoulos

Overview

Generative AI technologies produce increasingly realistic imagery, which, despite its potential for creative applications, can also be misused to produce misleading and harmful content. This renders Synthetic Image Detection (SID) methods essential for identifying AI-generated content online. State-of-the-art SID methods typically resize or center-crop input images due to architectural or computational constraints, which hampers the detection of artifacts that appear in high-resolution images. To address this limitation, we propose TextureCrop, an image pre-processing component that can be plugged into any pre-trained SID model to improve its performance. By focusing on high-frequency image parts where generative artifacts are prevalent, TextureCrop enhances SID performance with manageable memory requirements. Experimental results demonstrate a consistent improvement in AUC across various detectors: 6.1% over center cropping and 15% over resizing, on high-resolution images from the Forensynths, Synthbuster and TWIGMA datasets.

Figure 1. Overview of the TextureCrop Pipeline.
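The core idea can be sketched as follows. This is a minimal illustration only, not the repository's implementation: the function names (texture_score, texture_crops), the gradient-variance texture criterion, and the crop_size / stride / top_k defaults are all assumptions made for the sketch; the paper defines its own texture measure and selection policy.

```python
# Minimal sketch of texture-based crop selection (illustrative, NOT the repo's exact code).
# Assumption: a crop's "texture" is approximated by the variance of its gradient magnitude.
import numpy as np
from PIL import Image

def texture_score(crop: np.ndarray) -> float:
    """Score a crop by its high-frequency content (gradient-magnitude variance)."""
    gray = crop.mean(axis=2)                 # rough grayscale conversion
    gy, gx = np.gradient(gray)               # simple high-pass response via image gradients
    return float(np.var(np.hypot(gx, gy)))   # more texture -> higher variance

def texture_crops(image_path: str, crop_size: int = 224, stride: int = 224, top_k: int = 5):
    """Slide a window over the image and return the top_k most textured crops."""
    img = np.asarray(Image.open(image_path).convert("RGB"))
    h, w = img.shape[:2]
    scored = []
    for y in range(0, h - crop_size + 1, stride):
        for x in range(0, w - crop_size + 1, stride):
            crop = img[y:y + crop_size, x:x + crop_size]
            scored.append((texture_score(crop), crop))
    scored.sort(key=lambda t: t[0], reverse=True)  # most textured first
    return [crop for _, crop in scored[:top_k]]
```

Each selected crop would then be passed through the pre-trained detector and the per-crop outputs aggregated (e.g., averaged) into a single image-level prediction; see the paper for the exact scoring and aggregation choices.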

Setup

Clone the repository:

git clone https://github.com/mever-team/texture-crop

Create the environment:

conda create -n sid python=3.11
conda activate sid
conda install pytorch torchvision torchaudio pytorch-cuda=12.1 -c pytorch -c nvidia

pip install -r requirements.txt

Store the datasets in data/:

The data/ directory should look like:

data
├── forensynths
├── synthbuster
│   └── raise
├── twigma
└── openimagesdataset

Models

The following models have been evaluated. Note that for some of these models, there are multiple pretrained instances.

| Model Name | Parameters | Paper Title | Original Code |
| --- | --- | --- | --- |
| GramNet | - | Global Texture Enhancement for Fake Face Detection In the Wild | 🔗 |
| CNNDetect | 0.1 and 0.5 | CNN-generated images are surprisingly easy to spot...for now | 🔗 |
| GANID | ProGAN and StyleGAN2 | On the detection of synthetic images generated by diffusion models | 🔗 |
| DIMD | - | On the detection of synthetic images generated by diffusion models | 🔗 |
| UnivFD | - | Towards Universal Fake Image Detectors that Generalize Across Generative Models | 🔗 |
| RINE | 4 and LDM | Leveraging Representations from Intermediate Encoder-blocks for Synthetic Image Detection | 🔗 |
| PatchCraft | - | PatchCraft: Exploring Texture Patch for Efficient AI-generated Image Detection | 🔗 |

Usage

Run a Demo

To evaluate all processing methods on an image using a specific model, run:

python demo.py --method method --parameter parameter --image_path image
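For example (the argument values below are illustrative; check demo.py for the accepted method names and parameter values):

python demo.py --method rine --parameter ldm --image_path path/to/image.png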

Evaluation

To evaluate a single model using a specific processing method (resize, centercrop, tencrop, or texture_crop), run:

python val.py --processing_method processing_method --method method --parameter parameter
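For example (again with illustrative argument values):

python val.py --processing_method texture_crop --method rine --parameter ldm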

To evaluate all models using all processing methods, run:

bash val.sh

Ablation study

To reproduce the ablation study, run:

bash ablations.sh

Contact

Despina Konstantinidou ([email protected])
