
GenAI Manipulation Detection Dataset - Interior Design Images

📋 Dataset Description

This dataset contains 1,000 image pairs (one real and one manipulated image per pair, 2,000 images in total) for training and evaluating GenAI manipulation detection models. It was created for the MenaML Winter School 2026 Hackathon.

Dataset Summary

  • Total Images: 1000 pairs (2000 total images)
  • Image Size: 512x512
  • Format: JPEG
  • Source: Pinterest Interior Design Images (Kaggle)
  • License: MIT

🎯 Challenge Context

This dataset was created for Track B: Real Estate & Commercial Integrity of the MenaML Winter School 2026 GenAI Detection Challenge.

The challenge focuses on detecting:

  • ✅ Virtual staging (furniture replacement)
  • ✅ Texture smoothing (wall/surface manipulation)
  • ✅ Compression artifacts
  • ✅ Splicing and copy-move forgery
  • ✅ Physical impossibilities (shadow/reflection mismatches)

📂 Dataset Structure

dataset/
├── data/
│   ├── real/              # Original unmanipulated images
│   │   ├── real_000000.jpg
│   │   ├── real_000001.jpg
│   │   └── ...
│   └── fake/              # Manipulated images
│       ├── fake_000000.jpg
│       ├── fake_000001.jpg
│       └── ...
├── annotations.csv               # ⭐ Main annotations (2 rows per pair)
├── detailed_annotations.json     # Paired format annotations
├── metadata.json                 # Dataset statistics
└── README.md                     # This file

📊 Annotations Format

CSV Annotations (annotations.csv)

Each image pair contributes two rows: one for the real image and one for the fake.

file_name,image_path,label,is_manipulated,manipulation_category,manipulation_technique,manipulation_description,image_id,pair_id
real_000000.jpg,data/real/real_000000.jpg,real,0,none,none,Authentic unmanipulated image,000000,000000
fake_000000.jpg,data/fake/fake_000000.jpg,fake,1,smoothness_anomaly,bilateral_filter,Unnatural smoothness in walls/surfaces,000000,000000
real_000001.jpg,data/real/real_000001.jpg,real,0,none,none,Authentic unmanipulated image,000001,000001
fake_000001.jpg,data/fake/fake_000001.jpg,fake,1,compression_artifact,double_jpeg,Double JPEG compression,000001,000001

Columns:

  • file_name: Image filename
  • image_path: Relative path to image
  • label: "real" or "fake"
  • is_manipulated: 0 (real) or 1 (fake)
  • manipulation_category: Category (or "none" for real images)
    • smoothness_anomaly
    • compression_artifact
    • frequency_manipulation
    • splicing
    • physical_impossibility
  • manipulation_technique: Specific technique (or "none" for real images)
  • manipulation_description: Human-readable description
  • image_id: Unique ID for this image
  • pair_id: ID linking real and fake pairs (same for both images in a pair)

JSON Annotations (detailed_annotations.json)

Paired format for easier processing:

[
  {
    "pair_id": "000000",
    "real_image": {
      "filename": "real_000000.jpg",
      "path": "data/real/real_000000.jpg",
      "label": "real"
    },
    "fake_image": {
      "filename": "fake_000000.jpg",
      "path": "data/fake/fake_000000.jpg",
      "label": "fake",
      "manipulation": {
        "category": "smoothness_anomaly",
        "technique": "bilateral_filter",
        "description": "Unnatural smoothness in walls/surfaces"
      }
    }
  }
]
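
For quick category lookups, the paired JSON can be indexed directly. The sketch below inlines a one-pair sample mirroring the structure above (in practice, json.load the file itself):

```python
import json

# A minimal inline sample mirroring the detailed_annotations.json structure;
# in practice: pairs = json.load(open("detailed_annotations.json"))
sample = '''[
  {"pair_id": "000000",
   "real_image": {"filename": "real_000000.jpg", "path": "data/real/real_000000.jpg", "label": "real"},
   "fake_image": {"filename": "fake_000000.jpg", "path": "data/fake/fake_000000.jpg", "label": "fake",
                  "manipulation": {"category": "smoothness_anomaly", "technique": "bilateral_filter",
                                   "description": "Unnatural smoothness in walls/surfaces"}}}
]'''

pairs = json.loads(sample)

# Group pair_ids by manipulation category for quick lookup
by_category = {}
for pair in pairs:
    cat = pair["fake_image"]["manipulation"]["category"]
    by_category.setdefault(cat, []).append(pair["pair_id"])

print(by_category)  # {'smoothness_anomaly': ['000000']}
```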

📊 Manipulation Categories

  • compression_artifact: 191 images (19.1%)
  • smoothness_anomaly: 206 images (20.6%)
  • physical_impossibility: 192 images (19.2%)
  • frequency_manipulation: 209 images (20.9%)
  • splicing: 202 images (20.2%)

Detailed Technique Breakdown

  • compression_mismatch: 100 (10.0%)
  • bilateral_filter: 108 (10.8%)
  • texture_removal: 98 (9.8%)
  • reflection_inconsistency: 94 (9.4%)
  • upscaling: 74 (7.4%)
  • copy_move: 94 (9.4%)
  • frequency_injection: 70 (7.0%)
  • object_insertion: 108 (10.8%)
  • grid_artifact: 65 (6.5%)
  • shadow_mismatch: 98 (9.8%)
  • double_jpeg: 91 (9.1%)

🔧 Manipulation Techniques Explained

1️⃣ Smoothness Anomaly

  • bilateral_filter: Aggressive bilateral filtering creating unnatural smoothness
  • texture_removal: Edge-preserving filter that removes texture detail
  • Detection: Texture analysis, high-frequency loss detection
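As a rough illustration of the texture-analysis idea, the variance of a discrete Laplacian drops sharply on over-smoothed regions. A minimal numpy sketch, with synthetic arrays standing in for real grayscale images:

```python
import numpy as np

def laplacian_variance(gray):
    # Discrete Laplacian via finite differences; a sharp drop in its
    # variance is the high-frequency loss that aggressive filtering causes
    lap = (-4 * gray[1:-1, 1:-1]
           + gray[:-2, 1:-1] + gray[2:, 1:-1]
           + gray[1:-1, :-2] + gray[1:-1, 2:])
    return float(lap.var())

rng = np.random.default_rng(0)
textured = rng.random((64, 64))                 # stand-in for a detailed surface
smoothed = np.full((64, 64), textured.mean())   # stand-in for an over-filtered one
print(laplacian_variance(textured) > laplacian_variance(smoothed))  # True
```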

2️⃣ Compression Artifact

  • double_jpeg: Two rounds of JPEG compression with different quality levels
  • compression_mismatch: Regions with different compression quality
  • Detection: DCT coefficient analysis, block artifact detection
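The DCT-based detection idea starts with per-block 8×8 DCT coefficients, the domain where double-JPEG periodicity shows up. A minimal sketch using numpy and scipy on synthetic input (real detectors histogram individual coefficient positions rather than averaging magnitudes):

```python
import numpy as np
from scipy.fft import dctn

def block_dct_energy(gray):
    # Tile the image into 8x8 blocks and take each block's 2-D DCT
    h, w = (d - d % 8 for d in gray.shape)
    blocks = gray[:h, :w].reshape(h // 8, 8, w // 8, 8).swapaxes(1, 2)
    coeffs = dctn(blocks, axes=(-2, -1), norm="ortho")
    # Average |coefficient| per position across all blocks
    return np.abs(coeffs).mean(axis=(0, 1))

rng = np.random.default_rng(0)
energy = block_dct_energy(rng.random((64, 64)))
print(energy.shape)  # (8, 8)
```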

3️⃣ Frequency Manipulation

  • upscaling: Downscale→upscale creating bicubic interpolation signatures
  • frequency_injection: GAN-like ring patterns in frequency domain
  • grid_artifact: 8×8 grid patterns typical of GAN outputs
  • Detection: FFT analysis, power spectral density
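A common starting point for the FFT-based detection is the azimuthally averaged power spectrum, which turns upscaling dips and GAN-grid spikes into a one-dimensional feature. A minimal numpy sketch:

```python
import numpy as np

def radial_power_spectrum(gray, n_bins=32):
    # Azimuthally averaged power spectrum; upscaling shows up as a dip
    # and grid artifacts as spikes in the high-frequency bins
    f = np.fft.fftshift(np.fft.fft2(gray))
    power = np.abs(f) ** 2
    h, w = gray.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h / 2, x - w / 2)
    bins = np.linspace(0, r.max(), n_bins + 1)
    idx = np.clip(np.digitize(r.ravel(), bins) - 1, 0, n_bins - 1)
    spectrum = np.bincount(idx, weights=power.ravel(), minlength=n_bins)
    counts = np.bincount(idx, minlength=n_bins)
    return spectrum / np.maximum(counts, 1)

rng = np.random.default_rng(0)
spec = radial_power_spectrum(rng.random((64, 64)))
print(spec.shape)  # (32,)
```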

4️⃣ Splicing

  • copy_move: Copy region and paste elsewhere with compression mismatch
  • object_insertion: Insert objects with different compression characteristics
  • Detection: SIFT/ORB feature matching, noise inconsistency analysis

5️⃣ Physical Impossibility

  • shadow_mismatch: Inconsistent shadow directions
  • reflection_inconsistency: Reflections not matching room layout
  • Detection: VLM reasoning, physics-based validation

💻 Usage

Load All Data

import pandas as pd
from PIL import Image
# Load annotations (hf:// paths require the huggingface_hub package installed)
df = pd.read_csv("hf://datasets/FatimahEmadEldin/genai-manipulation-detection-interior/annotations.csv")
print(f"Total images: {len(df)}")
print(f"Real images: {len(df[df['label'] == 'real'])}")
print(f"Fake images: {len(df[df['label'] == 'fake'])}")
# Show manipulation distribution (fake images only)
fakes = df[df['label'] == 'fake']
print("\nManipulation categories:")
print(fakes['manipulation_category'].value_counts())

Filter by Manipulation Type

# Get only images with specific manipulation
smoothness_fakes = df[df['manipulation_category'] == 'smoothness_anomaly']
print(f"Smoothness anomalies: {len(smoothness_fakes)}")
# Get specific technique
bilateral_images = df[df['manipulation_technique'] == 'bilateral_filter']
print(f"Bilateral filter: {len(bilateral_images)}")
# Get all compression artifacts
compression = df[df['manipulation_category'] == 'compression_artifact']
techniques = compression['manipulation_technique'].value_counts()
print(techniques)

Load Image Pairs

# Get a pair by pair_id
pair_id = "000000"
pair = df[df['pair_id'] == pair_id]
real_row = pair[pair['label'] == 'real'].iloc[0]
fake_row = pair[pair['label'] == 'fake'].iloc[0]
# PIL cannot open hf:// paths directly; download the files first
from huggingface_hub import hf_hub_download
repo = "FatimahEmadEldin/genai-manipulation-detection-interior"
real_img = Image.open(hf_hub_download(repo, real_row['image_path'], repo_type="dataset"))
fake_img = Image.open(hf_hub_download(repo, fake_row['image_path'], repo_type="dataset"))
print(f"Real: {real_row['file_name']}")
print(f"Fake: {fake_row['file_name']}")
print(f"Manipulation: {fake_row['manipulation_technique']}")
print(f"Description: {fake_row['manipulation_description']}")

Training with Class Labels

# Create dataset with manipulation classes
fakes_only = df[df['label'] == 'fake'].copy()
# Map techniques to numeric labels
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
fakes_only['class_label'] = le.fit_transform(fakes_only['manipulation_technique'])
print("Class mapping:")
for i, tech in enumerate(le.classes_):
    print(f"  {i}: {tech}")
# Use for training
from torch.utils.data import Dataset
class ManipulationDataset(Dataset):
    def __init__(self, dataframe, transform=None):
        self.df = dataframe.reset_index(drop=True)
        self.transform = transform
        
    def __len__(self):
        return len(self.df)
    
    def __getitem__(self, idx):
        row = self.df.iloc[idx]
        img = Image.open(row['image_path'])
        
        if self.transform:
            img = self.transform(img)
        
        return {
            'image': img,
            'label': row['class_label'],
            'category': row['manipulation_category'],
            'technique': row['manipulation_technique']
        }

Binary Classification (Real vs Fake)

# Simple real vs fake
df['binary_label'] = df['is_manipulated']  # 0 = real, 1 = fake
# Or use the label column
df['binary_label'] = (df['label'] == 'fake').astype(int)
# Split by label
real_images = df[df['binary_label'] == 0]
fake_images = df[df['binary_label'] == 1]
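
When splitting this data for training, it helps to split by pair_id so a real image and its manipulated twin never straddle the train/validation boundary (which would leak near-duplicate content). A sketch on a hypothetical mini-frame standing in for annotations.csv:

```python
import numpy as np
import pandas as pd

# Hypothetical mini-frame standing in for annotations.csv
df = pd.DataFrame({
    "pair_id": ["000000", "000000", "000001", "000001", "000002", "000002"],
    "label": ["real", "fake"] * 3,
})

# Shuffle the unique pair_ids, then split so each pair stays on one side
rng = np.random.default_rng(42)
pair_ids = df["pair_id"].unique()
rng.shuffle(pair_ids)
cut = int(0.8 * len(pair_ids))
train = df[df["pair_id"].isin(pair_ids[:cut])]
val = df[df["pair_id"].isin(pair_ids[cut:])]
print(len(train), len(val))  # 4 2
```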

Multi-task Learning

# Train on both binary and multiclass
class MultiTaskDataset(Dataset):
    def __init__(self, dataframe, technique_to_idx, transform=None):
        self.df = dataframe.reset_index(drop=True)
        self.technique_to_idx = technique_to_idx
        self.transform = transform

    def __len__(self):
        return len(self.df)

    def __getitem__(self, idx):
        row = self.df.iloc[idx]
        img = Image.open(row['image_path'])
        
        if self.transform:
            img = self.transform(img)
        
        # Binary label
        binary_label = row['is_manipulated']
        
        # Multiclass label (only for fake images)
        if binary_label == 1:
            multiclass_label = self.technique_to_idx[row['manipulation_technique']]
        else:
            multiclass_label = -1  # Ignore for real images
        
        return {
            'image': img,
            'binary_label': binary_label,
            'multiclass_label': multiclass_label,
            'category': row['manipulation_category']
        }
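
One way to train on both labels at once is a shared backbone with two heads, masking the multiclass term for real images via ignore_index=-1 (matching the -1 sentinel above). A sketch of the loss computation with random logits standing in for model outputs:

```python
import torch
import torch.nn as nn

# Binary head: real vs fake; multiclass head: one of the 11 techniques
binary_loss_fn = nn.BCEWithLogitsLoss()
multi_loss_fn = nn.CrossEntropyLoss(ignore_index=-1)  # skips real images

binary_logits = torch.randn(4)           # one score per image
multi_logits = torch.randn(4, 11)        # 11 manipulation techniques
binary_labels = torch.tensor([0., 1., 1., 0.])
multi_labels = torch.tensor([-1, 3, 7, -1])  # -1 for the real images

loss = (binary_loss_fn(binary_logits, binary_labels)
        + multi_loss_fn(multi_logits, multi_labels))
print(loss.item() > 0)
```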

📝 Annotations Format

The dataset includes two annotation files for different use cases:

1. annotations.csv - Flat Format

Contains 2 entries per pair (one for real, one for fake):

| Column | Description | Example Values |
|---|---|---|
| file_name | Image filename | real_000000.jpg, fake_000000.jpg |
| image_path | Relative path | data/real/real_000000.jpg |
| label | Real or fake | real, fake |
| is_manipulated | Binary flag | 0 (real), 1 (fake) |
| manipulation_category | Category | smoothness_anomaly, compression_artifact, etc. |
| manipulation_technique | Specific technique | bilateral_filter, double_jpeg, etc. |
| manipulation_description | Human-readable description | "Unnatural smoothness in walls..." |
| image_id | Unique image ID | 000000 |
| pair_id | Links real/fake pairs | 000000 (same for both in pair) |

Use this for:

  • Quick filtering by label or technique
  • Training binary classifiers
  • Training multiclass classifiers
  • Statistical analysis

2. detailed_annotations.json - Paired Format

Contains a paired structure (one entry per pair). Use this for:

  • Contrastive learning
  • Side-by-side comparison
  • Paired image processing
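
For the contrastive use case, a paired Dataset can yield (real, fake) tuples straight from the JSON entries. A hypothetical helper (PairDataset is not part of the dataset; paths assume a local copy):

```python
from PIL import Image
from torch.utils.data import Dataset

class PairDataset(Dataset):
    """Yields (real, fake) PIL image pairs for contrastive objectives,
    assuming paths are relative to a local copy of the dataset."""
    def __init__(self, pairs, transform=None):
        self.pairs = pairs           # list parsed from detailed_annotations.json
        self.transform = transform

    def __len__(self):
        return len(self.pairs)

    def __getitem__(self, idx):
        pair = self.pairs[idx]
        real = Image.open(pair["real_image"]["path"])
        fake = Image.open(pair["fake_image"]["path"])
        if self.transform:
            real, fake = self.transform(real), self.transform(fake)
        return real, fake

# Usage sketch (no images opened here):
dummy = [{"real_image": {"path": "data/real/real_000000.jpg"},
          "fake_image": {"path": "data/fake/fake_000000.jpg"}}]
print(len(PairDataset(dummy)))  # 1
```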

Manipulation Categories

  1. smoothness_anomaly (20%)

    • bilateral_filter: Aggressive bilateral filtering
    • texture_removal: Edge-preserving texture removal
  2. compression_artifact (20%)

    • double_jpeg: Multiple JPEG compression
    • compression_mismatch: Regional quality differences
  3. frequency_manipulation (20%)

    • upscaling: Bicubic upscaling artifacts
    • frequency_injection: GAN-like frequency patterns
    • grid_artifact: 8×8 grid patterns
  4. splicing (20%)

    • copy_move: Copy-paste forgery
    • object_insertion: Object insertion with mismatches
  5. physical_impossibility (20%)

    • shadow_mismatch: Inconsistent shadow directions
    • reflection_inconsistency: Impossible reflections

🎓 Citation

If you use this dataset in your research or hackathon submission, please cite:

@dataset{genai_manipulation_interior_2026,
  title={GenAI Manipulation Detection Dataset - Interior Design},
  author={MenaML Winter School 2026 - Team ArtifactDetect},
  year={2026},
  publisher={HuggingFace},
  howpublished={\url{https://huggingface.co/datasets/FatimahEmadEldin/genai-manipulation-detection-interior}}
}

📜 License

This dataset is released under the MIT License. The source images come from the Pinterest Interior Design Images dataset on Kaggle, also used under the MIT license.

🏆 Hackathon Information

  • Event: MenaML Winter School 2026
  • Challenge: Detecting GenAI & Sophisticated Manipulation in Public Media
  • Track: B - Real Estate & Commercial Integrity
  • Deadline: January 28, 2026

🙏 Acknowledgments
