A Coding Guide to Demonstrate Targeted Data Poisoning Attacks in Deep Learning by Label Flipping on CIFAR-10 with PyTorch

By Editorial Team | January 11, 2026 | 5 Min Read


In this tutorial, we demonstrate a practical data poisoning attack by manipulating labels in the CIFAR-10 dataset and observing its impact on model behavior. We construct a clean and a poisoned training pipeline side by side, using a ResNet-style convolutional network to ensure stable, comparable learning dynamics. By selectively flipping a fraction of samples from a target class to a malicious class during training, we show how subtle corruption in the data pipeline can propagate into systematic misclassification at inference time.

import torch
import torch.nn as nn
import torch.optim as optim
import torchvision
import torchvision.transforms as transforms
from torch.utils.data import DataLoader, Dataset
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.metrics import confusion_matrix, classification_report


CONFIG = {
    "batch_size": 128,
    "epochs": 10,
    "lr": 0.001,
    "target_class": 1,
    "malicious_label": 9,
    "poison_ratio": 0.4,
    "device": "cuda" if torch.cuda.is_available() else "cpu",
}


torch.manual_seed(42)
np.random.seed(42)

We set up the core environment required for the experiment and define all global configuration parameters in one place. We ensure reproducibility by fixing random seeds across PyTorch and NumPy. We also explicitly select the compute device so the tutorial runs efficiently on both CPU and GPU.

class PoisonedCIFAR10(Dataset):
    def __init__(self, original_dataset, target_class, malicious_label, ratio, is_train=True):
        self.dataset = original_dataset
        self.targets = np.array(original_dataset.targets)
        self.is_train = is_train
        if is_train and ratio > 0:
            # Flip a random subset of target-class labels to the malicious label.
            indices = np.where(self.targets == target_class)[0]
            n_poison = int(len(indices) * ratio)
            poison_indices = np.random.choice(indices, n_poison, replace=False)
            self.targets[poison_indices] = malicious_label

    def __getitem__(self, index):
        img, _ = self.dataset[index]
        return img, self.targets[index]

    def __len__(self):
        return len(self.dataset)

We implement a custom dataset wrapper that enables controlled label poisoning during training. We selectively flip a configurable fraction of samples from the target class to a malicious class while keeping the test data untouched. We preserve the original image data so that only label integrity is compromised.
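As a quick sanity check of the flipping logic in the wrapper, the same NumPy operations can be run on a synthetic stand-in for CIFAR-10's labels (a toy array of 100 samples per class, not the real dataset) to confirm that only the target class is affected:

```python
import numpy as np

np.random.seed(42)

# Synthetic label array mimicking a balanced 10-class dataset: 100 samples per class.
targets = np.repeat(np.arange(10), 100)

target_class, malicious_label, ratio = 1, 9, 0.4

# Same flipping logic as in PoisonedCIFAR10.__init__.
indices = np.where(targets == target_class)[0]
n_poison = int(len(indices) * ratio)
poison_indices = np.random.choice(indices, n_poison, replace=False)

poisoned = targets.copy()
poisoned[poison_indices] = malicious_label

print((poisoned == target_class).sum())     # 60: class-1 labels left intact
print((poisoned == malicious_label).sum())  # 140: original 100 class-9 labels plus 40 flipped
```

With `ratio=0.4`, exactly 40 of the 100 target-class labels are rewritten, and every other class count is unchanged, which is the "only label integrity is compromised" property described above.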

def get_model():
    model = torchvision.models.resnet18(num_classes=10)
    # Adapt ResNet-18 to 32x32 CIFAR-10 inputs: smaller first conv, no initial max-pool.
    model.conv1 = nn.Conv2d(3, 64, kernel_size=3, stride=1, padding=1, bias=False)
    model.maxpool = nn.Identity()
    return model.to(CONFIG["device"])


def train_and_evaluate(train_loader, description):
    model = get_model()
    optimizer = optim.Adam(model.parameters(), lr=CONFIG["lr"])
    criterion = nn.CrossEntropyLoss()
    for _ in range(CONFIG["epochs"]):
        model.train()
        for images, labels in train_loader:
            images = images.to(CONFIG["device"])
            labels = labels.to(CONFIG["device"])
            optimizer.zero_grad()
            outputs = model(images)
            loss = criterion(outputs, labels)
            loss.backward()
            optimizer.step()
    return model

We define a lightweight ResNet-based model tailored to CIFAR-10 and implement the full training loop. We train the network using standard cross-entropy loss and Adam optimization to ensure stable convergence. We keep the training logic identical for clean and poisoned data to isolate the effect of data poisoning.

def get_predictions(model, loader):
    model.eval()
    preds, labels_all = [], []
    with torch.no_grad():
        for images, labels in loader:
            images = images.to(CONFIG["device"])
            outputs = model(images)
            _, predicted = torch.max(outputs, 1)
            preds.extend(predicted.cpu().numpy())
            labels_all.extend(labels.numpy())
    return np.array(preds), np.array(labels_all)


def plot_results(clean_preds, clean_labels, poisoned_preds, poisoned_labels, classes):
    fig, ax = plt.subplots(1, 2, figsize=(16, 6))
    for i, (preds, labels, title) in enumerate([
        (clean_preds, clean_labels, "Clean Model Confusion Matrix"),
        (poisoned_preds, poisoned_labels, "Poisoned Model Confusion Matrix")
    ]):
        cm = confusion_matrix(labels, preds)
        sns.heatmap(cm, annot=True, fmt="d", cmap="Blues", ax=ax[i],
                    xticklabels=classes, yticklabels=classes)
        ax[i].set_title(title)
    plt.tight_layout()
    plt.show()

We run inference on the test set and collect predictions for quantitative analysis. We compute confusion matrices to visualize class-wise behavior for both the clean and poisoned models. We use these visual diagnostics to highlight the targeted misclassification patterns introduced by the attack.

transform = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize((0.4914, 0.4822, 0.4465),
                         (0.2023, 0.1994, 0.2010))
])


base_train = torchvision.datasets.CIFAR10(root="./data", train=True, download=True, transform=transform)
base_test = torchvision.datasets.CIFAR10(root="./data", train=False, download=True, transform=transform)
classes = base_train.classes


clean_ds = PoisonedCIFAR10(base_train, CONFIG["target_class"], CONFIG["malicious_label"], ratio=0)
poison_ds = PoisonedCIFAR10(base_train, CONFIG["target_class"], CONFIG["malicious_label"], ratio=CONFIG["poison_ratio"])


clean_loader = DataLoader(clean_ds, batch_size=CONFIG["batch_size"], shuffle=True)
poison_loader = DataLoader(poison_ds, batch_size=CONFIG["batch_size"], shuffle=True)
test_loader = DataLoader(base_test, batch_size=CONFIG["batch_size"], shuffle=False)


clean_model = train_and_evaluate(clean_loader, "Clean Training")
poisoned_model = train_and_evaluate(poison_loader, "Poisoned Training")


c_preds, c_true = get_predictions(clean_model, test_loader)
p_preds, p_true = get_predictions(poisoned_model, test_loader)


plot_results(c_preds, c_true, p_preds, p_true, classes)


target = CONFIG["target_class"]
print(classification_report(c_true, c_preds, target_names=[classes[target]], labels=[target]))
print(classification_report(p_true, p_preds, target_names=[classes[target]], labels=[target]))

We prepare the CIFAR-10 dataset, construct clean and poisoned dataloaders, and execute both training pipelines end to end. We evaluate the trained models on a shared test set to ensure a fair comparison. We finalize the analysis by reporting class-specific precision and recall to reveal the impact of poisoning on the targeted class.
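Beyond precision and recall, the targeted nature of the attack can be quantified with an attack-success-rate helper (a hypothetical addition for illustration, not part of the tutorial's own code): the fraction of true target-class test samples that a model assigns the malicious label.

```python
import numpy as np

def attack_success_rate(preds, labels, target_class, malicious_label):
    """Fraction of true target-class samples predicted as the malicious label."""
    mask = labels == target_class
    if mask.sum() == 0:
        return 0.0
    return float((preds[mask] == malicious_label).mean())

# Toy example: four true class-1 samples, two of which are pushed to label 9.
labels = np.array([1, 1, 1, 1, 3, 9])
preds  = np.array([9, 1, 9, 1, 3, 9])
print(attack_success_rate(preds, labels, target_class=1, malicious_label=9))  # 0.5
```

Applied to `p_preds, p_true` from the pipeline above, this isolates exactly the failure mode the attack was designed to induce, which a single overall-accuracy number can easily hide.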

In conclusion, we observed how label-level data poisoning degrades class-specific performance without necessarily destroying overall accuracy. We analyzed this behavior using confusion matrices and per-class classification reports, which reveal the targeted failure modes introduced by the attack. This experiment reinforces the importance of data provenance, validation, and monitoring in real-world machine learning systems, especially in safety-critical domains.
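As a small illustration of the data-validation point, a label-distribution audit can flag the class imbalance that naive label flipping leaves behind. This is a minimal heuristic sketch (it assumes a roughly class-balanced dataset like CIFAR-10 and is not a complete defense):

```python
import numpy as np

def audit_label_distribution(targets, n_classes=10, tolerance=0.1):
    """Return classes whose sample count deviates from the uniform
    expectation by more than `tolerance` (relative). A simple heuristic,
    not a robust poisoning detector."""
    targets = np.asarray(targets)
    expected = len(targets) / n_classes
    counts = np.bincount(targets, minlength=n_classes)
    return [c for c in range(n_classes)
            if abs(counts[c] - expected) / expected > tolerance]

# Toy example mimicking a 40% flip from class 1 to class 9 on 100 samples/class.
targets = np.repeat(np.arange(10), 100)
targets[np.where(targets == 1)[0][:40]] = 9
print(audit_label_distribution(targets))  # [1, 9]
```

The flipped class shows a deficit and the malicious class a surplus, so even this crude check surfaces the attack configuration used in this tutorial; subtler attacks with low poison ratios would require stronger provenance and per-sample validation.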

