Resilient Machine Learning Frameworks: Strategies for Mitigating Data Poisoning Vulnerabilities

Authors

  • Derek McAuley, School of Computer Science, University of Nottingham, UK

Abstract

Data poisoning attacks threaten the integrity of machine learning models, degrading their accuracy and reliability. This paper examines strategies for building resilient machine learning frameworks capable of mitigating data poisoning vulnerabilities. Drawing on the existing literature, we propose a multi-layered defense approach built on robust data handling, anomaly detection, and resilient model training. Our findings suggest that integrating these layers strengthens machine learning systems against malicious interference.
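
To illustrate the kind of anomaly-detection layer the abstract refers to, the sketch below filters suspected poisoned training samples with an Isolation Forest before fitting a downstream classifier. This is a minimal, hypothetical example, not the paper's implementation: the detector choice, contamination rate, classifier, and synthetic data are assumptions made for demonstration only.

```python
# Hypothetical sketch: anomaly-based filtering of poisoned training data
# followed by ordinary model training (not the authors' method).
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Clean training data with a simple labeling rule.
X_clean = rng.normal(0.0, 1.0, size=(500, 10))
y_clean = (X_clean[:, 0] > 0).astype(int)

# Simulated poisoned points: shifted features with flipped labels.
X_poison = rng.normal(6.0, 1.0, size=(25, 10))
y_poison = 1 - (X_poison[:, 0] > 0).astype(int)

X = np.vstack([X_clean, X_poison])
y = np.concatenate([y_clean, y_poison])

# Anomaly-detection layer: drop samples the detector flags as outliers.
# The 5% contamination rate is an illustrative assumption.
detector = IsolationForest(contamination=0.05, random_state=0)
mask = detector.fit_predict(X) == 1  # 1 = inlier, -1 = outlier
X_filtered, y_filtered = X[mask], y[mask]

# Model-training layer: fit only on the filtered data.
clf = LogisticRegression(max_iter=1000).fit(X_filtered, y_filtered)
print(f"kept {mask.sum()} of {len(X)} samples; "
      f"accuracy on clean data: {clf.score(X_clean, y_clean):.3f}")
```

In this toy setting, most of the shifted, label-flipped points are discarded before training, which is the basic intent of an anomaly-detection defense layer.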

Published

2024-11-04

Section

Articles