Olutimehin, Abayomi Titilola and Ajayi, Adekunbi Justina and Metibemu, Olufunke Cynthia and Balogun, Adebayo Yusuf and Oladoyinbo, Tunbosun Oyewale and Olaniyi, Oluwaseun Oladeji (2025) Adversarial Threats to AI-Driven Systems: Exploring the Attack Surface of Machine Learning Models and Countermeasures. Journal of Engineering Research and Reports, 27 (2). pp. 341-362. ISSN 2582-2926
Full text not available from this repository.

Abstract
Adversarial attacks pose a critical threat to the reliability of AI-driven systems, exploiting vulnerabilities at the data, model, and deployment levels. This study employs a quantitative analysis using the CIFAR-10 Adversarial Examples Dataset from IBM’s Adversarial Robustness Toolbox and the MITRE ATLAS AI Model Vulnerabilities Dataset to assess attack success rates and attack surface exposure. A convolutional neural network (CNN) classifier was evaluated against Fast Gradient Sign Method (FGSM), Projected Gradient Descent (PGD), and Carlini & Wagner (C&W) attacks, yielding misclassification rates of 42.2%, 65.5%, and 86.8%, respectively. A Chi-Square Goodness-of-Fit Test (p < 0.001) confirmed that model-level vulnerabilities are disproportionately targeted (53.6%). These vulnerabilities pose severe risks across real-world AI applications. In cybersecurity, adversarial perturbations compromise intrusion detection systems, malware classification models, and spam filters, allowing cybercriminals to bypass AI-driven defenses. In autonomous vehicles, subtle adversarial modifications to traffic signs and road patterns can mislead AI-based navigation, increasing the likelihood of accidents. Similarly, in financial systems, adversarial attacks deceive fraud detection models, enabling unauthorized transactions and financial fraud. Countermeasure evaluation demonstrated that adversarial training provided the highest robustness gain (23.29%), while detection algorithms were least effective (15.34%). These findings emphasize the need for hybrid AI security frameworks that combine adversarial training with real-time anomaly detection, along with standardized security benchmarks to ensure resilience across industries, particularly in high-stakes AI applications.
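The listing carries no code, but the methodology described in the abstract maps directly onto IBM's Adversarial Robustness Toolbox (ART). The sketch below shows how a CIFAR-10 CNN might be evaluated against FGSM, PGD, and C&W and then hardened with adversarial training; the CNN architecture, perturbation budget (8/255), iteration counts, and training schedule are illustrative assumptions, not the authors' reported configuration.

```python
# Sketch: evaluating a CIFAR-10 CNN against FGSM, PGD, and C&W with ART,
# then applying adversarial training. Hyperparameters are assumptions.
import numpy as np
import torch
import torch.nn as nn

from art.utils import load_dataset
from art.estimators.classification import PyTorchClassifier
from art.attacks.evasion import (
    FastGradientMethod,
    ProjectedGradientDescent,
    CarliniL2Method,
)
from art.defences.trainer import AdversarialTrainer

# Load CIFAR-10 (ART returns NHWC float arrays scaled to [0, 1]).
(x_train, y_train), (x_test, y_test), min_px, max_px = load_dataset("cifar10")
# PyTorch expects NCHW, so move the channel axis.
x_train = x_train.transpose(0, 3, 1, 2).astype(np.float32)
x_test = x_test.transpose(0, 3, 1, 2).astype(np.float32)

# A small illustrative CNN (stand-in for the paper's unspecified classifier).
model = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(64 * 8 * 8, 128), nn.ReLU(),
    nn.Linear(128, 10),
)

classifier = PyTorchClassifier(
    model=model,
    loss=nn.CrossEntropyLoss(),
    optimizer=torch.optim.Adam(model.parameters(), lr=1e-3),
    input_shape=(3, 32, 32),
    nb_classes=10,
    clip_values=(float(min_px), float(max_px)),
)
classifier.fit(x_train, y_train, batch_size=128, nb_epochs=5)

def accuracy(clf, x, y):
    """Top-1 accuracy of the classifier on (possibly perturbed) inputs."""
    preds = np.argmax(clf.predict(x), axis=1)
    return float(np.mean(preds == np.argmax(y, axis=1)))

# The three evasion attacks compared in the study (hyperparameters assumed).
attacks = {
    "FGSM": FastGradientMethod(classifier, eps=8 / 255),
    "PGD": ProjectedGradientDescent(classifier, eps=8 / 255, eps_step=2 / 255, max_iter=40),
    "C&W": CarliniL2Method(classifier, max_iter=10),
}

subset = slice(0, 1000)  # C&W is slow; evaluate on a subset for illustration
for name, attack in attacks.items():
    x_adv = attack.generate(x=x_test[subset])
    acc = accuracy(classifier, x_adv, y_test[subset])
    print(f"{name}: adversarial accuracy={acc:.3f}, misclassification rate={1 - acc:.3f}")

# Countermeasure: adversarial training, the highest-gain defence in the study.
# ART's AdversarialTrainer mixes clean and attack-generated samples per batch.
trainer = AdversarialTrainer(
    classifier,
    attacks=FastGradientMethod(classifier, eps=8 / 255),
    ratio=0.5,  # fraction of each batch replaced with adversarial examples
)
trainer.fit(x_train, y_train, batch_size=128, nb_epochs=5)
```

Misclassification rate here is simply 1 minus adversarial accuracy, matching the way the abstract reports the FGSM, PGD, and C&W results.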
| Item Type: | Article |
| --- | --- |
| Subjects: | East India Archive > Engineering |
| Depositing User: | Unnamed user with email support@eastindiaarchive.com |
| Date Deposited: | 17 Mar 2025 04:07 |
| Last Modified: | 17 Mar 2025 04:07 |
| URI: | http://article.ths100.in/id/eprint/2259 |