A hybrid neural network model protected against adversarial attacks using either adversarial training or randomization defense techniques.
Updated Sep 4, 2024 · Jupyter Notebook
A hybrid neural network protected against adversarial attacks using various defense techniques, including input transformation, randomization, and adversarial training.
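The random-cropping idea behind such randomization defenses can be sketched as follows. This is a minimal, self-contained illustration (not code from the repository above): the image is randomly cropped and then re-placed at a random offset on a zero canvas before inference, so that a gradient-based adversarial perturbation no longer lines up with the pixels it was crafted for. The function name `random_crop_pad` and the crop fraction are assumptions for the example.

```python
import numpy as np

def random_crop_pad(image, crop_frac=0.9, rng=None):
    """Randomization defense sketch: randomly crop the image, then
    zero-pad it back to its original size at a random offset.
    The random geometry makes pixel-aligned adversarial
    perturbations less reliable at test time."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = image.shape[:2]
    ch, cw = int(h * crop_frac), int(w * crop_frac)
    # Pick a random crop window inside the original image
    top = rng.integers(0, h - ch + 1)
    left = rng.integers(0, w - cw + 1)
    crop = image[top:top + ch, left:left + cw]
    # Place the crop at a random position on a zero canvas
    out = np.zeros_like(image)
    ptop = rng.integers(0, h - ch + 1)
    pleft = rng.integers(0, w - cw + 1)
    out[ptop:ptop + ch, pleft:pleft + cw] = crop
    return out

# Usage: transform a dummy 32x32 RGB image before feeding it to a model
img = np.ones((32, 32, 3), dtype=np.float32)
defended = random_crop_pad(img, crop_frac=0.9)
```

Because the transform is re-sampled on every forward pass, an attacker cannot precompute a single perturbation that survives all crop geometries; in practice the model is usually trained (or fine-tuned) with the same transform so clean accuracy is preserved.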