
Defence methods for image adversarial attacks
In the previous post, we reviewed some well-known methods for black-box decision-based adversarial attacks, where the adversary has no knowledge of the victim model except for its discrete hard-label predictions. In such settings, gradient-based methods become ineffective, but simple random-walk-based methods such as the Boundary Attack can still pose a threat. Now that we have introduced both white-box and black-box attacks under …
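
To make the recap concrete, below is a minimal sketch of one Boundary-Attack-style iteration using only hard-label queries. It is an illustration rather than the original Brendel et al. implementation: the `model` callable, the `hard_label` helper, and the step sizes `delta` and `epsilon` are assumed names and values chosen for readability.

```python
import numpy as np

def hard_label(model, x):
    """Query the victim model and return only its discrete class prediction."""
    return model(x)  # assumed to return an integer label

def boundary_attack_step(model, x_orig, x_adv, true_label,
                         delta=0.1, epsilon=0.05, rng=None):
    """One Boundary-Attack-style step: perturb randomly around the current
    adversarial point, contract toward the original image, and keep the
    candidate only if the hard-label oracle still misclassifies it."""
    rng = np.random.default_rng() if rng is None else rng

    # 1. Orthogonal step: random direction, rescaled so the candidate stays
    #    roughly at the current distance from the original image.
    direction = rng.normal(size=x_adv.shape)
    direction /= np.linalg.norm(direction)
    dist = np.linalg.norm(x_adv - x_orig)
    candidate = x_adv + delta * dist * direction
    candidate = x_orig + (candidate - x_orig) * dist / np.linalg.norm(candidate - x_orig)

    # 2. Step toward the original image to shrink the perturbation.
    candidate = candidate + epsilon * (x_orig - candidate)
    candidate = np.clip(candidate, 0.0, 1.0)

    # 3. Accept the move only if the prediction is still wrong.
    if hard_label(model, candidate) != true_label:
        return candidate
    return x_adv
```

Repeating this step keeps the iterate on the adversarial side of the decision boundary while gradually reducing the distance to the clean image, which is exactly why such attacks remain a threat even without gradient access.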