Randomized smoothing (RS) has been shown to be a fast, scalable technique for certifying the robustness of deep neural network classifiers. However, methods based on RS require augmenting data with large amounts of noise, which leads to significant drops in accuracy.
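For concreteness, a minimal sketch of the standard randomized smoothing certification step (in the style of Cohen et al., 2019) is shown below. The noise level, sample counts, and confidence parameter are illustrative assumptions, not the settings used in this paper.

```python
import numpy as np
import torch
from scipy.stats import norm
from statsmodels.stats.proportion import proportion_confint

def certify(base_model, x, sigma=0.25, n0=100, n=1000, alpha=0.001, num_classes=10):
    """Vote over Gaussian perturbations of x and return (class, certified L2 radius)."""
    base_model.eval()
    with torch.no_grad():
        def count_votes(num_samples):
            counts = np.zeros(num_classes, dtype=int)
            for _ in range(num_samples):
                noisy = x + sigma * torch.randn_like(x)          # noise-augmented input
                counts[base_model(noisy.unsqueeze(0)).argmax(dim=1).item()] += 1
            return counts

        # Selection pass: guess the top class from a small sample.
        c_hat = count_votes(n0).argmax()
        # Estimation pass: lower-bound the probability of the top class.
        counts = count_votes(n)
        p_lower = proportion_confint(counts[c_hat], n, alpha=2 * alpha, method="beta")[0]

    if p_lower < 0.5:
        return None, 0.0                          # abstain
    return c_hat, sigma * norm.ppf(p_lower)       # certified L2 radius
```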
We propose a training-free, modified smoothing approach, Smooth-Reduce, that leverages patching and aggregation to provide improved classifier certificates. Our algorithm classifies overlapping patches extracted from an input image and aggregates the predicted logits to certify a larger radius around the input. We study two aggregation schemes, max and mean, and show that both provide better certificates in terms of certified accuracy, average certified radius, and abstention rate compared to concurrent approaches. We also provide theoretical guarantees for such certificates, and empirically show significant improvements over other randomized smoothing methods that require expensive retraining. Further, we extend our approach to videos and provide meaningful certificates for video classifiers.
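A minimal sketch of the patch-and-aggregate idea described above, written against a generic PyTorch classifier, is given below. The patch size, stride, resizing step, and the exact point at which logits are averaged are assumptions made for illustration; see the paper for the authors' exact procedure.

```python
import torch
import torch.nn.functional as F

def extract_patches(x, patch_size=160, stride=32):
    """Return overlapping square crops of an image tensor x with shape (C, H, W)."""
    _, H, W = x.shape
    patches = []
    for top in range(0, H - patch_size + 1, stride):
        for left in range(0, W - patch_size + 1, stride):
            patches.append(x[:, top:top + patch_size, left:left + patch_size])
    return torch.stack(patches)                        # (K, C, patch, patch)

def smooth_reduce_logits(base_model, x, sigma=0.25, n=100, reduce="mean"):
    """Average noisy logits per patch, then reduce across patches (mean or max)."""
    patches = extract_patches(x)
    # Resize patches to the classifier's expected input resolution (assumed 224 here).
    patches = F.interpolate(patches, size=224, mode="bilinear", align_corners=False)
    with torch.no_grad():
        per_patch = []
        for p in patches:
            noise = sigma * torch.randn(n, *p.shape)   # n Gaussian perturbations
            logits = base_model(p.unsqueeze(0) + noise)  # (n, num_classes)
            per_patch.append(logits.mean(dim=0))
        per_patch = torch.stack(per_patch)             # (K, num_classes)
    if reduce == "max":
        return per_patch.max(dim=0).values
    return per_patch.mean(dim=0)
```

The reduced logits can then be fed to the same voting and confidence-bound machinery as ordinary randomized smoothing to obtain a certified radius for the full image.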
@article{joshi2022smoothreduce,
author = {Joshi, Ameya and Pham, Minh and Cho, Minsu and Boytsov, Leonid and Condessa, Filipe and Kolter, J. Zico and Hegde, Chinmay},
title = {Smooth-Reduce: Leveraging Patches for Improved Certified Robustness},
journal = {arXiv preprint},
year = {2022},
}