A Comprehensive Evaluation Framework for Deep Model Robustness

By Jun G. et al.
Published on Nov. 2, 2022

Table of Contents

Abstract
Introduction
Related Work
Evaluation Metrics
- Data-Oriented Evaluation Metrics
- Model-Oriented Evaluation Metrics
Conclusion
References

Summary

The document presents a comprehensive framework for evaluating the robustness of deep models against adversarial examples. It introduces 23 evaluation metrics divided into two categories: data-oriented metrics, which measure neuron coverage and the imperceptibility of adversarial perturbations, and model-oriented metrics, which characterize model behavior in adversarial settings. Through experiments on multiple datasets and model architectures, the framework aims to yield deeper insight into building robust models. The document also argues that principled evaluation is essential for navigating the field of deep learning and for enhancing model robustness, and it contributes a structured approach to assessing deep model robustness.
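Neuron coverage, mentioned among the data-oriented metrics, is commonly defined as the fraction of neurons whose activation exceeds a threshold on at least one test input. The sketch below illustrates that general idea under stated assumptions; the function name, the activation array, and the threshold are hypothetical and not taken from the paper's implementation.

```python
# A minimal sketch of a neuron-coverage-style metric, assuming
# activations have been collected and scaled to [0, 1] per input.
# This is an illustration of the general concept, not the paper's code.
import numpy as np

def neuron_coverage(activations: np.ndarray, threshold: float = 0.5) -> float:
    """activations: shape (num_inputs, num_neurons), values in [0, 1].
    Returns the fraction of neurons activated above `threshold`
    on at least one input."""
    covered = (activations > threshold).any(axis=0)  # per-neuron: fired on any input?
    return covered.sum() / covered.size

# Toy example: 4 inputs, 3 neurons. Only the middle neuron ever
# exceeds the threshold, so coverage is 1/3.
acts = np.array([
    [0.1, 0.9, 0.2],
    [0.3, 0.4, 0.1],
    [0.2, 0.8, 0.3],
    [0.4, 0.6, 0.2],
])
print(neuron_coverage(acts, threshold=0.5))  # → 0.3333...
```

Higher coverage under a fixed test suite suggests the inputs exercise more of the network's internal states, which is why such metrics are used when probing robustness.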