Overview

This monograph presents a comprehensive exploration of Reverse Engineering of Deceptions (RED) in the field of adversarial machine learning. It examines both machine-centric and human-centric attacks, providing a holistic understanding of how adversarial strategies can be reverse-engineered to safeguard AI systems. For machine-centric attacks, it covers reverse-engineering methods for pixel-level perturbations, adversarial saliency maps, and the inference of victim-model information from adversarial examples. For human-centric attacks, the focus shifts to inferring generative-model information and localizing manipulations in generated images. The monograph also presents a forward-looking perspective on the challenges and opportunities associated with RED, and provides foundational and practical insights for AI security and trustworthy computer vision.

Full Product Details

Author: Yuguang Yao, Vishal Asnani, Jiancheng Liu, Xiaoming Liu
Publisher: now publishers Inc
Imprint: now publishers Inc
Weight: 0.170kg
ISBN: 9781638283409
ISBN 10: 1638283400
Pages: 112
Publication Date: 26 March 2024
Audience: Professional and scholarly; Professional & Vocational
Format: Paperback
Publisher's Status: Active
Availability: In Print

Table of Contents

1. Introduction
2. Reverse Engineering of Adversarial Examples
3. Model Parsing via Adversarial Examples
4. Reverse Engineering of Generated Images
5. Manipulation Localization of Generated Images
6. Conclusion and Discussion
References