Run Run Shaw Library, City University of Hong Kong

Please use this identifier to cite or link to this item: http://dspace.cityu.edu.hk/handle/2031/9512
Full metadata record
DC Field | Value | Language
dc.contributor.author | Lou, Yang | en_US
dc.date.accessioned | 2021-12-10T08:57:43Z | -
dc.date.available | 2021-12-10T08:57:43Z | -
dc.date.issued | 2021 | en_US
dc.identifier.other | 2021csly810 | en_US
dc.identifier.uri | http://dspace.cityu.edu.hk/handle/2031/9512 | -
dc.description.abstract | Recent advancements in deep learning have driven considerable progress in autonomous driving. Nonetheless, previous studies have shown that deep neural networks are extremely vulnerable to adversarial attacks that may compromise autonomous driving safety. Specifically, recent studies show that deep learning-based 3D object detection models, a crucial component of vision-based autonomous driving, can be significantly compromised by adversarial attacks. Although driving safety is the ultimate concern for autonomous driving, there is a lack of comprehensive studies on the linkage between the performance of deep learning models and the driving safety of autonomous vehicles under adversarial attacks. In this project, we investigate the impact of two primary types of adversarial attacks, perturbation attacks and patch attacks, on the driving safety of vision-based autonomous driving systems, rather than on the detection precision of deep learning models. In particular, we target two leading models in this domain, DSGN and Stereo-RCNN. To evaluate driving safety, we propose an end-to-end evaluation framework with a set of driving safety performance metrics. The experimental results reveal that (1) the attack impact on the precision of 3D object detectors and the attack impact on driving safety are decoupled, and (2) the DSGN model demonstrates stronger robustness against adversarial attacks than Stereo-RCNN. We also conduct an ablation study to analyze the causes behind these two findings. The findings of this project may provide a new perspective for evaluating adversarial attacks and guide the selection of deep learning models in autonomous driving. | en_US
dc.rights | This work is protected by copyright. Reproduction or distribution of the work in any format is prohibited without written permission of the copyright owner. | en_US
dc.rights | Access is restricted to CityU users. | en_US
dc.title | Evaluating Adversarial Attacks on Driving Safety in Vision-Based Autonomous Vehicles | en_US
dc.contributor.department | Department of Computer Science | en_US
dc.description.supervisor | Supervisor: Prof. Wang, Jianping; First Reader: Dr. Li, Zhenjiang; Second Reader: Dr. Chan, Mang Tang | en_US
Appears in Collections: Computer Science - Undergraduate Final Year Projects

Files in This Item:
File | Size | Format
fulltext.html | 147 B | HTML


Items in Digital CityU Collections are protected by copyright, with all rights reserved, unless otherwise indicated.
