Please use this identifier to cite or link to this item:
http://dspace.cityu.edu.hk/handle/2031/9512
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Lou, Yang | en_US |
dc.date.accessioned | 2021-12-10T08:57:43Z | - |
dc.date.available | 2021-12-10T08:57:43Z | - |
dc.date.issued | 2021 | en_US |
dc.identifier.other | 2021csly810 | en_US |
dc.identifier.uri | http://dspace.cityu.edu.hk/handle/2031/9512 | - |
dc.description.abstract | Recent advances in deep learning have led to considerable progress in autonomous driving. Nonetheless, previous studies have shown that deep neural networks are highly vulnerable to adversarial attacks that may compromise the safety of autonomous driving. In particular, recent studies show that deep learning-based 3D object detection models, a crucial component of vision-based autonomous driving, can be significantly compromised by such attacks. Although driving safety is the ultimate concern for autonomous driving, there is a lack of comprehensive studies on the link between the performance of deep learning models and the driving safety of autonomous vehicles under adversarial attacks. In this project, we investigate the impact of two primary types of adversarial attacks, perturbation attacks and patch attacks, on the driving safety of vision-based autonomous driving systems, rather than on the detection precision of the underlying deep learning models. In particular, we target two leading stereo-based 3D object detection models, DSGN and Stereo-RCNN. To evaluate driving safety, we propose an end-to-end evaluation framework with a set of driving safety performance metrics. The experimental results reveal that (1) the attack impact on the precision of 3D object detectors and the attack impact on driving safety are decoupled, and (2) DSGN demonstrates stronger robustness against adversarial attacks than Stereo-RCNN. We also conduct an ablation study to analyze the causes behind these two findings. The findings of this project may provide a new perspective for evaluating adversarial attacks and guide the selection of deep learning models in autonomous driving. (An illustrative sketch of a perturbation attack appears below this record.) | en_US |
dc.rights | This work is protected by copyright. Reproduction or distribution of the work in any format is prohibited without written permission of the copyright owner. | en_US |
dc.rights | Access is restricted to CityU users. | en_US |
dc.title | Evaluating Adversarial Attacks on Driving Safety in Vision-Based Autonomous Vehicles | en_US |
dc.contributor.department | Department of Computer Science | en_US |
dc.description.supervisor | Supervisor: Prof. Wang, Jianping; First Reader: Dr. Li, Zhenjiang; Second Reader: Dr. Chan, Mang Tang | en_US |
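The abstract mentions perturbation attacks on vision-based detectors. As a minimal, hedged sketch of the general idea, the snippet below applies a one-step FGSM-style gradient-sign perturbation bounded by an L-infinity budget. This is not the project's actual attack on DSGN or Stereo-RCNN; the `model`, `loss_fn`, `targets`, and `epsilon` names, and the toy detector in the demo, are illustrative assumptions only.

```python
import torch

def fgsm_perturbation(images, model, loss_fn, targets, epsilon=0.01):
    """One-step FGSM-style L-infinity perturbation (illustrative sketch).

    `model`, `loss_fn`, and `targets` stand in for an arbitrary
    differentiable detector and its loss; they are placeholders, not
    the project's actual attack pipeline.
    """
    images = images.clone().detach().requires_grad_(True)
    loss = loss_fn(model(images), targets)
    loss.backward()
    # Step in the gradient-sign direction to increase the loss,
    # keeping the perturbation within an L-infinity budget of epsilon.
    adv = images + epsilon * images.grad.sign()
    return adv.clamp(0.0, 1.0).detach()

if __name__ == "__main__":
    # Toy stand-in for a detector: a single conv layer with an MSE loss.
    torch.manual_seed(0)
    model = torch.nn.Conv2d(3, 1, kernel_size=3, padding=1)
    x = torch.rand(1, 3, 64, 64)
    y = torch.zeros(1, 1, 64, 64)
    x_adv = fgsm_perturbation(x, model, torch.nn.functional.mse_loss, y)
    print("max |delta| =", (x_adv - x).abs().max().item())  # <= 0.01
```

A patch attack differs mainly in that the perturbation is confined to a small image region and is typically optimized over many steps, but the gradient-driven principle is the same.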
Appears in Collections: Computer Science - Undergraduate Final Year Projects
Files in This Item:
File | Size | Format
---|---|---
fulltext.html | 147 B | HTML
Items in Digital CityU Collections are protected by copyright, with all rights reserved, unless otherwise indicated.