Robust and scalable deep learning for cyber-physical systems

Date
2021-12
Authors
Esfandiari, Yasaman
Major Professor
Sarkar, Soumik
Committee Member
Liu, Kevin (Jia)
Jannesari, Ali
Kelkar, Atul
Bhattacharya, Sourabh
Department
Mechanical Engineering
Abstract
Cyber-physical systems (CPS) have recently attracted considerable attention in design and control, with applications ranging from healthcare to transportation. Over the past few decades, the adoption of machine learning (ML)-enabled cyber-physical systems has become prevalent across many sectors of modern society. Although machine (deep) learning has been highly beneficial and has enabled significant improvements in CPS, it is important to investigate the resilience and scalability of these systems: they are used extensively in industry, yet they have been shown to fail under malicious attacks and when scaled up to run at large scales.

Recent work on the robustness of deep learning algorithms to adversarial attacks has produced a large variety of algorithms for training robust models. Most of the effective algorithms solve a min-max optimization problem that trains the model (min step) under worst-case attacks (max step); however, they often suffer from high computational cost, which makes them difficult to apply to moderate- and large-scale real-world data sets. To alleviate this, this dissertation proposes a novel discrete-time dynamical-system-based algorithm that finds the saddle point of a min-max optimization problem in the presence of uncertainties. Building on this, a fast robust training algorithm applicable to deep neural networks is devised. Although such training involves highly non-convex robust optimization problems, empirical results show that the algorithm achieves significant robustness compared with other state-of-the-art robust models on benchmark data sets.

The effect of adversarial attacks in reinforcement learning (RL) environments is also explored. As RL has become a new center of attention in machine (deep) learning research, it is imperative to study the performance of RL systems under malicious state and actuator attacks. In this dissertation, projected gradient descent (PGD) attacks are crafted and applied to the action space of fully trained deep RL agents. The results show that a well-performing agent that is initially susceptible to action-space perturbations (e.g., actuator attacks) can be robustified against similar perturbations through adversarial training.

Another key requirement for deep learning models is scalability, so that they remain functional in large-scale systems. To this end, distributed centralized learning has emerged as a class of machine (deep) learning algorithms that enables a group of collaborating agents to train models on a dataset distributed among the agents with the aid of a central parameter server. More recently, decentralized learning algorithms, which remove the dependence on a central parameter server, have demonstrated state-of-the-art results comparable with centralized algorithms. However, a key requirement for achieving such performance has been a balanced (class-wise) distribution of data among the agents, also referred to as IID data, and a precisely IID distribution is often not feasible in real-life applications. To address this, a decentralized learning algorithm is proposed in which each agent collects gradient information from its neighboring agents and updates its model with a projected gradient, as sketched below.
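The per-agent update can be illustrated with a minimal sketch. This is not the dissertation's exact algorithm: the function names, the conflict-removing projection standing in for the projection step, and the three-agent fully connected topology are assumptions made for exposition only.

import numpy as np

def project_gradient(g, neighbor_grads, eps=1e-12):
    # Surrogate projection (assumed): whenever the local gradient conflicts
    # with a collected neighbor gradient (negative inner product), subtract
    # the conflicting component so the step does not oppose that neighbor.
    g = g.copy()
    for ng in neighbor_grads:
        dot = float(g @ ng)
        if dot < 0.0:
            g -= dot / (float(ng @ ng) + eps) * ng
    return g

def decentralized_step(params, grads, adjacency, lr=0.05):
    # One synchronous round: every agent gathers its neighbors' gradients,
    # projects its own gradient against them, and takes a local step.
    n = len(params)
    updated = []
    for i in range(n):
        neighbor_grads = [grads[j] for j in range(n) if adjacency[i][j] and j != i]
        g_proj = project_gradient(grads[i], neighbor_grads)
        updated.append(params[i] - lr * g_proj)
    return updated

# Toy usage: three agents on a fully connected graph with 2-D parameters.
adjacency = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])
params = [np.zeros(2) for _ in range(3)]
grads = [np.array([1.0, 0.0]), np.array([0.5, 0.5]), np.array([-0.2, 1.0])]
params = decentralized_step(params, grads, adjacency)

Only gradient sharing is shown here; decentralized methods typically also mix model parameters with neighbors according to the graph topology.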
This work demonstrates that the algorithm is effective on both IID and non-IID data distributions, and comparisons against state-of-the-art algorithms are made both analytically and experimentally.

As a real-world application, researchers have explored using deep learning models to detect anomalies, anticipate incidents, and find similar trends in real-world datasets. Such data generally come from various sensor types installed to capture different features, e.g., cameras placed in agricultural fields to capture different sections of the field from various angles. This dissertation examines the viability of distributed learning algorithms for learning from these sensors in a decentralized fashion. The algorithms are used to train autoencoders on data from 300 cameras installed in agricultural fields, and the trained models are then used for downstream tasks such as anomaly detection and image retrieval; a minimal illustration of this kind of anomaly scoring is sketched below. Experimental results show that distributing the learning task among the sensors not only yields accurate models but also makes learning from large datasets connected through different graph topologies feasible.

In summary, this dissertation aims to bring robustness and scalability to deep learning algorithms across supervised, unsupervised, and reinforcement learning settings.
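As an illustration of reconstruction-error anomaly scoring on camera frames, a minimal sketch follows. The tiny fully connected autoencoder, the mean-plus-two-standard-deviations threshold, and all names are assumptions for exposition; the dissertation's models and decentralized training procedure are not reproduced here.

import torch
import torch.nn as nn

class TinyAutoencoder(nn.Module):
    # Small fully connected autoencoder standing in for the per-camera models.
    def __init__(self, dim=64 * 64, hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(hidden, dim), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x))

def anomaly_scores(model, images):
    # Per-image reconstruction error; a larger error suggests a more anomalous frame.
    model.eval()
    with torch.no_grad():
        flat = images.flatten(start_dim=1)
        recon = model(flat)
        return ((recon - flat) ** 2).mean(dim=1)

# Toy usage: score a batch of eight 64x64 grayscale frames and flag outliers.
model = TinyAutoencoder()
frames = torch.rand(8, 64, 64)
scores = anomaly_scores(model, frames)
threshold = scores.mean() + 2 * scores.std()   # simple heuristic cut-off
flagged = torch.nonzero(scores > threshold).flatten()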