Reinforcement learning for cyber-physical systems with cybersecurity case studies / by Chong Li, Meikang Qiu
Material type: Book
- ISBN: 9780367656638
- Classification: 006.31 LIR

| Item type | Current library | Collection | Shelving location | Call number | Copy number | Status | Date due | Barcode |
|---|---|---|---|---|---|---|---|---|
| | KU Central Library | Rack No.: 01, Annex: 01, Shelve No.: A-03 | Reference Section (Non-Issuable Books) | 006.31 LIR 2020 | C-1 (NI) | Not For Loan | | 52100 |
Includes index.
Section I Introduction
Chapter 1 Overview of Reinforcement Learning
Chapter 2 Overview of Cyber-Physical Systems and Cybersecurity
Section II Reinforcement Learning for Cyber-Physical Systems
Chapter 3 Reinforcement Learning Problems
Chapter 4 Model-Based Reinforcement Learning
Chapter 5 Model-Free Reinforcement Learning
Chapter 6 Deep Reinforcement Learning
Section III Case Studies
Chapter 7 Reinforcement Learning for Cybersecurity
Chapter 8 Case Study: Online Cyber Attack Detection in Smart Grid
Chapter 9 Case Study: Defeat Man-in-the-Middle Attack
Reinforcement Learning for Cyber-Physical Systems: with Cybersecurity Case Studies was inspired by recent developments in the fields of reinforcement learning (RL) and cyber-physical systems (CPSs). Rooted in behavioral psychology, RL is one of the primary strands of machine learning. Unlike other machine learning approaches, such as supervised and unsupervised learning, the key feature of RL is its trial-and-error learning paradigm. Combined with deep neural networks, deep RL has become so powerful that many complicated systems can be automatically managed by AI agents at a superhuman level. Meanwhile, CPSs are envisioned to revolutionize our society in the near future, with examples including emerging smart buildings, intelligent transportation, and electric grids.
However, the conventional hand-programmed controllers in CPSs can neither handle the increasing complexity of these systems nor automatically adapt to situations they have never encountered before. How to apply existing deep RL algorithms, or develop new RL algorithms, to enable real-time adaptive CPSs remains an open problem. This book aims to establish a link between the two domains by systematically introducing RL foundations and algorithms, each supported by one or a few state-of-the-art CPS examples, to help readers understand the intuition and usefulness of RL techniques.
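The trial-and-error paradigm mentioned in the summary can be illustrated with a minimal tabular Q-learning sketch. This example is not taken from the book; the toy corridor environment, state and action sizes, and hyperparameters below are illustrative assumptions only.

```python
import random

# A toy 1-D "corridor" environment: the agent starts at cell 0 and must
# reach cell N_STATES-1. Reaching the goal yields +1; every other step -0.01.
N_STATES = 6          # hypothetical corridor length
ACTIONS = [-1, +1]    # move left or right
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1

# Tabular Q-values: one row per state, one column per action.
Q = [[0.0 for _ in ACTIONS] for _ in range(N_STATES)]

def step(state, action_idx):
    """Apply an action and return (next_state, reward, done)."""
    next_state = min(max(state + ACTIONS[action_idx], 0), N_STATES - 1)
    if next_state == N_STATES - 1:
        return next_state, 1.0, True
    return next_state, -0.01, False

for episode in range(500):
    state, done = 0, False
    while not done:
        # Trial-and-error: mostly exploit the current estimate,
        # but explore a random action with probability EPSILON.
        if random.random() < EPSILON:
            action = random.randrange(len(ACTIONS))
        else:
            action = max(range(len(ACTIONS)), key=lambda a: Q[state][a])
        next_state, reward, done = step(state, action)
        # Q-learning update: move the estimate toward the observed reward
        # plus the discounted value of the best action in the next state.
        target = reward + (0.0 if done else GAMMA * max(Q[next_state]))
        Q[state][action] += ALPHA * (target - Q[state][action])
        state = next_state

# After training, the greedy policy should choose "move right" (index 1)
# in every non-terminal state.
print([max(range(len(ACTIONS)), key=lambda a: Q[s][a]) for s in range(N_STATES)])
```

The same learning loop generalizes to the deep RL methods the summary alludes to by replacing the table `Q` with a neural-network function approximator.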