Procedures to Analyze and Design Decentralized, Distributed Sensor Networks for Change Detection
The emergence of the Internet of Things (IoT) has reinvigorated interest in the distributed quickest change detection problem within wireless sensor networks. Although a number of system designs addressing this problem exist in the literature, the vast scale of the IoT has highlighted limitations of existing methods when they are adapted to large networks. The purpose of this thesis is therefore to investigate and propose both system designs and design methodologies that perform well in large networks. Numerical analysis techniques are developed that allow for accurate threshold design of the Sequential Probability Ratio Test (SPRT) and Cumulative Sum (CUSUM) procedures with respect to desired error metrics. Tests designed using these techniques are compared with those designed using Wald's approximation to highlight the impact of sequential test overshoot on test design. These techniques are also shown to provide insight into the operation of these procedures over time. Leveraging these techniques, two system designs are proposed to solve the distributed quickest change detection problem. With large sensor networks in mind, multiple simultaneous transmissions within the network are permitted and analyzed individually by a fusion center (FC), and the problem of limited bandwidth is considered. Both designs use the CUSUM procedure at local sensors to quantize local observations into binary summary reports, transmitted to the FC, that indicate the outcome of each local sensor's CUSUM in each time slot. Probability mass functions describing the probability of the FC receiving one or more reports from local sensors in each time slot are computed using the developed CUSUM analysis techniques. Accurate characterization of the local sensor reporting process allows the FC to implement procedures based on Bayesian and minimax formulations of the quickest change detection problem.
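The local reporting mechanism described above can be illustrated with a minimal sketch. The following is not the thesis's implementation, but a standard one-sided CUSUM recursion under an assumed Gaussian mean-shift model (pre-change mean `mu0`, post-change mean `mu1`, threshold `h` are illustrative choices), where each time slot yields a binary report indicating whether the local statistic crossed its threshold:

```python
import math
import random

def cusum_reports(xs, mu0=0.0, mu1=1.0, sigma=1.0, h=5.0):
    """Run a one-sided CUSUM on the sample stream xs and emit a binary
    report per time slot: 1 if the statistic crossed threshold h, else 0.
    The statistic resets after a crossing, so reports can recur."""
    W = 0.0
    reports = []
    for x in xs:
        # Log-likelihood ratio of N(mu1, sigma^2) vs. N(mu0, sigma^2)
        llr = ((mu1 - mu0) / sigma**2) * (x - (mu0 + mu1) / 2)
        W = max(0.0, W + llr)  # CUSUM recursion
        if W >= h:
            reports.append(1)  # summary report sent to the FC
            W = 0.0            # reset after reporting
        else:
            reports.append(0)
    return reports

random.seed(1)
pre = [random.gauss(0.0, 1.0) for _ in range(50)]   # pre-change samples
post = [random.gauss(1.0, 1.0) for _ in range(50)]  # post-change samples
reports = cusum_reports(pre + post)
```

Under this model the post-change drift of the statistic is positive, so reports cluster after the change point; characterizing the distribution of these reports is what enables the FC-side procedures described above.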
It is shown that the minimax-based system design can perform well when a number of assumptions are satisfied. This design scales to large networks, and a methodology is proposed by which global and local thresholds may be chosen to meet a desired false alarm rate constraint. It is also shown that the performance of this design can be computed numerically for different choices of system design variables.
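For context on the threshold-design comparison mentioned above, Wald's classical approximation gives closed-form SPRT thresholds from target error probabilities while ignoring overshoot of the test statistic; a short sketch (with illustrative error targets, not values from the thesis):

```python
import math

def wald_thresholds(alpha, beta):
    """Wald's approximate SPRT log-thresholds for target false alarm
    probability alpha and miss probability beta. Overshoot is ignored,
    so the realized errors satisfy alpha' + beta' <= alpha + beta."""
    a = math.log((1 - beta) / alpha)   # upper threshold: accept H1
    b = math.log(beta / (1 - alpha))   # lower threshold: accept H0
    return a, b

a, b = wald_thresholds(alpha=0.01, beta=0.05)
```

Because these formulas neglect overshoot, tests designed with them tend to be conservative; the numerical threshold-design techniques developed in the thesis are aimed at closing this gap.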
URI for this record: http://hdl.handle.net/1974/26617