    Cognitive Solutions for Resource Management in Wireless Sensor Networks

    File
    ElMougy_Amr_H_201302_PhD.pdf (3.318 MB)
    Date
    2013-02-05
    Author
    El Mougy, Amr
    Abstract
    Wireless Sensor Networks (WSNs) are an important technology that can provide new data sets for many applications, ranging from healthcare monitoring to military surveillance. As WSNs have grown in popularity, user demands have evolved as well. To achieve the end-to-end goals and requirements of these applications, managing the resources of the network becomes a critical task. Cognitive networking techniques for resource management have been proposed in recent years to provide performance gains over traditional design methodologies. However, even though several tools have been considered in cognitive network design, they show limitations in adaptability, complexity, and the ability to consider multiple conflicting goals. Thus, this thesis proposes novel cognitive solutions for WSNs that include a reasoning machine and a learning protocol. Weighted Cognitive Maps (WCMs) and Q-Learning are identified as suitable tools for addressing these challenges and designing the cognitive solutions, owing to their ability to consider conflicting objectives with low complexity.

    WCMs are mathematical tools with powerful inference capabilities and are therefore used to design a reasoning machine for WSNs. Two case studies are proposed in this thesis that illustrate the capabilities of WCMs and their flexibility in supporting different application requirements and network types. In addition, an elaborate theoretical model based on Markov Chains (MCs) is proposed to analyze the operation of the WCM system. Extensive computer simulations and analytical results show the ability of the WCM system to achieve the end-to-end goals of the network and to find compromises between conflicting constraints.
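
    The abstract does not reproduce the actual map used in the thesis; as a rough illustration of how a weighted cognitive map performs inference, the Python sketch below iterates a small, hypothetical map of WSN concepts. The concept names, weights, and sigmoid squashing function are assumptions for illustration only, not taken from the thesis.

        # Illustrative sketch only: a generic weighted-cognitive-map update loop.
        # Concept names, weights, and the squashing function are assumptions.
        import math

        def squash(x):
            # Sigmoid squashing keeps concept activations in (0, 1).
            return 1.0 / (1.0 + math.exp(-x))

        def wcm_step(activations, weights):
            # One inference step: each concept's new activation is the squashed
            # weighted sum of the concepts that influence it.
            new_activations = {}
            for target in activations:
                total = sum(weights.get((source, target), 0.0) * value
                            for source, value in activations.items())
                new_activations[target] = squash(total)
            return new_activations

        # Hypothetical WSN concepts: positive weights reinforce, negative inhibit.
        activations = {"traffic_load": 0.8, "tx_power": 0.5, "energy_reserve": 0.9}
        weights = {
            ("traffic_load", "tx_power"): 0.6,     # more traffic pushes power up
            ("energy_reserve", "tx_power"): 0.4,   # spare energy allows higher power
            ("tx_power", "energy_reserve"): -0.7,  # higher power drains the battery
        }

        for _ in range(20):  # iterate until the map settles near a fixed point
            activations = wcm_step(activations, weights)
        print(activations)

    Repeated application of the update drives the concept values toward a fixed point, which is how a WCM-style reasoning machine can balance conflicting objectives (here, throughput-oriented power against energy reserves) at low computational cost.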

    On the other hand, Q-Learning is a well-known reinforcement learning algorithm used to evaluate the actions taken by an agent over time. It is therefore used to design a learning protocol that improves the performance of the WCM system. Furthermore, to ensure that the learning protocol operates efficiently, methods for improving the learning speed and achieving distributed learning across multiple nodes are also proposed. Extensive computer simulations show that the learning protocol improves the performance of the WCM system across several metrics.
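
    For reference, the Q-Learning update itself is the standard tabular temporal-difference rule, Q(s,a) <- Q(s,a) + alpha * [r + gamma * max_a' Q(s',a') - Q(s,a)]. The minimal Python sketch below shows that rule; the states, actions, and reward values are hypothetical placeholders and do not reproduce the state/action/reward design used in the thesis.

        # Illustrative sketch only: the standard tabular Q-Learning update.
        # States, actions, and rewards here are hypothetical placeholders.
        from collections import defaultdict

        ALPHA = 0.1   # learning rate
        GAMMA = 0.9   # discount factor

        q_table = defaultdict(float)  # maps (state, action) -> estimated value

        def q_update(state, action, reward, next_state, actions):
            # Q(s,a) <- Q(s,a) + alpha * [r + gamma * max_a' Q(s',a') - Q(s,a)]
            best_next = max(q_table[(next_state, a)] for a in actions)
            td_error = reward + GAMMA * best_next - q_table[(state, action)]
            q_table[(state, action)] += ALPHA * td_error

        # Hypothetical example: a node chose a transmit-power level, observed the
        # resulting energy/throughput trade-off as a scalar reward, and moved to
        # a new congestion state.
        actions = ["low_power", "high_power"]
        q_update(state="congested", action="high_power", reward=0.3,
                 next_state="idle", actions=actions)
        print(dict(q_table))

    Because each node can maintain and update such a table from its own observations, the rule lends itself to the distributed, per-node learning the abstract mentions.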
    URI for this record
    http://hdl.handle.net/1974/7804
    Collections
    • Queen's Graduate Theses and Dissertations
    • Department of Electrical and Computer Engineering Graduate Theses
