    Automated Analysis of Load Testing Results

    File: Jiang_ZhenMing_201301_PhD.pdf (2.778Mb)
    Date: 2013-01-29
    Author: Jiang, Zhen Ming
    Abstract
    Many software systems must be load tested to ensure that they can scale up under high load while maintaining their functional and non-functional requirements. Studies show that field problems are often related to systems failing to scale to field workloads rather than to feature bugs. To assure the quality of these systems, load testing is a required procedure in addition to conventional functional testing procedures such as unit and integration testing. Current industrial practices for checking the results of a load test remain ad hoc, involving high-level manual checks. Few research efforts are devoted to the automated analysis of load testing results, mainly due to limited access to large-scale systems for use as case studies. Approaches for the automated and systematic analysis of load tests are needed, as many services are being offered online to an increasing number of users. This dissertation proposes automated approaches to assess the quality of a system under load by mining some of the recorded load testing data (execution logs). Execution logs, which are readily available yet rarely used, are generated by output statements that developers insert into the source code. Because of their free-form structure, execution logs are hard to parse and analyze automatically. We first propose a log abstraction approach that uncovers the internal structure of each log line. We then propose automated approaches to assess the quality of a system under load by deriving various models (functional, performance, and reliability models) from the large set of execution logs. Case studies show that our approaches scale well to large enterprise and open-source systems and produce high-precision results that help load testing practitioners effectively analyze the quality of the system under load.
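
    The log abstraction step mentioned in the abstract can be illustrated with a small sketch. The Python snippet below is a hypothetical illustration only: the regular expressions, placeholder tokens, and sample log lines are assumptions made for demonstration and are not taken from the dissertation. It normalizes dynamic values (numbers, IP addresses, hexadecimal identifiers) so that log lines produced by the same output statement collapse into a single event template, the kind of structure from which functional, performance, and reliability models could then be derived.

        import re
        from collections import Counter

        # Illustrative rules for masking dynamic values in free-form log lines.
        # Order matters: IP addresses are masked before plain numbers.
        DYNAMIC_PATTERNS = [
            (re.compile(r"0x[0-9a-fA-F]+"), "<HEX>"),            # hex identifiers
            (re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b"), "<IP>"),  # IPv4 addresses
            (re.compile(r"\b\d+(?:\.\d+)?\b"), "<NUM>"),           # counters, sizes, times
        ]

        def abstract_line(line):
            """Map a raw log line to its event template by masking dynamic values."""
            for pattern, token in DYNAMIC_PATTERNS:
                line = pattern.sub(token, line)
            return line.strip()

        # Hypothetical sample log lines.
        logs = [
            "User 4211 logged in from 10.0.0.7 in 35 ms",
            "User 978 logged in from 10.0.0.42 in 12 ms",
            "Cache miss for key 0xdeadbeef",
        ]

        # Count how often each abstracted event occurs.
        events = Counter(abstract_line(line) for line in logs)
        for template, count in events.most_common():
            print(count, template)

    Running the sketch groups the two login lines under one template ("User <NUM> logged in from <IP> in <NUM> ms") while keeping the cache-miss line separate; this is the kind of grouping a real abstraction step would feed into downstream model building.
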
    URI for this record: http://hdl.handle.net/1974/7775
    Collections
    • Queen's Graduate Theses and Dissertations
    • School of Computing Graduate Theses
