
    Automated discovery of performance regressions in enterprise applications

    View/Open
    thesis_final.pdf (1.524 MB)
    Date
    2011-01-31
    Author
    Foo, King Chun (Derek)
    Abstract
    Performance regression refers to the phenomenon where an application's performance degrades compared to prior releases. Performance regressions are unwanted side-effects caused by changes to an application or its execution environment. Previous research shows that most problems experienced by customers in the field are related to application performance. To reduce the likelihood of performance regressions slipping into production, software vendors must verify the performance of an application before its release. The current practice of performance verification is carried out only at the implementation level, through performance tests. In a performance test, service requests with intensity similar to the production environment are pushed to the application under test, and various performance counters (e.g., CPU utilization) are recorded. Analysis of the results of performance verification is both time-consuming and error-prone due to the large volume of collected data, the absence of formal objectives, and the subjectivity of performance analysts. Furthermore, since performance verification is done just before release, evaluation of high-impact design changes is delayed until the end of the development lifecycle. In this thesis, we seek to improve the effectiveness of performance verification. First, we propose an approach to construct layered simulation models that support performance verification at the design level. Performance analysts can leverage our layered simulation models to evaluate the impact of a proposed design change before any development effort is committed. Second, we present an automated approach to detect performance regressions from the results of performance tests conducted on the implementation of an application. Our approach compares the results of new tests against counter correlations extracted from performance testing repositories. Finally, we refine our automated analysis approach with ensemble-learning algorithms to evaluate performance tests conducted in heterogeneous software and hardware environments.
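
    The second contribution of the abstract suggests a concrete mechanism: learn which counters normally move together (e.g., disk I/O tracking CPU utilization under load) from prior test runs, then flag a new run in which those relationships break. The sketch below is an illustration only, not the thesis's implementation; the counter names, sample data, and deviation threshold are all hypothetical, and pairwise Pearson correlation stands in for whatever correlation measure the thesis actually uses.

    import numpy as np

    def counter_correlations(run):
        """Pearson correlation for every pair of performance counters in a run."""
        names = sorted(run)
        corr = {}
        for i, a in enumerate(names):
            for b in names[i + 1:]:
                corr[(a, b)] = float(np.corrcoef(run[a], run[b])[0, 1])
        return corr

    def flag_regressions(baseline, new_run, threshold=0.3):
        """Flag counter pairs whose correlation shifted by more than `threshold`."""
        base = counter_correlations(baseline)
        new = counter_correlations(new_run)
        return {pair: (base[pair], new[pair])
                for pair in base
                if abs(base[pair] - new[pair]) > threshold}

    # Hypothetical counters sampled at the same instants during a load test.
    baseline = {"cpu_util": [20.0, 35.0, 50.0, 65.0, 80.0],
                "disk_io":  [10.0, 18.0, 24.0, 33.0, 40.0]}   # tracks CPU when healthy
    new_run  = {"cpu_util": [22.0, 36.0, 52.0, 66.0, 82.0],
                "disk_io":  [40.0, 12.0, 35.0,  9.0, 28.0]}   # relationship broken

    print(flag_regressions(baseline, new_run))   # ('cpu_util', 'disk_io') is flagged

    In practice, the baseline correlations would be mined from many runs in the performance testing repository, and the threshold would be calibrated rather than fixed; this sketch only captures the comparison step.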
    URI for this record
    http://hdl.handle.net/1974/6292
    Collections
    • Queen's Graduate Theses and Dissertations
    • Department of Electrical and Computer Engineering Graduate Theses
