
Please use this identifier to cite or link to this item: http://hdl.handle.net/1974/6292

Title: Automated discovery of performance regressions in enterprise applications
Authors: Foo, King Chun (Derek)

Files in This Item:

File: thesis_final.pdf (1.56 MB, Adobe PDF)
Keywords: performance regression; performance verification
Issue Date: 2011
Series/Report no.: Canadian theses
Abstract: Performance regression refers to the phenomenon in which application performance degrades compared to prior releases. Performance regressions are unwanted side effects caused by changes to an application or its execution environment. Previous research shows that most problems experienced by customers in the field are related to application performance. To reduce the likelihood of performance regressions slipping into production, software vendors must verify the performance of an application before its release. Current practice carries out performance verification only at the implementation level, through performance tests: service requests with an intensity similar to that of the production environment are pushed to the application under test, and various performance counters (e.g., CPU utilization) are recorded. Analyzing the results of performance verification is both time-consuming and error-prone due to the large volume of collected data, the absence of formal objectives, and the subjectivity of performance analysts. Furthermore, because performance verification is done just before release, evaluation of high-impact design changes is delayed until the end of the development lifecycle. In this thesis, we seek to improve the effectiveness of performance verification. First, we propose an approach for constructing layered simulation models that support performance verification at the design level; performance analysts can use these models to evaluate the impact of a proposed design change before any development effort is committed. Second, we present an automated approach to detect performance regressions in the results of performance tests conducted on an application's implementation; our approach compares the results of new tests against counter correlations extracted from performance testing repositories. Finally, we refine our automated analysis approach with ensemble-learning algorithms to evaluate performance tests conducted in heterogeneous software and hardware environments.
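
The abstract describes the regression-detection approach only at a high level. As a rough, hypothetical illustration of the counter-correlation idea (comparing a new test run against correlations mined from prior runs), the Python sketch below computes pairwise Pearson correlations between counters and flags pairs that drift from the historical baseline. The counter names, data layout, and deviation threshold are assumptions for illustration, not details taken from the thesis.

    import itertools
    import numpy as np

    # Hypothetical counter names; the thesis mentions counters such as
    # CPU utilization, but the abstract does not specify the full set.
    COUNTERS = ["cpu_util", "mem_used", "disk_io", "resp_time"]

    def pairwise_correlations(run):
        """Pearson correlation for every pair of counters in one test run.
        `run` maps a counter name to a 1-D numpy array of samples."""
        return {
            (a, b): float(np.corrcoef(run[a], run[b])[0, 1])
            for a, b in itertools.combinations(COUNTERS, 2)
        }

    def baseline_fingerprint(past_runs):
        """Average pairwise correlations over prior (passing) runs from
        the testing repository to form an expected fingerprint."""
        fingerprints = [pairwise_correlations(r) for r in past_runs]
        return {
            pair: float(np.mean([f[pair] for f in fingerprints]))
            for pair in fingerprints[0]
        }

    def flag_regressions(baseline, new_run, threshold=0.3):
        """Report counter pairs whose correlation in the new run deviates
        from the baseline by more than `threshold` (an assumed cutoff)."""
        new_corr = pairwise_correlations(new_run)
        return [
            (pair, baseline[pair], new_corr[pair])
            for pair in baseline
            if abs(baseline[pair] - new_corr[pair]) > threshold
        ]

A broken correlation (e.g., CPU utilization no longer tracking request rate) would surface here as a flagged pair, prompting a closer look at that test run.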
Description: Thesis (Master, Electrical & Computer Engineering) -- Queen's University, 2011-01-31
URI: http://hdl.handle.net/1974/6292
Appears in Collections: Electrical and Computer Engineering Graduate Theses; Queen's Theses & Dissertations

Items in QSpace are protected by copyright, with all rights reserved, unless otherwise indicated.