Show simple item record

dc.contributor.author: Foo, King Chun (Derek)
dc.date: 2011-01-31 15:53:02.732
dc.date.accessioned: 2011-01-31T21:59:05Z
dc.date.available: 2011-01-31T21:59:05Z
dc.date.issued: 2011-01-31T21:59:05Z
dc.identifier.uri: http://hdl.handle.net/1974/6292
dc.description: Thesis (Master, Electrical & Computer Engineering) -- Queen's University, 2011-01-31 15:53:02.732
dc.description.abstract: Performance regression refers to the phenomenon in which an application's performance degrades compared to prior releases. Performance regressions are unwanted side effects of changes to an application or its execution environment. Previous research shows that most problems customers experience in the field are related to application performance. To reduce the likelihood of performance regressions slipping into production, software vendors must verify the performance of an application before its release. Current practice carries out performance verification only at the implementation level, through performance tests. In a performance test, service requests with an intensity similar to that of the production environment are pushed to the application under test, and various performance counters (e.g., CPU utilization) are recorded. Analyzing the results of performance verification is both time-consuming and error-prone because of the large volume of collected data, the absence of formal objectives, and the subjectivity of performance analysts. Furthermore, since performance verification is done just before release, the evaluation of high-impact design changes is delayed until the end of the development lifecycle. In this thesis, we seek to improve the effectiveness of performance verification. First, we propose an approach to construct layered simulation models that support performance verification at the design level. Performance analysts can leverage our layered simulation models to evaluate the impact of a proposed design change before any development effort is committed. Second, we present an automated approach to detect performance regressions from the results of performance tests conducted on the implementation of an application. Our approach compares the results of new tests against counter correlations extracted from performance-testing repositories. Finally, we refine our automated analysis approach with ensemble-learning algorithms to evaluate performance tests conducted in heterogeneous software and hardware environments.
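The counter-correlation idea in the abstract — learn how performance counters move together in past test runs, then flag a new run whose counters no longer correlate the same way — can be sketched roughly as follows. This is a minimal illustration only, not the thesis's actual implementation; the function names, the simple averaging of baseline correlations, and the `tolerance` threshold are all assumptions made for the example.

```python
from itertools import combinations

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def counter_correlations(run):
    """Pairwise correlations between the counters of one test run.

    `run` maps a counter name (e.g. "cpu") to its sampled values.
    """
    return {(a, b): pearson(run[a], run[b])
            for a, b in combinations(sorted(run), 2)}

def detect_regression(baseline_runs, new_run, tolerance=0.3):
    """Flag counter pairs whose correlation in the new run deviates from
    the average correlation seen in past (baseline) runs.

    `tolerance` is an illustrative threshold, not a value from the thesis.
    """
    history = {}
    for run in baseline_runs:
        for pair, r in counter_correlations(run).items():
            history.setdefault(pair, []).append(r)
    avg = {pair: sum(rs) / len(rs) for pair, rs in history.items()}

    return [pair for pair, r in counter_correlations(new_run).items()
            if abs(r - avg.get(pair, r)) > tolerance]

# Hypothetical data: in the baseline run CPU tracks the request rate;
# in the new run it no longer does, suggesting a performance regression.
old = {"requests": [10, 20, 30, 40], "cpu": [11, 21, 29, 41]}
new = {"requests": [10, 20, 30, 40], "cpu": [40, 12, 35, 15]}
print(detect_regression([old], new))  # → [('cpu', 'requests')]
```

In practice the thesis mines these correlations from a repository of prior performance tests; the ensemble-learning refinement mentioned above would replace the single averaged baseline with several learners trained on runs from heterogeneous environments.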
dc.language.iso: eng
dc.relation.ispartofseries: Canadian theses
dc.rights: This publication is made available by the authority of the copyright owner solely for the purpose of private study and research and may not be copied or reproduced except as permitted by the copyright laws without written authority from the copyright owner.
dc.subject: performance regression
dc.subject: performance verification
dc.title: Automated discovery of performance regressions in enterprise applications
dc.type: thesis
dc.description.degree: M.A.Sc.
dc.contributor.supervisor: Zou, Ying
dc.contributor.supervisor: Hassan, Ahmed E.
dc.contributor.department: Electrical and Computer Engineering
dc.degree.grantor: Queen's University at Kingston

