
dc.contributor.author  AlGhamdi, Hammam  en
dc.description.abstract  Performance issues are one of the primary causes of failures in today’s large-scale software systems. Hence, performance testing has become an essential software quality assurance tool. However, performance testing faces many challenges. One challenge is determining how long a performance test must run. Performance tests often run for hours or days to uncover performance issues (e.g., memory leaks), yet much of the data that is generated during a performance test is repetitive. Performance testers can stop their performance tests (to reduce the time to market and the costs of performance testing) when a test no longer offers new information about the system’s performance. Therefore, in this thesis, we propose two automated approaches that reduce the execution time of performance tests by measuring the repetitiveness in the performance metrics that are collected during a test. The first approach measures repetitiveness in the values of a performance metric (e.g., CPU utilization) during a performance test. The approach then recommends stopping the test when the generated performance metric values become highly repetitive and the repetitiveness has stabilized (i.e., relatively little new information about the system’s performance is being generated). The second approach also recommends when to stop a performance test, by measuring the repetitiveness in the inter-metric relations. It combines the values of the performance metrics that are generated at a given time into a system state; for instance, a system state can be the combination of a low CPU utilization value and a high response time value. The second approach recommends a stopping time when the system states generated during a performance test become highly repetitive. We performed experiments on three open source systems (i.e., CloudStore, PetClinic and Dell DVD Store). Our experiments show that our first approach reduces the duration of 24-hour performance tests by up to 75% while capturing more than 91.9% of the collected performance metric values. In addition, our approach recommends a stopping time that is close to the most cost-effective stopping time (i.e., the stopping time that minimizes the duration of a test while maximizing the amount of information about the system’s performance that the test provides). Our second approach was evaluated on the same set of systems. We find that it saves approximately 70% of the execution time while preserving at least 95% of the collected system states during a 24-hour time period. Our comparison of the two approaches reveals that the second approach recommends stopping tests at more cost-effective times than our first approach.  en
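The abstract gives no pseudocode for the first approach. As a rough illustration only — not the thesis’s actual algorithm — here is a minimal Python sketch of the value-repetitiveness idea: discretize a metric’s values into coarse bins, and recommend stopping once the fraction of already-seen binned values stays high for several consecutive windows. The bin count, threshold, and stability window are illustrative assumptions.

```python
def bin_of(value, lo=0.0, hi=100.0, bins=20):
    # Discretize a metric value (e.g., CPU utilization in [0, 100])
    # into one of `bins` coarse bins. The range and bin count are
    # illustrative assumptions, not the thesis's parameters.
    return min(int((value - lo) / (hi - lo) * bins), bins - 1)

def recommend_stop(windows, threshold=0.95, stable_for=3):
    """Recommend a stopping window index once the fraction of
    already-seen binned values is high and stays high for
    `stable_for` consecutive windows; None if never reached."""
    seen = set()   # bins observed so far during the whole test
    streak = 0     # consecutive windows above the threshold
    for t, window in enumerate(windows):
        bins = [bin_of(v) for v in window]
        repeated = sum(1 for b in bins if b in seen)
        seen.update(bins)
        if window and repeated / len(window) >= threshold:
            streak += 1
            if streak >= stable_for:
                return t  # repetitiveness is high and stable: stop here
        else:
            streak = 0
    return None
```

For example, a stream that keeps producing the same ten CPU-utilization values becomes fully repetitive after the first window, so the sketch recommends stopping after three stable windows; a stream that keeps introducing new values yields no recommendation.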
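The second approach’s system-state idea can be sketched the same way: discretize each metric collected at the same instant, combine the bins into a state tuple, and recommend stopping once newly observed states are almost always repeats of earlier ones. Again, the metric ranges, bin counts, window size, and threshold below are illustrative assumptions, not the thesis’s actual parameters.

```python
def system_state(sample, ranges, bins=5):
    """Combine one time step's metric values (e.g., CPU utilization
    and response time) into a discrete state tuple."""
    state = []
    for name in sorted(sample):          # fixed metric order
        lo, hi = ranges[name]            # assumed known value ranges
        b = min(int((sample[name] - lo) / (hi - lo) * bins), bins - 1)
        state.append(b)
    return tuple(state)

def recommend_stop_states(samples, ranges, threshold=0.95,
                          window=10, stable_for=3):
    """Recommend a stopping index once, within a sliding window,
    at least `threshold` of the observed states are repeats of
    states already seen during the test; None if never reached."""
    seen = set()      # all system states observed so far
    recent = []       # sliding window of is-repeat flags
    streak = 0
    for t, sample in enumerate(samples):
        state = system_state(sample, ranges)
        recent.append(state in seen)
        seen.add(state)
        if len(recent) > window:
            recent.pop(0)
        if len(recent) == window and sum(recent) / window >= threshold:
            streak += 1
            if streak >= stable_for:
                return t  # almost all recent states are repeats: stop
        else:
            streak = 0
    return None
```

A low CPU value paired with a low response time maps to one state tuple, a high/high pair to another; once the test keeps cycling through the same few states, the sketch recommends stopping.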
dc.relation.ispartofseries  Canadian theses  en
dc.rights  Attribution-NonCommercial-NoDerivs 3.0 United States  en
dc.rights  Queen's University's Thesis/Dissertation Non-Exclusive License for Deposit to QSpace and Library and Archives Canada  en
dc.rights  ProQuest PhD and Master's Theses International Dissemination Agreement  en
dc.rights  Intellectual Property Guidelines at Queen's University  en
dc.rights  Copying and Preserving Your Thesis  en
dc.rights  This publication is made available by the authority of the copyright owner solely for the purpose of private study and research and may not be copied or reproduced except as permitted by the copyright laws without written authority from the copyright owner.  en
dc.subject  Performance Tests, Software Engineering  en
dc.title  Automated Approaches for Reducing the Execution Time of Performance Tests  en
dc.contributor.supervisor  Hassan, Ahmed E.  en
dc.contributor.department  Computing  en
Queen's University at Kingston  en
