Automated Approaches for Reducing the Execution Time of Performance Tests
Performance issues are one of the primary causes of failures in today's large-scale software systems. Hence, performance testing has become an essential software quality assurance tool. However, performance testing faces many challenges. One challenge is determining how long a performance test must run. Performance tests often run for hours or days to uncover performance issues (e.g., memory leaks). However, much of the data that is generated during a performance test is repetitive. Performance testers can stop their performance tests (to reduce the time to market and the costs of performance testing) when the test no longer offers new information about the system's performance. Therefore, in this thesis, we propose two automated approaches that reduce the execution time of performance tests by measuring the repetitiveness in the performance metrics that are collected during a test.

The first approach measures repetitiveness in the values of a performance metric, e.g., CPU utilization, during a performance test. The approach then recommends stopping the test when the generated performance metric values become highly repetitive and the repetitiveness has stabilized (i.e., relatively little new information about the system's performance is being generated).

The second approach also recommends when to stop a performance test, but by measuring the repetitiveness in the inter-metric relations. It combines the values of the performance metrics that are generated at a given time into a system state. For instance, a system state can be the combination of a low CPU utilization value and a high response time value. The second approach recommends a stopping time when the system states generated during a performance test become highly repetitive.

We performed experiments on three open source systems (i.e., CloudStore, PetClinic and Dell DVD Store).
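As an illustration only (the thesis's actual statistical measure is not reproduced here), the first approach's core idea can be sketched as follows: bin the observed metric values, track the fraction of observations whose bin was already seen, and recommend stopping once that fraction has stayed above a threshold for several consecutive checks. The bin count, threshold, and stability window below are assumed parameters, not the thesis's.

```python
def repetitiveness(values, n_bins=20):
    """Fraction of observations whose binned value was already seen earlier.

    Binning is a simplifying assumption: two metric values that fall into the
    same bin are treated as repetitions of each other.
    """
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_bins or 1.0  # guard against a constant metric
    seen, repeats = set(), 0
    for v in values:
        b = min(int((v - lo) / width), n_bins - 1)
        if b in seen:
            repeats += 1
        else:
            seen.add(b)
    return repeats / len(values)


def recommend_stop(metric, threshold=0.95, stable_for=3, n_bins=20):
    """Return the first time index at which repetitiveness has exceeded
    `threshold` for `stable_for` consecutive checks, or None if it never does.
    """
    stable = 0
    for t in range(2, len(metric) + 1):
        if repetitiveness(metric[:t], n_bins) >= threshold:
            stable += 1
            if stable >= stable_for:
                return t
        else:
            stable = 0
    return None
```

A constant (fully repetitive) metric triggers an early stop recommendation, while a metric that keeps producing new values does not.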
Our experiments show that our first approach reduces the duration of 24-hour performance tests by up to 75% while capturing more than 91.9% of the collected performance metric values. In addition, our approach recommends a stopping time that is close to the most cost-effective stopping time (i.e., the stopping time that minimizes the duration of a test while maximizing the amount of information about the system's performance that the test provides). Our second approach was also evaluated on the same set of systems. We find that the second approach saves approximately 70% of the execution time while preserving at least 95% of the system states collected during a 24-hour time period. Our comparison of the two approaches reveals that the second approach recommends stopping tests at more cost-effective times than our first approach.
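The system-state idea behind the second approach can likewise be sketched in a few lines. In this hypothetical illustration (the discretization scheme and level count are assumptions, not the thesis's method), each metric value at a sampling instant is mapped to a discrete level (e.g., low/medium/high), the levels are combined into a state tuple, and the fraction of sampling instants whose state was already observed serves as the repetitiveness signal.

```python
def discretize(value, lo, hi, levels=3):
    """Map a raw metric value to a discrete level (0 = low .. levels-1 = high)."""
    if hi == lo:
        return 0
    frac = (value - lo) / (hi - lo)
    return min(int(frac * levels), levels - 1)


def system_states(samples, ranges, levels=3):
    """Combine the metrics observed at each sampling instant into a state tuple.

    samples: list of dicts {metric_name: value}, one per sampling instant.
    ranges:  dict {metric_name: (lo, hi)} giving each metric's value range.
    """
    names = sorted(ranges)  # fixed metric order so state tuples are comparable
    return [tuple(discretize(s[n], *ranges[n], levels) for n in names)
            for s in samples]


def state_repetitiveness(states):
    """Fraction of sampling instants whose state was already observed earlier."""
    seen, repeats = set(), 0
    for st in states:
        if st in seen:
            repeats += 1
        else:
            seen.add(st)
    return repeats / len(states)
```

For example, a sample with low CPU utilization and high response time yields a state such as `(0, 2)`; once the test mostly revisits previously seen states, the repetitiveness fraction approaches 1 and a stop can be recommended.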
URI for this record: http://hdl.handle.net/1974/15378