Empirical Studies of Performance Bugs and Performance Analysis Approaches for Software Systems
Developing high-quality software is vitally important for keeping existing customers satisfied and remaining competitive. One of the most important software quality characteristics is performance, which defines how fast and/or efficiently software can perform its operations. While several studies have shown that field problems are often due to performance issues rather than feature bugs, prior research typically treats all bugs alike when studying various aspects of software quality (e.g., predicting the time to fix a bug) or focuses on other types of bugs (e.g., security bugs). Little work studies performance bugs specifically. In this thesis, we perform an empirical study that quantitatively and qualitatively examines performance bugs in the Mozilla Firefox and Google Chrome web browser projects, in order to find out whether performance bugs really differ from other bugs in practice and to understand the rationale behind those differences. In our quantitative study, we find that performance bugs in the Firefox project take longer to fix, are fixed by more experienced developers, and require changes to more lines of code. We also study performance bugs relative to security bugs, since security bugs have been extensively studied separately in the past. We find that, in the Firefox project, security bugs are re-opened and tossed more often, are fixed and triaged faster, are fixed by more experienced developers, and are assigned more developers. The Google Chrome project also shows quantitative differences between performance and non-performance bugs, and differs from the Firefox project in these characteristics. Building on our quantitative results, we examine the data from a qualitative point of view. Among our most interesting observations, we find that end-users are often frustrated with performance problems and often threaten to switch to competing software products.
To better understand why some users are very frustrated (even threatening to switch products) although most systems are well tested, we performed an additional study. In this final study, we contrast a global perspective with a user-centric perspective for analyzing performance data. We find that a user-centric perspective may reveal a small number of users who experience considerably poor performance, while the global perspective may show good or unchanged performance across releases. The results of our studies show that performance bugs are different and should be studied separately in large-scale software systems to improve the quality assurance processes related to software performance.
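The global-versus-user-centric contrast above can be illustrated with a minimal sketch. The data, metric (response time in milliseconds), user IDs, and regression threshold below are illustrative assumptions, not measurements from the thesis; the point is only that pooling all measurements can leave a release's median essentially unchanged even when one user regresses badly:

```python
import statistics

# Hypothetical per-user response times (ms) for two releases.
# Nine users are stable across releases; user "u10" regresses badly.
release_1 = {f"u{i}": [100 + i, 102 + i, 101 + i] for i in range(1, 10)}
release_1["u10"] = [100, 102, 101]
release_2 = {f"u{i}": [99 + i, 103 + i, 100 + i] for i in range(1, 10)}
release_2["u10"] = [400, 410, 405]  # severe regression for this one user

def global_median(release):
    """Global view: pool every measurement into one median."""
    all_samples = [t for samples in release.values() for t in samples]
    return statistics.median(all_samples)

def regressed_users(before, after, threshold=1.5):
    """User-centric view: flag users whose median time grew past threshold."""
    return [user for user in before
            if statistics.median(after[user])
               > threshold * statistics.median(before[user])]

print(global_median(release_1))                  # 105.5
print(global_median(release_2))                  # 106
print(regressed_users(release_1, release_2))     # ['u10']
```

The global medians of the two releases are nearly identical (105.5 ms vs. 106 ms), so an aggregate dashboard would report the releases as equivalent, yet the user-centric view flags `u10`, whose median response time roughly quadrupled.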
Showing items related by title, author, creator and subject.
Run-time Predictive Modeling of Power and Performance via Time-Series in High Performance Computing. Zamani, Reza (2012-11-13). Pressing demands for less power consumption of processors while delivering higher performance levels have put an extra attention on efficiency of the systems. Efficient management of resources in the current computing ...
Observations and Performances with Distinction by Physical Therapy Students in Clinical Education: Analysis of Check-boxes on the Physical Therapist Clinical Performance Instrument (PT-CPI) Over a 4 Year Period. Norman, Kathleen E; Booth, Randy (University of Toronto Press, 2015-01). Purpose: To describe how often the 24 performance criteria of the Physical Therapist Clinical Performance Instrument (PT-CPI) were not observed and how often they were rated exceptionally well for physical therapy (PT) ...
RASHTI, Mohammad Javad (2011-01-26). High Performance Computing (HPC) is the key to solving many scientific, financial, and engineering problems. Computer clusters are now the dominant architecture for HPC. The scale of clusters, both in terms of processor ...