Please use this identifier to cite or link to this item: http://hdl.handle.net/1974/7688

Authors: Thomas, Stephen

Files in This Item:

File: Thomas_Stephen_W_201212_PhD.pdf
Size: 3.84 MB
Format: Adobe PDF
Keywords: empirical studies
mining software repositories
data mining
machine learning
software engineering
information retrieval
Issue Date: 12-Dec-2012
Series/Report no.: Canadian theses
Abstract: Mining Software Repositories, the process of analyzing data related to software development practices, is an emerging field that aims to aid development teams in their day-to-day tasks. However, the data in many software repositories currently goes unused because it is unstructured, and therefore difficult to mine and analyze. Information Retrieval (IR) techniques, which were developed specifically to handle unstructured data, have recently been used by researchers to mine and analyze the unstructured data in software repositories, with some success.

The main contribution of this thesis is the idea that the research and practice of using IR models to mine unstructured software repositories can be improved by going beyond the current state of affairs. First, we propose new applications of IR models to existing software engineering tasks. Specifically, we present a technique that prioritizes test cases by their IR similarity, giving highest priority to the test cases that are most dissimilar to one another. In another new application of IR models, we empirically recover how developers use their mailing list while developing software.

Next, we show how the use of advanced IR techniques can improve results. Using a framework for combining disparate IR models, we find that bug localization performance can be improved by 14–56% on average, compared with the best individual IR model. In addition, by applying topic evolution models to the history of the source code, we can uncover the evolution of source code concepts with an accuracy of 87–89%.

Finally, we show the risks of current research practice, which uses IR models as black boxes without fully understanding their assumptions and parameters. We show that data duplication in source code has undesirable effects on IR models, and that eliminating the duplication improves their accuracy. Additionally, we find that in the bug localization task, an unwise choice of parameter values yields an accuracy of only 1%, whereas optimal parameters achieve an accuracy of 55%. Through empirical case studies on real-world systems, we show that all of our proposed techniques and methodologies significantly improve the state of the art.
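The first contribution mentioned in the abstract, prioritizing test cases by IR dissimilarity, can be sketched roughly as follows. This is an illustrative sketch only, not the thesis's actual technique: it assumes a plain bag-of-words term-frequency representation with cosine similarity and a greedy max-dissimilarity ordering (the thesis evaluates more sophisticated IR models), and the test names and texts are hypothetical.

```python
import math
from collections import Counter

def vectorize(text):
    """Term-frequency (bag-of-words) vector for a test case's text."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def prioritize(tests):
    """Greedily order test cases so that each next test is the one
    most dissimilar to every test already selected."""
    vecs = {name: vectorize(body) for name, body in tests.items()}
    remaining = sorted(vecs)          # deterministic starting order
    order = [remaining.pop(0)]
    while remaining:
        # pick the test whose worst-case similarity to the selected
        # tests is lowest, i.e. the most dissimilar candidate
        nxt = min(remaining,
                  key=lambda t: max(cosine(vecs[t], vecs[s]) for s in order))
        remaining.remove(nxt)
        order.append(nxt)
    return order

# Hypothetical test-case texts (e.g., tokens extracted from test source code)
tests = {
    "t1": "open file read bytes close file",
    "t2": "open file read bytes verify checksum close file",
    "t3": "connect socket send packet receive ack",
}
print(prioritize(tests))  # t3 jumps ahead of t2: it shares no terms with t1
```

The intuition is that dissimilar tests likely exercise different parts of the system, so running them first covers more distinct behavior early.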
Description: Thesis (Ph.D., Computing) -- Queen's University, 2012-12-12
URI: http://hdl.handle.net/1974/7688
Appears in Collections:
Queen's Graduate Theses and Dissertations
School of Computing Graduate Theses

Items in QSpace are protected by copyright, with all rights reserved, unless otherwise indicated.


DSpace Software Copyright © 2002-2008 The DSpace Foundation