On The Link Between Mobile App Quality And User Reviews
Mobile app stores contain millions of apps that users can download and install on their smartphones. Each app has a page in the app store that includes the app's description, a download link, and a space for users to review the app. Each review consists of a 1-5 star rating and a review comment. Unlike desktop and server-side software, where direct user feedback about software quality was difficult to acquire, app developers now have access to their users' perspective on their apps via these reviews. Since apps with high-rated reviews are downloaded statistically significantly more often than apps with low-rated reviews, the insights gained from studying these reviews are very important. In this thesis, we analyze hundreds of thousands of reviews of Android and iOS apps to help developers understand the relationship between app quality and the feedback in reviews. We find that low-rated reviews contain 12 different complaint types with varying frequency and impact: the most frequent complaints concern functional errors, feature requests, and app crashes, while complaints about privacy, ethical, and hidden-cost issues receive the worst star ratings. For Android developers struggling with device fragmentation, we find that different Android devices give varying star ratings; however, we show that device information in reviews can also be used to identify a subset of Android devices that should be prioritized for testing. Finally, we show that warnings from FindBugs, a static analysis tool, are related to lower average app ratings and to the complaints that users leave in reviews. This thesis shows how studying the reviews of mobile apps can help developers prioritize their testing efforts to address the concerns of their users.