Looking at Execution Logs Beyond Execution Events: Enriching Execution Events to Compare the Behaviour of Large-Scale Software Systems Against Their Historical Behaviour
Failures in large-scale software systems are often associated with performance issues. Therefore, performance testing has become essential to ensure the problem-free operation of these systems. Performance analysts must understand how their system behaves during a performance test and how that behaviour differs from its historical behaviour (i.e., the system's behaviour during a previous performance test or in the field). However, documentation describing the historical behaviour of a system is rarely up to date. Fortunately, execution logs, which record notable events at runtime, are readily available in most large-scale software systems to support remote monitoring, issue resolution, and legal compliance. Yet execution logs are typically treated as a plain sequence of events, without fully leveraging the valuable information associated with those events (e.g., the dynamic information within the logs and the performance counters collected during the system's execution). Therefore, in this thesis, we propose approaches for enriching execution events with such additional sources of information to compare a system's current behaviour against its historical behaviour. Through case studies on large-scale software systems, including open-source and enterprise systems, we show that our approaches are scalable and can help performance analysts to 1) detect and diagnose performance regressions and 2) make their performance tests more representative of the field.
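To make the idea of "enriching execution events" concrete, the following minimal Python sketch illustrates the general technique rather than the thesis's actual implementation: the log format, field names, counter value, and baseline threshold are all invented for illustration. It parses a single log line into a static event template plus its dynamic parameters, attaches a performance counter sample, and flags the event when its latency deviates from a hypothetical historical baseline.

import re

# Hypothetical log line; real systems emit many different formats.
log_line = "2023-05-01 12:00:03 INFO  Processed request id=42 in 187 ms"

# Split the event into a static template and its dynamic parameters,
# so the dynamic information is retained rather than discarded.
match = re.match(
    r"(?P<ts>\S+ \S+) (?P<level>\w+)\s+Processed request "
    r"id=(?P<req_id>\d+) in (?P<latency_ms>\d+) ms",
    log_line,
)

event = {
    "template": "Processed request id=<*> in <*> ms",  # static part
    "timestamp": match.group("ts"),
    "params": {
        "req_id": int(match.group("req_id")),
        "latency_ms": int(match.group("latency_ms")),
    },
    # Enrich the event with a performance counter sampled around the
    # same moment (the value here is made up for illustration).
    "counters": {"cpu_percent": 63.0},
}

# Compare the enriched event against a historical baseline for the
# same template; both numbers below are arbitrary placeholders.
baseline_latency_ms = 120  # hypothetical historical median
if event["params"]["latency_ms"] > 1.5 * baseline_latency_ms:
    print("possible performance regression:", event)

In practice, approaches of this kind aggregate many such enriched events per template and compare their distributions (of dynamic values and counter readings) between the current run and the historical behaviour, rather than checking a single event against a fixed threshold as this sketch does.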