This repository provides supplementary material to the FSE 2025 paper "Understanding Debugging as Episodes: A Case Study on Performance Bugs in Configurable Software Systems", including (1.1) SoftVR training videos, (1.2) the anonymized videos of the debugging sessions (without the audio recordings), (1.3) interview transcripts, (1.4) our fine-grained coding framework, (1.5) the coding framework of debugging episodes, and (1.6) data analysis and visualization scripts. The latter complements the presentation of the study results in the paper and allows for reproduction of our analyses and findings.
This repository contains only 4 study videos, as the free plan of Git LFS does not support uploading all videos due to file size limitations. Therefore, we provide all videos (from 1.1 and 1.2) directly in the Zenodo archive.
- Docker installed on your system ([Get Docker](https://docs.docker.com/get-docker/))
- Git for cloning the repository
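To get a local copy, clone the repository and pull the LFS-tracked videos (a minimal sketch; `<repository-url>` and `<repository-name>` are placeholders for this repository's actual URL and directory, and `git lfs pull` assumes Git LFS is installed):

git clone <repository-url>
cd <repository-name>
git lfs pull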
project/
│
├── Dockerfile
├── run_all.py
├── requirements.txt
├── ...
└── data/
    ├── literature/
    ├── main study videos/
    └── user study/
        ├── study material/
        └── debugging actions data/
We provide the training videos for the user study in this folder.
In this folder, we provide all videos of the user study in reduced quality and with the audio removed.
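For illustration, audio removal and quality reduction of this kind could be done with ffmpeg (a sketch, not necessarily our exact processing pipeline; the file names are placeholders):

ffmpeg -i input.mp4 -an -vf scale=-2:720 -c:v libx264 -crf 28 output.mp4

Here, `-an` drops the audio track, `scale=-2:720` downscales to 720p while preserving the aspect ratio, and a higher CRF value reduces the video quality and file size.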
We transcribed the interview in a two-step approach: first, transcribing the whole interview automatically with Whisper, and second, correcting the two relevant questions by hand. We provide both the automatic Whisper transcript and the manually corrected interview transcript.
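The automatic step can be reproduced with the Whisper command-line tool (a sketch, assuming the openai-whisper package is installed; `interview.wav` stands in for an actual recording, and the model size is an assumption):

whisper interview.wav --model base --language en --output_format txt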
This folder contains the debugging strategies that we extracted from the literature. We also provide the resulting fine-grained coding framework per participant, including the coding results.
Here, we provide the goal-oriented episodes, which are the results of the open coding. The table shows the participant (participant), the start time of the episode in the video (start_time), the intermediate episode name (episode), the corrected start time (timestamp), the duration of an episode (time_delta), and the final episode name (Episode Code).
The data analysis and visualization scripts read in and process all data and generate the figures shown in our publication. All scripts, as well as the requirements.txt, are located in the root folder of the project. They can be executed either by running run_all.py directly or via the provided Dockerfile.
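To run the scripts directly (a minimal sketch, assuming a Python 3 installation; using a virtual environment is optional but recommended):

pip install -r requirements.txt
python run_all.py

Alternatively, build the Docker image: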
docker build -t eval-debugging-process-data .
Run the container with the following command:
docker run -t eval-debugging-process-data
After the container finishes running, you can copy the generated files from the Docker container to a local output directory:
docker cp <container_id>:/app/output .
Replace `<container_id>` with the ID of the executed Docker container. You can find the ID using `docker ps -a`.
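To avoid looking up the container ID, you can also assign a name when running the container (a sketch; the name `eval-run` is an arbitrary choice):

docker run --name eval-run -t eval-debugging-process-data
docker cp eval-run:/app/output .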