phaseIII
The overall approach to this study was a formative usability evaluation designed to identify design flaws and areas for improvement in the prototype. We used a "Think-Aloud" protocol, in which participants verbalized their expectations and frustrations while interacting with the system, capturing real-time qualitative data. Each element of the protocol was designed to reveal a specific component of the User Experience (UX), beginning with background questions to establish the users' mental models and prior experience with manual or digital study materials. The evaluation consisted of three primary tasks: Task 1 focused on the utility of scientific content entry, Task 2 tested organizational flexibility with external resources, and Task 3 compared the efficiency of premade templates against building a cheat sheet from scratch. The study included N = 6 participants, all students currently enrolled in a usability engineering course, and used a functional prototype as the primary stimulus.
The summary of our data reveals a consistent trend toward successful task completion, with participants generally finding the interface straightforward. Qualitative feedback during the "Think-Aloud" sessions indicated that users perceived the quality of their cheat sheets to improve as they spent more time with the editor. Quantitatively, task difficulty and satisfaction ratings were high, typically falling in the 4–5 range on a 5-point scale, suggesting strong perceived ease of use. Across the evaluation sessions, the average time spent was [INSERT AVERAGE TIME] with a standard deviation of [INSERT STANDARD DEVIATION]. The anonymized data collection spreadsheet, which contains no personally identifying information, is available here: Cheat Sheet Generator - UX Data Collection
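The mean and standard deviation reported above can be computed directly from the per-session timing column of the data collection spreadsheet. The sketch below shows one way to do this with Python's standard library; the `session_minutes` and `satisfaction` values are hypothetical placeholders, not the study's actual data, which remains in the linked spreadsheet.

```python
# Minimal sketch of the summary-statistics step. The numbers below are
# hypothetical stand-ins for the six participants -- NOT the real study data.
from statistics import mean, stdev

session_minutes = [18.5, 22.0, 15.0, 20.5, 25.0, 19.0]  # time on task, minutes
satisfaction = [5, 4, 5, 4, 5, 4]                        # 1-5 Likert ratings

avg_time = mean(session_minutes)
sd_time = stdev(session_minutes)  # sample standard deviation (n - 1 denominator)
avg_satisfaction = mean(satisfaction)

print(f"Average session time: {avg_time:.1f} min (SD = {sd_time:.1f})")
print(f"Mean satisfaction rating: {avg_satisfaction:.2f} / 5")
```

Note that `statistics.stdev` uses the sample (n − 1) formula, which is appropriate here because the six participants are treated as a sample of the broader student population; `statistics.pstdev` would be used if the group were the entire population of interest.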
The interpretation of these findings suggests several recommended changes to the prototype, particularly around the integration of images and graphs, which some participants found difficult to manage. The data also indicate a need for more granular formatting options, such as support for custom equations and a wider variety of font choices. However, the study affirmed that the core layout generation and organized output should remain as-is, since participants generally found these features more efficient than traditional study material preparation methods.
These conclusions carry certain limitations. The small sample size of six participants may not capture all potential edge-case usability issues across a broader user base. Because the participants were drawn from a usability class, they may also exhibit a higher degree of expert bias or technical awareness than a general student population, producing observations more focused on design heuristics than on the experience of a casual user. Finally, because the study was conducted in a controlled setting with tasks focused on scientific and mathematical content, results may vary for users working in less structured academic subjects or in more distracting environments.