A Case Study in Usability Testing and Reporting
In a project for the Concepts and Practices in Usability Testing course at California State University, Fullerton, students were tasked with constructing screeners and test moderation guides focused on a particular website or feature, which would then be used to conduct in-person and remote test sessions with users of the target site. Students then analyzed the notes and data points gathered from these sessions and produced a findings report highlighting potential flaws in the website's design and suggesting fixes or alternative solutions.
The open-ended nature of this project allowed me to select any desktop website to test for potential design flaws. Wish.com, an online marketplace for off-brand products, wasn't a website I had used myself prior to this project, though its unconventional design as an eCommerce site seemed prone to usability flaws. After spending some time with the site's structure and interface, I identified the following key areas for testing:
Are users comfortable with signing in/creating an account when they visit the site?
How do users typically search for items of interest on the site (search bar, categories, etc.)?
Can users find clear and concise reviews about a product they are interested in?
Do users have any difficulty purchasing the item(s) of interest?
How credible do users find the service?
Based on these five objectives, I developed a full test plan with core tasks and steps common to the average user of an eCommerce website. These focused primarily on actions like searching for an item by category and search term, making use of the "wish list" feature, shopping based on ratings and shipping speed, and using additional features specific to Wish.com, like "Blitz Buy".
User tests were performed in person on a laptop, with pre-made Wish.com credentials provided to each participant. Morae was used to record the face and voice of each participant, as well as their on-screen interactions, for post-test analysis. As the moderator for these test sessions, I sat close by with my test plan, paying close attention to each participant's actions and body language while encouraging them to voice their "inner monologue" as they tested. With liberal use of probing questions, I was able to uncover much of what was going on in each user's mind as they performed the core tasks laid out in the plan.
Once the test sessions were completed and the gathered data was analyzed, the overall results were compiled into a finalized "Usability Report," in which I highlighted the specific components that proved problematic for most participants. The report itself reviewed my test goals, testing environment, and methodology, ensuring that all of the appropriate context was in place for any potential reviewers.
The findings themselves were presented in the report with annotated screenshots and a brief overview of each issue discovered. Each of these notes was followed by a "Recommended Action" that offered a potential solution to the reported issue.