This is part eight in my series about the Human-Computer Interaction course I took through Coursera. Read all my posts for the full story.
Assignment 6: User Testing - Results
After running three or more user tests following the procedures we created, we had to analyze our observations.
Findings
Issues with study protocol
- Due to the time constraints of the class, we had around a week to schedule and complete all user tests. Unfortunately, it rained most of that time. Given more time, I would have rescheduled several of the user tests to non-rainy days. The rain prevented 3 of the 4 tests from being run comfortably outside at a gas station.
- All of the participants talked aloud through the process, unprompted by me, which made the task times less useful as an evaluation measure.
- The wireless network at several of the locations was slow, which increased task times. Page load times were sometimes long enough to confuse participants or lead them down the wrong paths.
Issues with interface design
- Several participants requested a "manual entry" option in their questionnaires. Immediately after completing the questionnaires, participants were shown the redesign, which does include a manual entry option.
- For task 3, "select a different repair shop," every participant went to the maintenance page first. One tried the My Car page when the maintenance page didn't work. The actual control for changing the repair shop is on the settings page. All participants eventually found it via trial and error.
- The "take a picture" screens that are supposed to emulate the "take a picture" interface on an iPad or iPhone were confusing to most participants.
- Three of the four participants had issues with the swipe gesture. The participant running the test on a laptop and the participant with an older-model smartphone struggled the most. One participant got it to work right away.
- All participants self-rated individual tasks as 1 or 2 on a scale of 1 (easy to understand and complete) to 5 (very difficult or confusing to complete).
- All participants use computers and/or smartphones daily, and yet all of them track vehicle maintenance and mileage information, if they track it at all, on paper.
Changes Going Forward
Changes to study protocol
- Task times could be a valuable measure in a later study if load times were controlled; the app could be installed locally on a test phone for participants to use. Additionally, the task time on first use could be compared with the time to complete the same task again, to see how quickly users learn the system.
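The first-use versus repeat-use comparison above could be reduced to a simple per-participant "learning ratio." A minimal sketch of that calculation, using entirely invented participant IDs and timing numbers (no real study data):

```python
# Sketch: compare each participant's first-attempt task time against
# their repeat-attempt time for the same task. All data below is
# hypothetical, for illustration only.

def learning_ratio(first_s: float, repeat_s: float) -> float:
    """Fraction of the first-attempt time saved on the repeat attempt."""
    return (first_s - repeat_s) / first_s

# Hypothetical participant -> (first attempt, repeat attempt) times in seconds
times = {
    "P1": (92.0, 41.0),
    "P2": (75.0, 50.0),
    "P3": (110.0, 44.0),
}

for pid, (first, repeat) in times.items():
    print(f"{pid}: {learning_ratio(first, repeat):.0%} faster on repeat")
```

A ratio near zero would suggest the interface is hard to learn (or that load times, not learning, dominate the measurement), which is why controlling load times first matters.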
Changes to interface design
- The one test we were able to run outside led to the conclusion that letting the user start the fill-up photo sequence with either the fuel pump or the odometer is a vital user-control option to include. That's not obvious until you are standing next to the fuel pump, start entering data, and the app asks for the odometer photo first. Being able to choose whichever option is most convenient at the time will make the task more efficient.
- Include a manual entry option in the fill-up section to allow for more user control, both in how and in when data is entered. Potentially there could be another option to photograph the receipt from the fill-up instead of the fuel pump. That way the user could still use the photo-as-input-device feature but would not have to stand in the rain to take a picture of the pump.
- Since every participant tried to "select a repair shop" by going to the maintenance page, this is obviously something that needs to be addressed. Future prototypes could try: (1) including that setting on the maintenance page, or (2) leading people to the settings page from the maintenance page with an explicit link, such as a button labeled "change default repair shop." The next user study should include a prototype for each of these two methods and compare them to determine which works better.
- I'm not sure how best to handle the confusion about the "take a picture" screens in the prototype. Is it possible to mock up that functionality more clearly in Axure? Or is it something that has to wait until the programming stage to handle well?
- I'm also not sure how to address the swipe gesture issues. Perhaps Axure's templates for the swipe gesture are not as forgiving as they need to be, although I followed the directions in their tutorial precisely. Or the issues could be related to older touch-screen hardware. The one participant who got it to work right away loved it, so it's definitely worth keeping the functionality and refining it through better-working iterations in order to get a proper test.
And that's the end of Stanford's Human-Computer Interaction course assignments.
To continue developing this app, I would repeat this redesign-and-user-test cycle, beginning with the changes mentioned above.
I hope you learned something valuable from this experience just as I did.
Any questions? I'd love to hear your thoughts!