As an intern at Connected Lab in the winter of 2016, I had the chance to conduct the company’s first user testing of an internally developed multi-platform app. Before I started, work on the project had included initial research, design work, and a beta version of the app from engineering.
The one missing piece? Putting this system in the hands of users outside of the company for feedback.
about the app
The app to be tested was a music playlist solution for groups, spanning Apple TV and iOS. Members of the Connected Lab team recognized the opportunity to provide a more social music-sharing experience by allowing multiple guests on the iOS app to add music to a shared tvOS music player in real time, leveraging SoundCloud.
My personal goal for the project was to encourage more dialogue between the design and engineering teams. It was important that the project improve the user experience by building a shared, user-first language across the team.
individual interview + testing
Five external participants were recruited through Facebook and direct recruitment to test the “host” experience of the app.
Each interview and testing session was scheduled for one hour and involved interaction with both the tvOS and iOS apps to select and play a song. More general impressions and opinions were gathered as well.
Approximately eight internal participants (unfamiliar with the project) were recruited to test the experience and functionality of the iOS app.
The group “testing party” was scheduled for one hour and involved participants engaging with the app naturally in a party setting (playing ping pong, with snacks and refreshments). Multiple phones were used to simulate several users at once.
Quick Finding: Source Clarity
Individual testing revealed that people were confused by the selection of music available in the app, in large part because they didn’t know the source was SoundCloud’s music library. The team hadn’t realized how important surfacing this information would be to how effective the app’s design was perceived to be.
Quick Finding: Queue
One piece of feedback our team received during group testing was frustration with the order in which the queue was filled, with newer songs being added at the top of the list. The team had typically tested the feature individually, so they hadn’t seen the friction this simple interaction created.
For each individual user test, I acted as moderator with a semi-structured script to prompt the tasks of adding a new song to the playlist from both the tvOS app and the iOS app. Present at each interview was a note-taker from the design team, along with a rotating member of the project’s engineering and management teams. It was a priority for me that everyone working on the app have first-hand experience observing user interaction, to build team empathy.
The afternoon following our last interviews, the engineers, the designers, and I came together to organize our findings with an affinity diagram. Individual quotes and “sticky areas” noted in our observations were recorded as positive, negative, or neutral/questions. As a team, we sorted these observations into affinity groupings, such as similar tasks and platform-related challenges.
The benefit of including all members of the team was in forming a group understanding of user needs, keeping the focus on creating a human-centred experience. By participating as a team, we formed a strong collective understanding of the testing results.
Areas of Impact
By discussing the feedback, I helped identify the six areas where improvement would have the most significant impact on the user experience. These areas of impact fell into two larger groups: functional components of the app to improve, and improvements to the communication surrounding the app. By focusing on key usability components (primarily in the tvOS app) and by changing the language surrounding the app, we could improve the experience significantly within the time we had available.
On the last day of our research sprint, the entire project team met for a final meeting. I presented the core areas of impact, and we discussed the specific user experiences we had observed and, more generally, how the underlying issues could be solved. The team collaboratively developed a list of actionable items targeting specific areas of impact, and I led a conversation to prioritize the items by “effort vs. effect on experience” to ensure that the time spent would meet the most user needs.
Summary report documentation
A summary report was created to present at our weekly team meetings, documenting the research that had been done and the decisions the team had reached. It illustrated the main research findings and encouraged dialogue about what worked well and what could be improved for future research projects.