12-Minute Read: this case study goes into detail.
What is ISZ and what did I do?
Individual Sound Zones (ISZ) by HARMAN is an integrated audio entertainment solution that gives every passenger the freedom to choose his or her own audio entertainment while maintaining a harmonious in-cabin experience for everybody.
Our challenge is to find the best way to implement the ISZ technology for families in cars. Our research scope spans exploratory research, interaction design, and validation.
My Role: UX researcher
The Team: Me + 1 MSHCI student from Industrial Design + 2 HCI students from Interactive Computing
Budget: 0 USD
My role:
1. Organization and planning
- report progress weekly to Harman International
- set up meeting plans and timelines for the project
- create meeting agendas and lead the meetings
- compose interview/focus group/user testing protocols
2. Data collection and analysis
- moderate the testing sessions
- run affinity diagram analysis
- calculate and analyze the SUS scores
- map insights to design implications
More specific details follow in the research sections below.
ISZ stands for Individual Sound Zone, a new technology that allows each passenger and the driver to enjoy their own media content in the car without being interrupted by other people. With this technology in hand, Harman would like our team to find the best way to implement this system with the major concern in user experience.
What is the most effective and user-friendly interaction for our targeted users, families with children, to use the ISZ technology in the physical context of a car?
The significance/the challenge
- Futuristic research: this brand-new technology is not yet in practice, so the area is mostly unexplored.
- The whole ecosystem matters: this is not a screen-only interaction; the experience is the key.
Where did our research go?
exploration ➡️ (design) ➡️ iteration ➡️ evaluation and validation
What is included?
1⃣️ expert opinions
2⃣️ user attitude and behavioral studies
3⃣️ market research
From space to specific Research Questions
We broke down the research goal into various specific research questions that correspond to different research periods.
Design and interaction
Evaluation and validation
At this phase, our research question would be complex. On one hand, we would like to know if there is any usability issue tied to our specific design. On the other hand, we would like to know if our system functionality satisfies the user needs that we identified in our exploratory research.
We read several articles online and were able to narrow down to three main focuses for later research.
Depending on who someone is travelling with, a person taking a call on the car’s audio system may not want to broadcast the call to everyone in the car. While the passengers do have the option of taking the call privately on their cell phones, the driver does not.
The infotainment system in a car is the single point of input for the audio output in a car. The same infotainment system must be used whether you want to make a call, navigate or play music. It is a system that provides a convenient touch screen or physical interface to perform the above operations. However, when there is a conflict of intent between users, the interface does not scale up. For example, let’s say a family is on their monthly family trip. Having been to the destination before, the dad suddenly realizes that they may be on the wrong route and wants to turn the navigation on. He goes to do so, but sees that the mom is actually scrolling through a list of songs to select one to play next. There’s the conflict. One of the users has to compromise on their intent temporarily to prioritize the other person’s intent.
Another interesting conflicting situation that can come up is when the different passengers in a car have different concerns at any point. Considering the same example of the family trip, the dad is concerned about navigational instructions, the mom is concerned about playing her favorite songs and the kids at the back want to watch (and listen) to their favorite cartoons. The audio system in the car is a shared system that can only output one stream of audio at a time. So what happens in this case? The music gets put on the car’s audio system. So that’s playing and everyone has to listen to that. The dad has the navigation on which will cut the music off every time there are navigational instructions to give out. And finally, the kids at the back use their own iPad to play movies which outputs its own audio into the shared car space mixing with the audio already being played. This results in, what Harman Kardon calls, “tech audio clutter”.
I used Qualtrics to create the survey for understanding users' current interactions and understanding of the in-car audio experience.
We received 141 responses by spreading the survey on Reddit and in Facebook groups.
We wanted to conduct a semi-anonymous survey to collect data on people's preferences regarding the interactive audio system when driving or riding in a car. Our main purpose was not to gain big data through the survey; we acknowledged that the number of people we could reach on such a tight schedule would be limited. Instead, the survey would help us determine which questions to ask and which tasks to focus on, and would help us find accessible, qualified target users for the following interviews or observations. Participants could leave their contact information on our semi-anonymous survey if they were interested in further research. Meanwhile, we would be able to look at the overall pattern of people's daily interactions with in-vehicle audio, from both the driver's and the passenger's perspectives.
Get a good broad sense to start with
We chose surveys as the starting point because we already had a basic understanding of how people in a family live and drive, which enabled us to write proper questions.
The cost is low: time cost and human resource cost.
The time and human resource costs of a survey are relatively low compared to interviews. We did not have to track the survey all the time; we could wait until the surveys had spread and run the analysis from the digital reports at any time (no 48-hour window).
Able to get quantitative analysis.
Surveys are great for gathering quantitative data. Since we were facing a broad user group, it was especially valuable to collect data for quantitative analysis; we needed to look at both majority and average behavior.
An easier way to reach out to users.
Filling out a 3-minute survey lowers the bar for people to participate in our research. It would be hard to reach out to each user directly and invite them to a 1-hour interview. Because of its form, a survey can be spread across various platforms and easily attract people to click the link.
Can be confidential and anonymous at the same time.
Our semi-anonymous survey let us keep people's contact information on record if they indicated willingness to participate in further research, while leaving everyone free to remain anonymous. This leaves the control to participants and encourages honest answers.
1. Design the Survey
2. Evaluate the Survey
We evaluated separately
After the survey was settled, we assigned each member a task. While one member was responsible for entering the questions into Qualtrics, the other three were responsible for checking the grammar, the logic, and the contents respectively. This strategy allowed each member to concentrate on one aspect of the survey.
We asked experts to help
Finally, we sent our survey to Dr. Carrie Bruce, the research coordinator of the MSHCI program at Georgia Institute of Technology, and to our client, Harman International. They went through the survey and gave us suggestions on the wording of the choices. Riley suggested that we split the “15 minutes - 1 hour” option of the “How long is your usual commute” question into “15 to 30 minutes” and “30 minutes to 1 hour”.
We finalized the survey according to their feedback.
3. Spread the Survey
We spread our survey mainly via social media platforms. Originally, we asked Harman International if they had any customers who could help. However, the only resource they had was their professional sound-testing participants. These participants are perfect for testing the hardware technology, but they are not necessarily our target users, parents with children. Therefore, we decided to find:
a. Adults who are possibly in a marriage
b. Adults who care about their in-car experience
After we spread our survey in car-related Reddit groups, we reached over a hundred responses. However, when reviewing the results, we found that over 80% of the responses were from male users. Thus, we targeted Facebook groups related to the keyword “Mom”, hoping to collect more mothers' perspectives on the in-car audio system.
Here is the overall list of groups we spread our survey to:
Mom helps Mom
Online Grocery shopping
Harman Kardon Fan Club
Our results show that only 12% of respondents had never used their own device in the car. This grounded our assumed need for individual sound zones, since ISZ can be a better way for passengers to enjoy their own choice of media content than playing it on their own devices.
Nearly half (48%) of the participants had used their phones to entertain themselves in a car. Since the phone is the most commonly used device, we may consider implementing our design with a phone app.
Around 70 percent of participants (n=92) reported that their usual commute is less than 30 minutes. This means that, when designing the solution, we should consider short-trip driving first. The final design artifact should be intuitive and should not take users much time before they get to their media content; on a short trip, they will not have time to deal with a complicated interactive system.
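Findings like these fall out of simple frequency counts over the exported responses. A minimal sketch of that tally, using hypothetical answer counts rather than our actual Qualtrics export:

```python
from collections import Counter

# Hypothetical commute-time answers (illustrative counts, not our real data).
commutes = (["under 15 minutes"] * 30 + ["15 to 30 minutes"] * 40
            + ["30 minutes to 1 hour"] * 20 + ["over 1 hour"] * 10)

counts = Counter(commutes)
total = len(commutes)
for option, n in counts.most_common():
    print(f"{option}: {n} ({n / total:.0%})")

# Combine options to answer a design question: how many commutes are short?
short = counts["under 15 minutes"] + counts["15 to 30 minutes"]
print(f"under 30 minutes: {short} of {total} ({short / total:.0%})")
```

Grouping the raw options into design-relevant buckets (short vs. long trips) is what turned the raw export into the "design for short commutes first" implication above.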
We also wished to dig deeper into the long-trip context in the interviews: how is it different from the normal short daily commute? More specifically, we wanted to know how people's need for in-car audio content changes with the trip's length.
I reached out to 4 parents, covering both driver and passenger roles, from the survey results pool.
We conducted semi-structured interviews and have collected over a hundred notes for affinity mapping.
To develop a clearer understanding of the different user preferences & typical behaviors of drivers and passengers in a car
To understand the physical and emotional environment in a car when user groups such as families travel together
To understand the role of privacy in a shared space
To explore the pain points of the existing in-car audio system
To seek findings from users’ experience with similar products
We conducted the surveys to understand the breadth of the problem space, demographic information, and driver/passenger details such as family members, typical commute times, audio preferences, etc. The surveys gave us quantitative data on our problem space and ensured that we had a large enough representation of our target audience. All the survey questions were multiple-choice, which leads to predefined answers. In order to get in-depth research insights, we conducted 4 in-depth semi-structured interviews. These interviews were conducted virtually and lasted around an hour.
It is also crucial for us to understand the context of the car as a shared media space for families. While we knew some of the questions that we wanted to ask, we also wanted to empower our research participants to share their stories and experiences. This requires the interview to be semi-structured.
Due to time constraints on our report deadline, we were able to conduct only 4 interviews. Further, due to the pandemic, we were not comfortable with observations or contextual inquiries, as they would require long periods of time in an enclosed environment.
While participants are evaluated on the same set of questions, it is also possible to ask in-depth follow-up questions when required
Interviewing direct users leads to the best insights
The validity of data is high
Helps build a rapport with the interviewee
In our survey, we had left a question for respondents to add their contact details (phone, email) if they were open to further interviews. However, we wanted to screen our interview participants to ensure the evaluation criteria were the same. Hence, we decided against putting interview questions to the 200+ survey respondents directly and recruited other participants.
We interviewed 4 users; each was interviewed by one team member. To address the questions fully and inclusively, we targeted users from different backgrounds.
We built affinity diagrams and from there created personas and empathy maps for our users.
Design and Iteration
Task-based User Testing
I tested 3 groups of users (9 in total) with paper prototypes and wireframes.
I led the testing: I generated the plan, booked the model car, and wrote the script. I was the moderator.
We mapped the findings to design implications.
It was extremely challenging for us to conduct the evaluation session because our design is for a specific environment and involves all the interactions happening there. Screens are not the only medium participants interact with; they are also encouraged to interact with handles, sensors, and each other orally to simulate the real in-car experience.
We conducted our evaluation sessions in the test car available in the Industrial Design Lab. We were keen to replicate the physical context of a real car, so the participants could understand the physical space and the concept of ISZ clearly. Additionally, some of our design ideas involved our sketches being placed on the internal spaces of a car, hence the availability of the test car was greatly useful for our evaluation session.
For two of the three design concepts we used paper prototypes and for one of them, we used a digital prototype using Figma, which we cast onto our devices using Figma Mirror.
In our design, we came up with 4 main components - screens, the ISZ app, voice interaction, and in-car accent lights. There are 3 screens in the car - the main dashboard screen and two screens placed on the seatbacks in front of the backseat passengers. ISZ has its own app that is used to control a user's sound zone.
We designed an NFC sensor for easy pairing between a device and its zone. The screens are the most direct way to interact with the system, as everything is visible. Buttons on the steering wheel and the voice assistant help drivers manage multitasking safely while driving. We also use accent lights that line the interior of the car to indicate the customizable sound zones.
Evaluation and Validation
Expert Based Heuristic Evaluation
I invited an expert in vehicle design, who previously worked on BMW's individual audio system, and a UX expert to interact with our digital prototype in a model car.
We asked them to identify usability issues according to Nielsen's 10 heuristics and to rate their severity.
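The issues from such a session are straightforward to aggregate by heuristic. A minimal sketch, using a hypothetical issue log (not our actual findings) and Nielsen's 0-4 severity scale:

```python
from collections import defaultdict

# Hypothetical issues logged by the experts: (heuristic, severity 0-4).
issues = [
    ("Visibility of system status", 2),
    ("User control and freedom", 3),
    ("Consistency and standards", 1),
    ("User control and freedom", 4),
]

# Group severities under each heuristic.
by_heuristic = defaultdict(list)
for heuristic, severity in issues:
    by_heuristic[heuristic].append(severity)

# Prioritize fixes by the worst severity reported per heuristic.
for heuristic, sevs in sorted(by_heuristic.items(),
                              key=lambda kv: max(kv[1]), reverse=True):
    print(f"{heuristic}: worst severity {max(sevs)}, {len(sevs)} issue(s)")
```

Sorting by worst severity per heuristic surfaces the issues to fix first before user testing.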
Our goal in conducting an expert-based heuristic evaluation was to root out usability issues. This way, we could finalize the user interface and user experience flow so they could be better tested in user testing.
Conducting an expert-based heuristic evaluation largely saves time because experts spot problems more efficiently than typical users do.
It is clearly guided by design principles, so there is a smaller chance that the evaluation session goes off track.
It is efficient because experts are likely to provide more condensed, valuable advice in a single session.
Given that we are designing a whole in-car audio system, we wanted to involve experts in both UI/UX design and vehicle-related industrial design, giving us an all-around perspective on the system.
Has practiced human-centered design work for years
Is familiar with human-centered design guidelines
Has relevant experience in vehicle work
Moderated Virtual User Feedback Session with SUS Survey
My team and I created a demo video so users could understand the system in a virtual session.
We asked for users' feedback on specific interactions and had them fill out the System Usability Scale (SUS) survey.
I calculated the mean, mode, and median SUS scores: 76, 77.5, and 72.5.
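The standard SUS scoring procedure converts each participant's ten 1-5 Likert responses into a 0-100 score, which can then be summarized. A minimal sketch (the sample responses are hypothetical, not our participants' data):

```python
from statistics import mean, median

def sus_score(responses):
    """Score ten 1-5 Likert responses on the 0-100 SUS scale.
    Odd-numbered items are positively worded (score - 1);
    even-numbered items are negatively worded (5 - score)."""
    assert len(responses) == 10
    total = sum(r - 1 if i % 2 == 0 else 5 - r
                for i, r in enumerate(responses))
    return total * 2.5

# Hypothetical responses from three participants.
participants = [
    [4, 2, 4, 1, 5, 2, 4, 2, 4, 2],
    [5, 1, 4, 2, 4, 2, 5, 1, 4, 2],
    [3, 2, 4, 3, 4, 2, 3, 2, 4, 3],
]
scores = [sus_score(p) for p in participants]
print(f"mean={mean(scores)}, median={median(scores)}")
```

Because each item contributes 0-4 points and the sum is multiplied by 2.5, SUS scores always land on multiples of 2.5, which is why summary values like 77.5 and 72.5 appear.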
In this session, we aimed to collect real users' thoughts and opinions on the current design. Since the whole system is large and every interaction has many details, we focused on several specific modules (introduced below). Besides qualitative feedback, we also wanted to synthesize a quantitative score using a SUS survey. Via both, we would get an overall sense of how well the system performs and what needs to be improved, and why.
Remote or in-person: We chose to run the session remotely because of COVID-19. Moreover, since our target users are family members, meeting the whole family, including the children, face to face could raise additional issues. We did realize that it can be hard to mock up the in-car scenario remotely; to solve this, we decided to make a demo video that would allow users to understand the system.
Moderated or not moderated: We decided on a moderated session because our designed system is too large. It involves two separate sets of screens, the app, the buttons on the steering wheel, the sensors, and the lights. It was impossible for us to mock up everything virtually (we could not even do that in person within the given time and cost constraints). Therefore, it was better to record a video and zoom directly in on specific interactions. This also let us get direct answers about the parts we wanted to focus on.
Participants are capricious regarding arrangements.
During the testing stage, we reached out to multiple participants to arrange testing sessions. While some participants showed up on schedule, others told us they needed to reschedule on short notice, and this happened many times. Since we were ahead of schedule, it did not cause too much trouble, but it reminded us to contact participants at an early stage in the future as an effective countermeasure against the unexpected. Another way to deal with this is to always have backup candidates in case previously recruited participants cannot attend.
Devise a plan that matches the fidelity of the prototype.
At first, we hoped to let participants freely explore the functionality of the prototype during the expert evaluation sessions so they could get a whole picture of what our prototype feels like. Essentially, this was a nice idea: give our experts full control of the system. However, we neglected one major issue, which led to some trouble during the evaluation. Since our prototypes were made in Figma, achieving that free control meant creating numerous connections between buttons and modules on different pages. That was an extremely difficult and lengthy job, and we inevitably made mistakes when creating links between pages. As a result, after participants clicked a button or module, either they were directed to an unrelated page or nothing happened; in both cases they were confused. This made us realize that our initial evaluation plan did not fit the way we built our prototype. Instead, we figured we could walk users through a few major flows, which would not only ensure a smooth evaluation session but also save a lot of time. A testing session with full user control might be more appropriate for fully developed prototypes. The key is to pick the evaluation method that matches the fidelity and nature of the prototype.
Follow a consistent evaluation procedure
Since we had eight evaluation sessions to conduct in total, it was impossible for everyone to attend all of them. In response, we divided the sessions among team members so we could finish the testing work in parallel. To better analyze and manage the data, we introduced a common evaluation procedure for everyone to follow. As a result, the final data differed in content but was similar in form, which greatly helped us understand and organize our findings in the analysis stage. A lesson we learned is to introduce consistency and conventions when collaborating with other researchers in the future.
Consulting experts at the beginning of the project
This is something that may not always be possible or recommended, but it can be super helpful in some situations, especially when operating in a domain that is complicated to understand. Performing a competitive analysis gives us knowledge about other systems that target the same problem space; that is one way of looking at the problem. Another angle is to consult an expert and understand the constraints and other widely known knowledge within the problem space that can inform the design. It is as important to understand what is not possible as what is possible. For example, within our project, we had to be cognizant that certain spaces in the car simply cannot be used, like the middle of the steering wheel, since the airbag is installed there. Additionally, experts can give good suggestions on approaching the problem thanks to their experience in the space. For example, during one of our expert evaluations, we got valuable feedback on designing interfaces for physical spaces: the expert recommended evaluating the possibility of using texture, material, and form to create the interfaces. This kind of feedback and suggestion early on can benefit the project greatly in the long term.