Team: Pranav Nair, Swar Gujrania, Serena Tan, Jordan Chen
Tools: Photoshop, Illustrator, Adobe XD, iMovie, Sketch
Research Methods: Semi-Structured Interviews, Contextual Inquiry, Prototyping, Interface Design, Usability Testing, Remote User Testing
Contributions: User Research, Concept Ideation, Interaction Design, Ethnographic Research, Survey Design, Journey Maps, Personas, Empathy Maps, Interface Design
“Focus Brands is made up of independent brands/units that are all targeting the similar consumer groups. Research is being conducted that is redundant while simultaneously decisions are being made without any research. Focus Brands would like to figure out the best way to create a repository that can easily be accessed across the company for employees to find data related to potential business decisions.”
-- Source: Focus Brands Kick-off presentation
Currently, at Focus Brands, research is shared across a multitude of platforms, including shared drives, SharePoint sites, file-sharing platforms like Dropbox, and even e-mail, which leads to a lot of confusion, especially in cross-team interactions.
Why is this a problem?
1. Time and money are spent looking for research that exists but is not easily accessible
2. Disorganization can lead to miscommunication and costly mistakes
3. The lack of structure makes on-boarding new employees to the current research infrastructure difficult
4. Information ownership exists in a transient space where knowledge keepers come and go
When we started gathering data, the first thing we did was ask for an organization chart from our stakeholders to better understand the roles and team structures of our users. This gave us a better sense of the different workflows and contexts that we would be observing in our data gathering activities.
This helped us determine the people we wanted to recruit for our initial ethnographic activities. In order to come to a strong solution, we first needed to understand how research is currently carried out at FB. Through our data gathering process, we would identify key behaviors that users utilize in their current workflow. Our aim was to then use all of the information we gathered to come up with a solution that could positively impact the process of finding relevant research across the company.
1. Understand the current state of research infrastructure across Focus Brands
2. Identify key behaviors by users in the research process
3. Propose a solution that can be implemented across the company to ease the process of finding relevant research
So we went out and learned all we could about research at Focus Brands through a combination of task-based observations and semi-structured interviews with their employees. We observed how users currently interact with the research infrastructure in their daily work, and we asked them questions to understand the thinking behind their behaviors.
Data Collection Methods:
1. Remote Observations:
We initiated our data collection process by requesting a remote observation of a user navigating the current repository to gain a better understanding of what ‘research’ meant to Focus Brands in context. By observing people going through the existing system, we hoped to capture:
○ a glimpse of how people interface with the system
○ the tasks they engage in
○ the goals they try to achieve
We hoped to use the tasks observed to conduct a preliminary task analysis and performance estimations to establish a baseline. Additionally, we believed the remote observations might highlight implicit constraints that influence a user's behavior; for example, in some companies IT restricts access to folders that have remained inactive for a certain period of time.
2. Think Aloud Observations:
As part of the remote observations, we asked our user to think aloud as they interfaced with the current systems. This request was made to provide more context to the observations by understanding the user's motivations behind performing certain interactions with the system. Since both activities were conducted with a single participant, our primary aim had less to do with gaining insights and more with learning enough about the current context in which the repository operates. This context would better inform the questions we asked while designing our semi-structured interviews.
3. Semi-structured Interviews:
Following the remote observation provided by the project representative from Focus Brands, we realized that we needed to gain more insight into the different kinds of users who interact with the current research storage system. Since user needs and expectations may vary between users who occupy different roles, we wanted to make sure we captured as much breadth of the research experience as possible to incorporate into our eventual design.
With the help of our project liaisons at Focus Brands, we were able to arrange six 30-minute interviews with users occupying various roles in different brands across the company. Interview subjects included brand managers, growth initiatives directors, marketing directors, licensing directors, and social media managers. Interviews were conducted at the Focus Brands headquarters and were supervised by the Focus Brands representatives. Each interview was led by one interviewer, with notes taken by a designated notetaker. The interviews were also recorded by video and audio with written consent from the interview subjects. We were also able to record users' screens to better understand their interactions.
After gathering our data, we took it through a series of analysis techniques to extract insights.
1. Task Analysis
Because the current research repository spans different platforms, users have developed different ways of utilizing them. We chose to do a hierarchical task analysis because it helped us identify the common and necessary steps users go through to retrieve and share research files. This method allowed us to understand the user's and the platform's requirements at each step, which helped us start identifying users' needs and pain points.
We analyzed the six users from our semi-structured interviews and distilled the following common steps for all users:
Retrieving the relevant research files
    If the location is known, navigate to the files
        Stored or saved the files themselves on SharePoint or the shared drive
        Received the path to the files on SharePoint or the shared drive via email
        Had the files emailed directly from others
    If the location is unknown
        Go through folders across the different platforms in the research repository
            Navigate through the folders
            Guess the location
        If still unable to find the files
            Ask relevant people via email
            Ask relevant people in person
Working on research files
    Analyzing research files
    Editing existing files for use
    Creating new files
Sharing research files with others
    Send out files via email
    Upload to SharePoint or the shared drive
        Choose which folder is suitable for the files
        Create new folders for the files
        Optionally email the path or location of the files to relevant people
    Do not share; save on a personal drive
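The hierarchy above can also be captured as a simple nested data structure. Here is a minimal sketch in Python; the labels mirror our analysis, but the structure itself is purely illustrative:

```python
# Hierarchical task analysis captured as a nested dict: each key is a task,
# each value is either a sub-dict of subtasks or a list of lowest-level actions.
task_analysis = {
    "Retrieve relevant research files": {
        "Location known: navigate to the files": [
            "Stored the files themselves on SharePoint / shared drive",
            "Received the file path via email",
            "Had the files emailed directly",
        ],
        "Location unknown": {
            "Search folders across platforms": [
                "Navigate through the folders",
                "Guess the location",
            ],
            "Still not found": [
                "Ask relevant people via email",
                "Ask relevant people in person",
            ],
        },
    },
    "Work on research files": [
        "Analyze research files",
        "Edit existing files for use",
        "Create new files",
    ],
    "Share research files": {
        "Send files via email": [],
        "Upload to SharePoint / shared drive": [
            "Choose a suitable folder",
            "Create new folders",
            "Optionally email the path to relevant people",
        ],
        "Do not share; save on a personal drive": [],
    },
}

def count_leaf_tasks(node):
    """Count the lowest-level actions in the hierarchy."""
    if isinstance(node, dict):
        return sum(count_leaf_tasks(v) for v in node.values())
    if isinstance(node, list):
        return len(node) if node else 1  # an empty list is itself a leaf action
    return 1

print(count_leaf_tasks(task_analysis))  # → 15
```

Representing the analysis this way makes it easy to traverse, count, or compare steps across users, which is handy when estimating baseline task performance.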
2. Affinity Mapping
To consolidate all the information we had gained from our remote observations and semi-structured interviews, we decided to generate an affinity diagram. By converting our handwritten notes into post-its, with one clear observation per post-it, we hoped to identify themes or patterns in how people interfaced with the knowledge management database (saving, sharing, searching, organizing, etc.). These themes would then assist us with classifying pain points, understanding how a single pain point influenced user behavior across multiple themes, and gaining a deeper understanding of the underlying problems that plague the current system.
In order to gain a better understanding of our user group, we began to map out the data that we captured, including quotes, thoughts, feelings, and actions of the people we interacted with. However, during the process, we realized that a single empathy map was too restrictive for the variety of people we'd interacted with. There appeared to be a clear distinction between people who prioritized organization over speed and vice versa, which led us to create separate empathy maps for each category.
Empathy maps were organized as follows:
People who prioritize speed over organization
People who prioritize organization over speed
Our empathy diagrams made us realize one important detail: regardless of the type of intervention we introduced, we were unlikely to change the behaviors of the "not so organized" type. This also had to do with the fact that we firmly believed changing the behaviors of both kinds of users would require guidelines enforced in some form company-wide.
Given this, and the time and scope of this endeavor, we decided to focus on addressing the pain points of the users who were inherently "organized" in their storage and access of research artifacts within the current system. We then developed personas for the organized users, capturing the three distinct personalities we were able to identify.
1. Lack of existing guidelines creates disorder
2. Interpersonal process of finding research
3. Managing and controlling access to files
4. Juggling between platforms can be extremely cumbersome
5. Understanding file context and relevance was key to our users' needs
Based on our findings, we created an initial set of concept sketches for feature ideas to address some of the common pain points we found. Below you can see a few representations of our early ideas. We explored interactions for searching for files, organizing files, and even thought about a dashboard interface with data visualizations and notifications for reporting weekly usage.
These concepts were taken into feedback sessions with Focus Brand employees where we walked them through each feature and asked for their thoughts and impressions. Using their feedback, we refined our ideas into a single wireframe prototype which we once again brought into another set of remote feedback sessions with users to understand where we could make additional improvements. Here is what we came up with:
Feedback on initial concepts:
1. Gave participants an update on our design progress and walked them through the sketch concepts.
2. Asked them to share their thoughts and impressions during the walkthrough.
3. Asked post-study interview questions.
4. Had participants rank the features by usefulness.
5. Analyzed the feedback as a team to identify design improvements that could be implemented.
Based on the feedback we received, our solution was to create an add-on pack of features that would fit into the existing platforms in use at Focus Brands. We wanted to utilize the existing infrastructure to eliminate the difficulty of learning a completely new platform. Here you can see how a user would add files and folders to the platform. They are presented with a window where they can tag the file to provide brief contextual information and strengthen the system's search results and suggestions. They can also specify where the file should live and who should have access to it.
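As a rough sketch of what the tagging window might capture per upload; the field names here are our own illustration, not an actual Focus Brands schema:

```python
from dataclasses import dataclass, field

# Illustrative metadata captured when a user adds a file to the repository.
# Field names are hypothetical -- they mirror the tagging window described
# above, not a real Focus Brands data model.
@dataclass
class FileUpload:
    filename: str
    tags: list = field(default_factory=list)    # brief contextual keywords for search
    location: str = ""                          # where the file should live
    access: list = field(default_factory=list)  # who should have access

upload = FileUpload(
    filename="2019_consumer_survey.xlsx",
    tags=["survey", "consumer insights", "2019"],
    location="Marketing/Research",
    access=["brand_managers", "marketing_directors"],
)
print(upload.tags)  # → ['survey', 'consumer insights', '2019']
```

Capturing tags, location, and access in one structure at upload time is what lets the system feed both search results and access-request workflows later on.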
User Study Design:
We ran evaluation sessions with 4 Focus Brands employees in separate 30-45 minute sessions. In each session, we conducted a task-based cognitive walkthrough where we presented the participant with a specific task to complete within the prototype. As they went through the process of completing the task, we asked them to voice any thoughts or opinions they had in order to get a sense of their thought process while interacting with the prototype. We also asked them a few questions about each screen of the prototype to identify any potential areas where we could improve communication. After each task, we had the participant fill out a NASA TLX survey to rate how taxing they perceived the task to be.
Once all tasks were completed, we also had the participant fill out a System Usability Scale survey to try to quantifiably rate the overall usability of the prototype.
30-45 minute sessions
Task-based Cognitive Walkthrough
Think aloud protocol
TLX survey given after completion of each task
SUS survey given after completion of all tasks
Link to Interactive Prototype:
Here's a link to the interactive prototype we used for our usability evaluations.
1. Improve communication on the tagging window
2. Fix scaling on search results page and include sorting by columns
3. Provide recipient of an access request with a window to elaborate on why they are rejecting the request
4. Improve communication of auto-organizer functionality
5. Clarify when the auto-organizer has completed the suggestion process
In addition to the feedback we received in the cognitive walkthrough sessions, we had some additional insights from the TLX scores we collected.
First off, a brief explanation: the TLX score indicates the demand of completing each of the four tasks. A lower score indicates a lower demand for that task.
Right off the bat, we can see the auto-organizer was more taxing to understand, potentially because it's a new feature and because of the communication issues around how the auto-organizer functions. We also see from the performance score that people felt they were less successful in completing the file tagging process. Finally, the Access scores were especially low, indicating that workflow was an easy method for requesting file access.
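For readers unfamiliar with the instrument: the unweighted "raw TLX" (RTLX) commonly reported is simply the mean of the six NASA TLX subscale ratings, each on a 0-100 scale. A quick sketch, using made-up ratings rather than our participants' actual data:

```python
# Raw NASA TLX ("RTLX"): the unweighted mean of the six subscale ratings,
# each on a 0-100 scale. The example ratings below are illustrative only.
def raw_tlx(mental, physical, temporal, performance, effort, frustration):
    return (mental + physical + temporal + performance + effort + frustration) / 6

# e.g. a hypothetical participant rating the auto-organizer task:
score = raw_tlx(mental=60, physical=10, temporal=40,
                performance=50, effort=55, frustration=45)
print(round(score, 1))  # → 43.3
```

The classic NASA TLX also supports pairwise weighting of the subscales; like many teams, we report the simpler unweighted average.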
Looking at our SUS scores, it’s clear that there’s room for improvement in the usability of these features. Typically, a strong SUS score is above 70, but we were below that threshold with half of our participants.
The prototype we shared with our users scored an average of 71.875, which is slightly higher than the commonly cited average SUS score of 68. At face value, this indicates that our system has reasonably good overall usability. However, it is important to note that the score was calculated from a very limited sample size, so individual scores carry more weight in influencing the average. Specifically, User 1 rated the usability of the prototype highly at 87.5, which pulled the average slightly above the 68 threshold. 2 out of the 4 users surveyed rated the prototype below 68, clearly indicating that improvements are necessary to bring our proposed system to a usable state prior to implementation.
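For context on where these numbers come from: SUS has 10 Likert items rated 1-5; odd-numbered items contribute (rating − 1), even-numbered items contribute (5 − rating), and the sum is scaled by 2.5 to a 0-100 range. A short sketch with made-up responses:

```python
# Standard SUS scoring: 10 Likert items rated 1-5.
# Odd-numbered items contribute (rating - 1); even-numbered items (5 - rating).
# The sum of contributions is multiplied by 2.5 to reach a 0-100 scale.
def sus_score(responses):
    assert len(responses) == 10
    total = sum(r - 1 if i % 2 == 0 else 5 - r  # i is 0-based, so even i = odd item
                for i, r in enumerate(responses))
    return total * 2.5

# Illustrative responses, not our participants' actual answers:
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # → 100.0 (best possible)
print(sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]))  # → 75.0
```

With only four respondents, a single enthusiastic rating like User 1's 87.5 can shift the mean by several points, which is why we treat the 71.875 average with caution.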
Although our sample size of participants was small, the scores gathered through the SUS and TLX surveys show that there were necessary improvements to be made across the entirety of our prototype. Our analysis of the data gathered through the cognitive walkthrough sessions helped guide our attention to specific issues and improvements that should be made in future iterations of the prototype.
Our recommendations for improvements to our proposed system are listed below:
1. Unclear controls in the file tagging screen
Users seemed to be confused about the “Add folder location” and “Add people” controls at the bottom, as they could not immediately understand what these two controls meant. In addition, it was unclear to users whether the “Add” button confirmed the files/folders they were uploading or the tags they had chosen.
For the “Add” button, we plan to change the name to make its function explicit, e.g. “Add tags” or “Upload file/folder”. For the two controls at the bottom of the window, we will also change the names to something more representative of their respective functionalities, such as “Manage File/Folder Location” and “Manage Access”. It may also help to add explanatory text clarifying the purpose of each option.
2. Hierarchy for search options
Many users reported that they valued brand more than any other search option. Also, the category ‘Brand’ is not sufficient to classify every file or folder in the system, since some relate to central departments instead of individual brands.
Rearrange the search options to match users' preferences. Based on the feedback we gathered, the “Brand” option should move to the top. In addition, expand the major category of ‘Brands’ to include both Brands and Departments to make the context clearer for the user.
3. No habit of searching
When we asked users to search for a file, many ignored the search bar at the top left. Instead, they went into the folders and tried to find the files there. Although we introduced this new search feature into the system, it would be ineffective if users do not use it.
Users need proper training on, and awareness of, the search functionality in the new system. An easy on-boarding demonstration would help users learn about the search engine.
4. Lack of group sharing
Participants expressed interest in being able to share files not just with individual people, but also with groups of people, as they often need to share files with the same group of people in their brand.
Adapt the design to include the ability to add groups of people, apart from individuals, and make the two options visually distinct from one another.
5. Lack of preview for folders
Unlike files, folders lack a preview functionality in the system. Although folders differ from files, they afford certain features that parallel those of files.
Design and interaction should be consistent between files and folders for these common functionalities. This includes having a preview and metadata display for folders too.
6. Tree structure was foreign to the users
Users were not clear on what the folder tree structure in the auto-organizer represented. Since users had never used or seen the dummy folder structure we created in the prototype, they were not aware that the diagram showed the existing folder structure of the selected folder.
Add some clarifying text near the folder structure for users who are not familiar with it.
7. Unclear option controls for auto organizer
Similar to the tagging screen, users were not clear on the functionalities of the “Organize subfolders” and “Archive old files” options. We were also vague about the definition of “old” files; users expected to know the threshold at which the system considers a file ‘old’. The toggle switch for “Move actual files” was often ignored, and when asked, users expressed confusion about what this option would control.
Better communicate the options by renaming the controls and adding explanatory text for each. For the archive function, the system should give users more customization, such as surfacing the archiving threshold and letting them set their own. In addition, an onboarding session for the auto-organizer is recommended.
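One way to surface the threshold explicitly is to make it a visible, user-configurable parameter of the archiving rule. A minimal sketch; the 18-month default is our assumption for illustration, not a specified behavior:

```python
from datetime import datetime, timedelta

# Hypothetical archiving rule for the auto-organizer: a file is "old" if it
# has not been modified within a user-configurable threshold. The 18-month
# default below is an assumption for illustration only.
DEFAULT_ARCHIVE_THRESHOLD = timedelta(days=548)  # roughly 18 months

def should_archive(last_modified, now=None, threshold=DEFAULT_ARCHIVE_THRESHOLD):
    """Return True when the file's age exceeds the archiving threshold."""
    now = now or datetime.now()
    return (now - last_modified) > threshold

now = datetime(2020, 1, 1)
print(should_archive(datetime(2018, 1, 1), now=now))   # → True  (untouched ~2 years)
print(should_archive(datetime(2019, 11, 1), now=now))  # → False (modified 2 months ago)
```

Exposing `threshold` in the options screen, alongside plain-language text like "archive files untouched for 18 months", would address both the vagueness users reported and their desire to set their own cutoff.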
8. Confusing progress chart
Users were not sure what the progress pie chart indicated: whether it showed the progress of moving files or of the system generating organizing recommendations.
Add text to make clear what the progress chart indicates. In addition, only associate the progress chart with the screens where it is relevant, i.e. only show it on screens where the system is generating recommendations. Removing the progress chart from unnecessary screens can help reduce user confusion.
9. Need for more manual control
For now, the auto-organizer only gives notifications and a history review to relevant users; it does not allow users to report any problems they see in the history review. Users expected more manual control over the organization process.
Incorporate ways to manually override the system's suggestions, and make sure the system learns from these instances to improve its recommendation model. The system should also provide undo functionality with corresponding notifications.
10. Lack of priority for requesting access
Users sometimes need to gain access to a file or folder urgently. However, the current system does not provide an option to change the priority level on an access request to convey that urgency.
Allow users to set different priority levels for their requests, giving them the ability to mark a request as urgent, if required.
11. No way to give a reason when rejecting an access request
Users were concerned about what happens after they hit the “Reject” button:
They want to give reasons why they rejected the request.
If they were the requester, they would want to know why they had been rejected.
We will introduce a text box for the recipient of an access request to write a note describing the reason for rejection.
Based on all of this feedback, we can address some of these issues with the following improvements.
1. All of our data gathered in the evaluation sessions showed that communication could be improved across the board. So we can rethink how we communicate functionality across all of our labels, buttons, and icons.
2. We need to make the search result window more legible, so we can play around with the organization and display of elements on that page.
3. With the interpersonal component being so core to the research process at FB, it makes sense to provide the recipient of an access request with a way to communicate their decision to the requester.
4. Finally, it was evident that we need to work a lot more on the auto-organizer to make that interaction easier and more intuitive for users.
First off, we would need to implement the improvements identified in the usability evaluation sessions. We also need to think through, in finer detail, the actual algorithms that would power this solution's functionality. Then we would need to conduct additional user testing with a more functional prototype, across a larger sample of users, to strengthen the findings from our quantitative methods with more realistic interactions. And lastly, if we were to implement this at Focus Brands, we would need to establish a set of standard practices and guidelines for employees, in accordance with current IT policies, to ensure we are not infringing on any structures set in place for the safety and stability of the current network of information.