Team: Pranav Nair

Tools: Photoshop, Illustrator, Adobe XD, Arduino CC, Swift, After Effects, iMovie

Research Methods: Literature Reviews, Design brainstorming, Participatory Design Workshops, Usability Testing, Surveys, User Interviews, Quantitative Data Analysis

Contributions: This project was completed as a Master's Thesis and was executed individually.

user study.png

In this thesis I investigated whether providing directional alerts, i.e., directional information about an oncoming point of interest, on a passenger’s active screen can augment their ability to regain situational awareness when traveling in a semi-autonomous (Level 3) vehicle. To this end, I built two prototypes for such alerts: one located at the center of the user’s attention and one at the periphery.


I evaluated them in a driving study conducted in a simulated lab environment. Although I found no significant differences in reaction times, participants perceived themselves as performing better when provided with directional alerts. Findings from my study suggest directional user interfaces have the potential to reduce overall cognitive load and lead to better user experiences for passengers of self-driving vehicles.

MID Thesis defense - Pranav Nair.png


To kick things off, I conducted a literature review of the space of self-driving vehicles as it relates to the field of human-computer interaction. To narrow my focus, I specifically reviewed academic publications on the topic of driverless car interfaces from the past five years to understand some of the challenges currently faced by the technology.

To effectively present the state of the art, I would like to take this opportunity to explain some basic terminology.


What is automation?

Pilots coined the term while referring to the autopilot in the cockpit of the aircraft. A more academic definition describes it as machines performing tasks ordinarily performed by people. Traditionally, automation is used when the task is:

  • Impossible or dangerous (sending drones instead of humans in hazardous environments)

  • Difficult/unpleasant (garbage collection/delivery trucks)

  • Beyond unaided human capability (aircraft autopilot systems that assist pilots in navigation)

But why automate cars?

Some of the most common reasons voiced by leading car manufacturers are:

  • To enhance overall road safety: the US Department of Transportation estimates that ~94% of crashes are due to human error (USDOT, 2015)

  • To reduce traffic congestion: cars are more efficient if they can all communicate and coordinate with each other on the road

  • To improve driver well-being by reducing the cognitive load required to drive a car

  • To introduce new products and services: companies can now feed you more content on the go (e.g., AT&T, Comcast)

When we think of automation in cars, we tend to think of fully autonomous cars, but many systems in cars are already automated.

  • Alerts (e.g. check engine, tire pressure)

  • Lane Assist

  • Gear shifting

  • Adaptive Cruise control - regulates speed

  • Parking

  • Driving???



Currently, the Society of Automotive Engineers (SAE) defines six levels of driving automation (Levels 0–5), ranging from:


Level 0 - No automation, complete manual control

Level 1 - ABS, Power Steering

Level 2 - ACC, Lane maintenance, guided parking where car parks itself

Level 3 - Conditional automation, roughly where Teslas are today; still involves a manual takeover


That being said, researchers argue we aren’t at Level 3 but closer to 2.5: Level 3 should be described as requiring the driver to be able to drive after some alert and transfer of control. The amount of time allowed for this transfer is still unspecified, which is why L3 is still not entirely feasible at this point.



Given the current level of automation, a common protocol followed between automation and driver is that of a Take-Over Request, or TOR. When automation reaches its system limits, i.e., a situation it can no longer handle, it submits a TOR to the driver to assume manual control.


A study*** compared priming drivers for TORs 5 s and 7 s before automation failure and found that although drivers reacted faster with 5-second warnings, their reaction quality was poorer. Given this, it is fundamental that the driver is made aware of the transition early enough to avoid potentially dangerous situations and to ensure a comfortable take-over process.




There are still certain challenges faced by drivers when trying to complete take-over requests while seated in autonomous cars.


Out of the loop: People have been observed to find it extremely difficult to “jump back in” and perform a manual takeover of the driving role when seated in a partially autonomous vehicle. This has resulted in them performing poorly when resuming manual control of the car.


Excessive trust: Studies have found that drivers not only engage in a wider range of tasks under autonomous conditions, but also interact with some of these tasks at a higher rate or frequency. The same study found that once riders trusted the system, they would more readily engage in activities considered hazardous in non-autonomous situations, such as responding to emails, watching videos, or texting frequently.


Reduced SA: Due to increased distraction from secondary activities, riders might also not notice relevant information (objects in the road, cars driving in adjacent lanes), details which become critical when resuming manual control and trying to gauge a risky situation on the road.

One of the reasons for such poor reaction quality, researchers have hypothesized, is that current TOR protocols still require the driver to perform multiple tasks in a very short duration of time to effectively complete the request and perform an evasive maneuver.


For a successful TOR:


  • The driver would need to visually focus on the critical event

  • Move their hands to grip the steering wheel

  • Keep their foot ready to press the gas or brake pedal

  • Then perceive the event correctly and react accordingly


I then conducted a second round of literature review, focused on how interfaces can improve take-over requests. Some promising solutions emerged:

Studies have indicated that providing TOR requests through a combination of audio and visual cues in the area of the user’s attention (their active device) effectively captures the user’s focus.


It was also found that bimodal TORs yielded mean steering reaction times that were slightly faster than unimodal TORs.


Mechanical movements are more effective at gaining a user’s attention than purely visual warnings.



To summarize, here are some of the insights I gained from my literature review:


  • There is potential in warning drivers using non-conventional methods in cars.

  • Effective/Fast reaction time from a driver does not necessarily result in a better reaction quality.

  • Mechanical motion seems to be more effective at gaining user’s attention in their visual periphery.

  • When people trust in an autonomous system, they are more likely to engage in riskier activities.

  • Human attention is a continuum, with multiple devices vying for a user’s attention.

  • Users change their engagement level with an interface depending on the difficulty of the task at hand.

Reflecting upon these insights led me to the research question I wanted to pursue.

"How can I design directional alerts to improve human reaction times within the context of semi-automated vehicles?"


I wanted to build off past work that highlighted benefits of providing these alerts to a user on their active device. My goal was to understand whether directional information would augment the user’s ability to react faster to an oncoming point of interest. I proceeded to brainstorm ways in which I could design a mobile interface that would achieve this goal.


While generating my concepts, I came up with additional design constraints based upon my own experience as a designer, to ensure whatever solution I proposed would face minimal implementation challenges:

  • Solution must exist within the user’s computing ecosystem: no additional wearables, gadgets, or peripherals

  • User must not spend too much time trying to perceive the message: the UI should not be so confusing that the user has to spend time working out what the interface is trying to communicate

  • Solution must augment user’s situational awareness, not distract from it. They should be able to react faster to a situation

  • Solution must additionally assist the user with their driving quality, i.e. help them perform better evasive maneuvers

  • Must be easy to integrate into user’s everyday routine, must not require additional steps for implementation on the user’s end



The method I propose in this thesis is inspired by the gaming industry. Different genres such as first-person shooters (e.g. Call of Duty) and action role-playing games (e.g. Mass Effect) have taken advantage of directional indicators in different 3D scenarios to communicate a threat or Point of Interest (POI) located outside the gamer’s Field of View (FOV) using abstract symbols such as arrows, avatars, mini-maps, or other visualizations. These techniques have been found to be extremely efficient in multiplayer gaming communication, helping players immediately identify the direction a hazard is coming from using minimal information.


One key difference in implementation, however, lies in the next steps a user needs to perform upon receiving the alert. Within the gaming environment, users do not need to change their orientation; they simply move their cursor towards the POI. In a driving scenario, however, the user must reorient themselves to look outside to be able to locate the POI. This interaction gestalt prompted the exploration of solutions that went beyond the limits of the screen of a non-directional interface.


Within those constraints I came up with concepts for two mobile interfaces: the central and the peripheral interface, named based on where they appear within the user’s visual area of focus.

2.3.1 Central User Interface

The purpose of the central user interface was to communicate a POI alert to the user in the direct area of their visual attention. It was designed to interrupt the active task a user was performing on their mobile device and update the screen with a user interface that displays the directional alert. The main objective of this solution was to grab the user’s attention as quickly as possible.


  • In the direct area of their visual attention

  • Grab the user’s attention as quickly as possible

  • Interrupt the active task a user is performing on their mobile device

  • Update the screen with a user interface that displays the directional alert

2.3.2 Peripheral User Interface


Why peripheral? So as to not disrupt the user’s task while still warning them about an incoming point of interest, giving the user the autonomy to wrap up their active task before reacting to the POI. It achieves this by communicating the POI alert in the periphery of the user’s attention.

  • In the periphery of their visual attention

  • Do peripheral cues trigger a fast reaction from users for gaining SA?

  • Designed to engage without interrupting the active task user was performing

  • Would manifest in the periphery of user’s mobile device


If the solution were to work, a case could be made for hardware guidelines of future mobile devices to include a peripheral interface that could warn drivers of oncoming hazards.


Presented here is the framework I followed, inspired by research-through-design methodologies. The arrows within the framework indicate how one set of methods informed the next steps within my process. The framework forced me to build, evaluate, and iterate on my ideas every step of the way: starting from initial sketch concepts that were evaluated and refined in a participatory design workshop, right through to a usability study with high-fidelity, low-resolution prototypes that provided me with insights and final design suggestions.

Research Framework.jpg


As presented in the framework, a participatory design session was organized to collect feedback on the early-stage concepts for the directional interfaces and on the user study design from designers (the participants) within the Human-Machine Interface Lab at Georgia Tech. Four participants with more than three years of user interface design experience were invited to the workshop.

Each participant was individually seated in front of a mock-up low-fidelity simulator environment and was introduced to the basic user scenario of a passenger distracted by their mobile device during a journey in an autonomous vehicle. The participants were presented with low-fidelity mobile phone prototypes, which included a blank sheet of paper depicting the real estate, or area, within which their attention would be focused. Once briefed on the driving scenario, the participants were presented with each of my concepts and asked to visually interpret how they felt the alert would manifest itself on the interface through words, gestures, or sketches. This visual interpretation was documented by requesting each participant to sketch on the piece of paper provided and think aloud as they worked through how each solution might operate for a given use case.

Laser-cut cellphone templates for participants to communicate ideas visually. Blank area represented space within which user’s attention would be focused.


They were then seated in a low-fidelity simulator and briefed on driving scenarios that traditionally require an AV to submit a TOR.


Participants were presented with each of my concepts and asked to interpret how they felt the alert would manifest itself on the interface through words, gestures, or sketches.


The participatory design workshop provided several insights, suggestions, and recommendations that informed key aspects of the design of the user interface alerts, as well as the user study created to evaluate them.

  • The researcher should articulate the autonomous driving experience with an example at the very beginning of the user study. This helps reduce confusion regarding intended functionality of the solution.

  • While some designers suggested using light as a secondary modality, they voiced concerns about the visibility of LEDs in daylight. Hence, lighting conditions may be important to control and keep consistent during the user study

  • One designer mentioned that during high immersion, such as when playing FIFA, they were unlikely to notice movement in their periphery unless it was made extremely obvious

  • As designers started to get immersed in the scenario and role-play, certain user behaviours also emerged:

  • They wanted the alerts to help reorient their heads to street-level, i.e. street view

  • They also mentioned they would like to know that the alerts had not disrupted their active task. (e.g. add an “app has been paused” message to the screen)

  • They voiced concerns about excessive information on the UI and how it may distract them further instead of making them look up

  • With regards to the design of the alerts, participants responded well to compass-like forms.

  • They suggested adding secondary interactions for the peripheral alert, such as a laser pointer at the tip of the compass indicating where to look on the windscreen

Based on the insights I received from the workshop, I proceeded to develop the prototypes I would use in my user study. Given the time and resources at my disposal, I decided to use the Wizard of Oz approach to build systems that could convincingly simulate the functioning of both interfaces.



3.2.1 Central User Interface

The central user interface was designed in Adobe Illustrator and After Effects. The interface, as pictured in the diagram, consists of an arrow with a ring that provides spatial reference to the user, similar to a compass. When engaged, the arrow would move to highlight the direction in which the POI was located, as well as update its angle to match the direction in real time. The designed animations and motion graphics were exported in MPEG format for use in the user study. As this alert was meant to interrupt the user’s active task, I chose a custom-built game, Turtle Survival, as the screen task (the secondary “distraction” task) in the user study. This provided me with maximum freedom to control the simulation while keeping study participants engaged. The alert would engage in response to a predetermined timer that triggered every time the application was opened; the timer event was created so that I could modify when the alert would engage for each respective simulation.

  • Design using Illustrator and After Effects

  • Inspired by digital compass-like forms

  • Arrow pointed towards POI relative to user orientation

  • Integrated into custom-built iOS game: Turtle Survival

  • Alert engaged based upon predetermined timer

Central UI.png

Here’s a demo of the interface being engaged in-app. As you can see, I set the timer in Swift (Xcode) and then uploaded the game to my iPad, which was the mobile device used in the study. Users would then start playing the game.
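The predetermined-timer trigger described above can be sketched as follows. This is an illustrative Python sketch of the Wizard-of-Oz logic only, not the actual Swift implementation; all names here are assumptions.

```python
import time

def schedule_alert(delay_s, direction_deg, on_alert):
    """Wizard-of-Oz trigger: engage a directional alert a fixed delay
    after the app is opened (the study used 30 s or 120 s)."""
    time.sleep(delay_s)       # the real app keeps running the game loop meanwhile
    on_alert(direction_deg)   # e.g. show the compass arrow at this bearing

fired = []
# Illustrative: engage the alert 0.05 s after "launch", POI 45 degrees to the right
schedule_alert(0.05, 45, fired.append)
print(fired)  # [45]
```

In the actual prototype the equivalent timer was set in Swift before each round so the alert engaged at the interval required by that simulation.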

3.2.2 Peripheral User Interface

The peripheral user interface took the form of a physical arrow attached to the back of the mobile device. The prototype remained hidden from sight at the front until triggered to engage, at which point it presented itself on the relevant boundary of the mobile device. The user interface of the alert was designed using physical prototyping methods. To determine its temporality, this research took inspiration from published findings highlighting how mechanical motion catches a user’s attention faster than traditional changes of state, such as color or brightness, caused by ambient lighting.

After determining the desired motion, I engineered the mechanical and electrical structure that would drive the mechanism. In this instance, a rack-and-pinion mechanism works in sync with a rotational motion on the compass itself to attract the user’s attention. The whole system was built on the open-source Arduino platform. All non-electrical parts of the prototype were laser cut from chipboard except the rack-and-pinion mechanisms, which were cut from acrylic. The parts were designed and assembled using the open-source DIY platform Paper Mech’s rack-and-pinion design as a reference, with modifications to allow the prototype to function as a peripheral on the mobile device used in the user study. A high-torque servo was used to support the extra weight of the micro-servo and physical arrow.

The physical arrow itself consisted of a laser-cut base onto which a linear strip of Adafruit NeoPixels was glued to provide lighting that matched the digital prototype. The arrow was then covered with a sheet of mylar to diffuse the light across the entire form and make it feel coherent. During the user study, the peripheral prototype was mounted on the mobile device.

  • Design using physical prototyping methods

  • Uses mechanical motion to try to attract user’s attention

  • Interface hidden from user’s sight on the back of the device

  • Once engaged, would point in the direction of POI relative to user direction

  • Alert engaged based upon a predetermined timer on the Arduino microcontroller

  • Used LEDs as an additional modality to grab user attention


Here is a sample video showcasing how the arrow would manifest on the periphery of the user’s device. As mentioned previously, the goal of such an interface is to not interrupt the user’s active task; hence, the user may keep playing Turtle Survival if they choose to do so.
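To make the direction logic concrete, here is a rough sketch of how a POI bearing could be mapped onto a hobby servo’s sweep so the physical arrow points toward the POI. The 0–180° servo range and the function name are my assumptions for illustration, not the thesis’s Arduino implementation.

```python
def poi_to_servo_angle(poi_bearing_deg):
    """Map a POI bearing relative to the passenger's forward view
    (-90 = far left, +90 = far right) onto a hobby servo's 0-180 degree
    sweep so the physical arrow points toward the POI."""
    clamped = max(-90, min(90, poi_bearing_deg))  # keep within the arrow's reach
    return clamped + 90                           # -90 -> 0, 0 -> 90, +90 -> 180

print(poi_to_servo_angle(-90))  # 0   (arrow fully left)
print(poi_to_servo_angle(30))   # 120 (slightly right of straight ahead)
```

On the real prototype, the resulting angle would simply be written to the servo, while the rack and pinion ejects the arrow into the user’s periphery.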

3.2.3 Baseline User Interface

A non-directional user interface was designed and used as the baseline, defined as a traditional interface that simply provided an alert for an oncoming POI with no additional directional information. This alert would engage similarly to the central interface, based upon a predetermined timer event shown in-game.

baseline UI.png


3.3.1 Simulation Setup

To conduct this user study in a controlled environment, I collaborated with PhD students at the Sonification Lab and gained access to one of their low-fidelity driving simulators. The simulation setup consisted of a high-definition television, a steering wheel, and a foot-pedal game controller. Participants were seated in a chair facing the setup. The steering wheel and foot pedal served only as dummy devices to reinforce the immersion of sitting inside a vehicle; participants were not required to interact with the simulation at all. During an active session, the television played first-person-view footage of a car driving.

  • HD TV (Monitor) + foot pedals + steering wheel

  • Participants of this simulation will be seated in the chair

  • The steering wheel and foot pedal are present only to serve as dummy devices.

  • Participants are not required to interact with the simulation at all.

  • HD television plays first person view footage of a car driving.


simulation setup.jpg

3.3.2 User Study Design

As the simulation used a fully autonomous vehicle, participants were not required to have any prior driving experience to qualify for this user study. Fifteen participants (13 male, 2 female), aged 19 to 32 (M=25.06, SD=3.05), volunteered their time from the School of Industrial Design. Participants’ driving experience ranged from 1 to 16 years (M=7.26, SD=4.62). IRB approval was obtained ahead of the experiment.

The study employed a 2x3 repeated measures within-subjects factorial design, in which the effect of three types of interfaces, namely traditional, central, and peripheral, was tested across two different time instances, 30 seconds and 120 seconds, after the beginning of each round of testing. Using findings from the participatory design workshop, the study operated under the hypothesis that time spent on the secondary task was directly proportional to immersion.


During each round, subjects’ reaction data and behaviors were collected through eye tracking, observations and video recordings. 

The data was then run through a 2x3 repeated measures ANOVA (Analysis of Variance) for further analysis.

  • Employed 2 x 3 repeated measures within subjects factorial design

  • Three types of alerts: traditional, central, and peripheral

  • Two immersion conditions: 30 seconds, and 120 seconds

  • User reactions were recorded using eye tracking, observations, and video

  • Reaction time data was run through 2x3 repeated measures ANOVA 
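The 2x3 design above works out to six conditions per participant, matching the six rounds of testing. A trivial enumeration makes this explicit (variable names are illustrative):

```python
from itertools import product

modalities = ["traditional", "central", "peripheral"]  # the three alert types
immersion_s = [30, 120]                                # the two immersion conditions

# Full crossing of the within-subjects factors: each participant
# experiences every (modality, immersion) pair once, one per round.
conditions = list(product(modalities, immersion_s))
print(len(conditions))  # 6
```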

Here’s some sample POI footage shown to users during the study. Dash-cam footage of the first-person view of a car navigating a road was obtained from YouTube and, using Adobe After Effects, points of interest were digitally inserted into the video.

3.3.3 Testing Procedure

After completing the consent forms, participants were introduced to the simulator, the Tobii Glasses, and the mobile game they would be required to play as the secondary task. The test started with a warm-up session instructing the participant on how each alert worked to communicate a Point of Interest in the video simulation, as well as familiarizing them with how to call it out when they recognized it. They were also familiarized with the mobile device (a 12-inch iPad) and the video game controls at this stage, and were instrumented with the Tobii eye tracker to track their gaze. After the warm-up session, the user study was initiated. It consisted of six rounds of testing. In each round, participants were taken through a road trip in an autonomous vehicle. During the trip, they were instructed to perform a secondary task, i.e., playing a mobile game, until prompted otherwise. Participants were requested to bring their own headphones or earphones and wear them during the simulation to reduce the audio inputs they received. Alerts were sent to the mobile device at a predetermined interval of either 30 or 120 seconds during each simulation. When alerted, participants were required to look up and call out the POI on the screen as soon as they recognized it. Each simulation would end as soon as the POI left the screen.

  • Warm-up session to familiarize participants with alerts, and secondary task (game)

  • Study consisted of six rounds of testing overall

  • Participants taken through a road-trip scenario in each round

  • Instructed to perform secondary task, i.e. play a game, until prompted otherwise

  • When alerted, participants were required to look up and call out POI on the screen as soon as they recognized it.

  • Session ended as soon as POI left the screen

3.3.4 Data Collection Methods

At the end of each round, there was a two-minute break during which participants were asked to identify the POI they had seen in the previous session and locate it on the screen by drawing the POI on a sheet of paper. The sheet of paper contained the frame of the screen of the low fidelity simulator to act as a reference.


Upon completion of all scenarios, participants were asked to complete post-study surveys based on the SUS (System Usability Scale) and NASA TLX (Task Load Index) questionnaires and to answer a series of open-ended questions. Finally, they were asked to sketch ideas they might have for improving both the central and peripheral prototypes or, alternatively, to write them down as notes/comments.

  • Eye tracking and video recording were used to capture participant reaction-time data

  • Between each round, participants were asked to identify the POI they had seen in the previous session and locate it by drawing the POI on a sheet of paper

  • At the end of the study, participants answered a few open-ended questions and completed System Usability Scale (SUS), and Task Load Index (TLX ) surveys

  • Finally, participants were requested to sketch ideas for improving both prototypes



A total of 90 videos were collected, of which 10 instances were removed due to video recording errors, the alert engaging early, or no reaction from the participant. This resulted in a total of 80 videos across participants, which were exported individually. Reaction times were calculated as the difference between the moment the alert engaged and the time it took participants to call out the POI. Once reaction times were calculated, a 2x3 factorial within-subjects repeated measures ANOVA was conducted using IBM’s SPSS statistical analysis toolkit. The means and standard deviations for each of the six groups are presented, as well as a graph highlighting the interaction between modalities and immersion conditions.
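The reaction-time bookkeeping described above reduces to a per-trial difference, with invalid trials excluded. A minimal sketch (the field names are hypothetical, not from the thesis):

```python
def reaction_times(trials):
    """Reaction time per valid trial: callout time minus alert-engage time,
    both in seconds from the start of the round. Trials lost to recording
    errors, early alerts, or no reaction carry None and are dropped."""
    times = []
    for t in trials:
        if t.get("alert_s") is None or t.get("callout_s") is None:
            continue  # excluded trial
        times.append(t["callout_s"] - t["alert_s"])
    return times

trials = [
    {"alert_s": 30.0, "callout_s": 32.4},   # valid trial
    {"alert_s": 120.0, "callout_s": None},  # no reaction: excluded
]
rts = reaction_times(trials)
print(rts)  # one value, approximately 2.4 s
```

The resulting per-condition reaction times would then feed the repeated measures ANOVA.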

  • 2x3 factorial within subjects repeated measures ANOVA conducted using SPSS

  • Main effect of immersion marginally significant at p = 0.054

  • Main effect of modality not significant at p = 0.45

  • Interaction between immersion and modality significant at p = 0.015

  • Graph indicates huge difference in modality 3 (peripheral) between immersion conditions

The analysis revealed that the main effect of immersion was marginally significant at p = 0.054. The main effect of modality was not significant at p = 0.45. However, the interaction between immersion and modality was significant at p = 0.015. This may have occurred due to a significant difference between modalities in one of the two immersion conditions. This difference is highlighted in the graph shown: the analysis demonstrated a large difference in Modality 3 (peripheral) between the 30-second and 2-minute immersion lengths. This would imply that the peripheral interface was not very useful once participants were immersed in the secondary task for a longer period of time, but may be good for short-immersion task-switching conditions. Compared with the other two, the central interface did not show a significant difference in reaction times across immersion conditions.



A decision was made not to measure reaction times using eye movement. However, within the 28 samples collected, I observed consistent eye-movement patterns between participants, which I believe are worth reporting as an interesting topic for future studies. Hence, due to the small number of valid samples, eye tracking data was used as supplementary data to support observed user behavior rather than as critical data in this thesis.

  • 10/15 participants’ data recorded; participants with glasses excluded from eye tracking

  • Heavy data loss due to dated equipment; only 28/60 samples collected

  • Decision made to not analyse reaction times

  • Within 28 samples collected, observed consistent eye movement patterns between participants

  • Topic worth exploring in future studies



That being said, the SUS scores for both interfaces provided much greater insight, especially when analysed alongside the qualitative feedback received from participants via the post-study interviews.

4.3.1 Central User Interface SUS Scores

The central UI prototype received an average SUS score of 82.7, with Learnability at 81 and Usability at 82.9. In SUS terms, this is considered a high score. 14 out of 15 participants found the digital interface easy to understand and use, as it was presented on a platform (iOS) they were already familiar with. 12 out of 15 participants reported that they preferred the central interface because it required fewer steps than the peripheral interface to engage the Point of Interest; hence, they felt their reactions were faster. Participants appreciated the interface interrupting their active task to inform them of the POI. However, although 10 out of 15 participants preferred the central prototype over the other two, some concerns were raised about the interface itself. For example, two participants highlighted that they would only want to be interrupted in an emergency, reflecting user expectations similar to those found during the initial participatory design workshop.

Three participants noted that the reference ring surrounding the arrow of the central interface was not very helpful. Two participants complained that the animation the alert performed before fully engaging was too slow.
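For readers unfamiliar with how the SUS numbers above are derived, the standard scoring procedure (this is the published SUS formula, not code from the thesis) looks like this:

```python
def sus_score(responses):
    """Standard System Usability Scale scoring. Takes the ten 1-5 Likert
    responses in questionnaire order: odd-numbered items contribute
    (response - 1), even-numbered items contribute (5 - response), and
    the sum is scaled by 2.5 onto a 0-100 range."""
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# The best possible answers (5 on odd items, 1 on even) yield the maximum score
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # 100.0
```

Each participant’s ten responses are scored this way and then averaged across participants to produce figures like the 82.7 reported here.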


4.3.2 Peripheral User Interface SUS Scores

The peripheral prototype received an average SUS score of 76.5, with Learnability at 80 and Usability at 75.6. In SUS terms, this is considered slightly above average; however, it is important to note this study deployed a small sample size. Similar to the central prototype, participants highlighted that the peripheral prototype was easy to understand once researchers demonstrated its functionality during the demo session. Three participants appreciated that the prototype allowed them to continue the secondary task without interruption. Six participants felt they engaged the peripheral alert faster due to the audio and mechanical feedback from the turning servo motors, meaning they were aware of the peripheral prototype through other sensory channels before noticing it visually.

Meanwhile, the physical materiality led participants to voice concerns regarding the implementation of such a solution. The most consistent feedback in this regard related to the wear and tear associated with a mechanical system like the peripheral prototype. Additionally, participants were concerned about breakage or device malfunction should the physical arrow accidentally hit any surface of the vehicle’s interior. As the interface was attached parallel to the back surface of the mobile device, it was not clearly visible in certain orientations, forcing some participants to tilt their devices to properly view the alert; two participants communicated minor frustration at this. Finally, two participants felt that the weight and added aesthetic of the peripheral prototype were a concern from a product development standpoint.



Sketch feedback on ways to improve each interface was analyzed collectively, in the format displayed below, to identify common threads and themes for redesign. The first set of sketches analyzed was for the Central User Interface:


4.4.1 Central User Interface - Sketch Feedback


Overview of sketch feedback received.

Compass form factor

10 out of 15 participants recommended keeping the arrow/compass-like form as the primary indicator for the central UI. Since it is a universally recognized icon, my assumption is that participants considered it intuitive enough to gauge what was going on.

Planarity to direction of motion

Finally, three users suggested making the arrow planar to the road, as it would then have a 1:1 mapping with how users were actually oriented within the vehicle, making it easier for them to recognize the correct direction in which to look.


4.4.2 Peripheral User Interface - Sketch Feedback

The peripheral interface ideas were studied in a similar manner, and, as with the central interface, some consistent patterns and themes emerged.

Peripheral UI - Overview

Overview of feedback received to improve the peripheral interface.
Arrow form factor

Like the central interface, most participants built off of the existing arrow-like form, as they perceived it to be intuitive enough to understand.

Adaptive Orientation

Two users voiced concerns about having to tilt the screen to view the indicator. Hence, some suggested making the physical arrow longer, or able to eject from the edge of the mobile device, so that device orientation would not impact its visibility.



Referencing the qualitative and quantitative feedback received, I proceeded to make design recommendations for the next iterations of these concepts.


5.1.1 Final Design Concept - Central UI


Some participants indicated that the distance information provided by the central prototype was insufficient, as they did not bother reading text on screen. Instead, I propose a system where the interface pulses different colors to indicate the distance of the POI from the vehicle. This would additionally help participants anticipate the urgency of the situation and react accordingly. Users also highlighted that instead of simply pointing in a direction, they would prefer the interface to provide some form of live feed highlighting the exact object the vehicle believes is the POI. As mentioned previously, this feature would be especially useful in instances where multiple POIs are involved.

  • Re-orient arrow to be planar to the road

  • UI would pulse a gradient of color to indicate distance from POI

  • Animation at the tip of the arrow to reinforce direction information

  • UI would outline POI to assist user in object recognition
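
The distance-to-color pulse described above can be sketched in a few lines. This is purely illustrative logic, not the thesis implementation: the 100 m range and the linear green-to-red gradient are assumptions chosen for demonstration.

```python
# Illustrative sketch (assumed values): map POI distance to a pulse color on a
# green-to-red gradient -- red when the POI is close, green when it is far.

def pulse_color(distance_m, max_distance_m=100.0):
    """Return an (r, g, b) tuple for the given POI distance in meters."""
    # Normalize the distance and clamp it to [0, 1].
    t = max(0.0, min(distance_m / max_distance_m, 1.0))
    red = int(round(255 * (1.0 - t)))
    green = int(round(255 * t))
    return (red, green, 0)

print(pulse_color(0))    # POI at the vehicle -> (255, 0, 0), pure red
print(pulse_color(100))  # POI at max range -> (0, 255, 0), pure green
```

In a production interface the pulse frequency could also scale with urgency, but the color mapping alone already conveys distance at a glance.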


5.1.2 Final Design Concept - Peripheral UI

In the improved concept generated for the peripheral interface, the primary feature would involve using the phone's internal gyroscope to detect device orientation with respect to the user and adapt the interface's form accordingly, so as to remain visible. Multiple users complained that they were unable to view the peripheral interface when it engaged, as it was outside their field of view. This forced them to perform an extra step of tilting the screen to see where the interface was pointing. An adaptive form that engages based on phone orientation would move toward addressing this problem. Similar to the central interface, the color of the LEDs would depict the distance of the POI from the vehicle. Finally, although this study focused on visual information, it became clear that participants preferred the addition of sound and vibration, as it startled them into action.

  • Adapt form length according to device orientation

  • LEDs would use color gradient to signify distance information

  • LEDs play animation to highlight urgency

  • Addition of audio/haptic feedback to engage users faster
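
As a rough sketch of the adaptive-orientation idea, the logic could map device tilt (read from the phone's gyroscope/IMU) to the extension length of the physical arrow. The function name, the angle-to-length mapping, and all numbers here are hypothetical, included only to make the behavior concrete.

```python
# Hypothetical sketch: the steeper the device is tilted away from the user's
# line of sight, the further the physical arrow extends so it stays visible.
# Thresholds and lengths are illustrative, not from the thesis prototype.

def arrow_extension_mm(tilt_deg, min_ext=10.0, max_ext=40.0):
    """Map an absolute tilt angle (degrees) to an arrow extension length (mm)."""
    tilt = max(0.0, min(abs(tilt_deg), 90.0))  # clamp to [0, 90] degrees
    return min_ext + (max_ext - min_ext) * (tilt / 90.0)

print(arrow_extension_mm(0))   # -> 10.0 (device facing the user, short arrow)
print(arrow_extension_mm(90))  # -> 40.0 (device tilted away, fully extended)
```

On an actual device, the tilt angle would come from the platform's motion APIs rather than being passed in directly.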



Upon reviewing the feedback received through user interviews and sketch suggestions, it became clear that all participants bar one preferred receiving directional alerts; they reported that the alerts helped them identify the point of interest faster. Additionally, multiple participants reported that directional indication made the task easier from a cognitive and physical standpoint. One participant articulated the sentiment as follows: “I felt like I had to only worry about one side of the screen instead of looking for the POI in the entire screen”. Overall, participants cited the digital prototype’s compass-like design and its level of task disruption as reasons for preferring it over the peripheral one. Participants also felt a digital solution would be easier to implement, as it would only require them to “download an App”, and the solution would then be platform agnostic.

  • Participants clearly preferred directional alerts, as they perceived them to reduce overall cognitive load.

  • “I felt like I had to only look at one side of the screen for the Point of Interest”.

  • Participants felt the central interface would be easy to implement, as it only requires them to “download an App”.

  • Peripheral interface shows greater potential.

  • Peripheral UI feedback may have been influenced by aesthetics of the prototype.


I understand that the length of engagement should be longer for such a study. However, given the low fidelity of the simulation I was using, I found it extremely difficult to maintain participant immersion for more than 3 minutes in any given session. My pilot study informed the immersion times used, based on the design of this specific simulation.


I investigated the impact of immersion on user reaction to alerts by defining immersion as a variable directly proportional to time spent on the secondary task. My initial hypothesis was that the more immersed users were in a task, the slower they would be to react to an alert, as they dedicate more cognitive resources to their mobile device. Results from the ANOVA indicated no significant differences in reaction times between the immersion conditions for either the digital or the non-directional interface. That said, there was a significant difference in reaction time for the peripheral interface: participants were slower to call out the point of interest under the 2-minute immersion condition. This would imply that users experience difficulty noticing an alert in the periphery of their visual attention when they are more immersed in a given task.

  • Length of engagement should be longer.

  • Immersion length did not seem to influence reaction times for central and traditional interfaces.

  • ANOVA indicated significant differences in reaction times for peripheral interface between immersion conditions.

  • Suggests users experience difficulty noticing alerts in periphery when they are more immersed in a task.
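
For readers unfamiliar with the analysis, the one-way ANOVA comparison between immersion conditions can be sketched directly. The reaction times below are synthetic placeholders, not the study's data; the F statistic they produce is for illustration only.

```python
# Minimal one-way ANOVA sketch over synthetic reaction times (seconds).
# These numbers are illustrative and do NOT reproduce the study's results.

def one_way_anova_f(*groups):
    """Return the F statistic for a one-way ANOVA across the given groups."""
    all_vals = [x for g in groups for x in g]
    grand_mean = sum(all_vals) / len(all_vals)
    # Between-group sum of squares (k - 1 degrees of freedom)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares (N - k degrees of freedom)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    df_between = len(groups) - 1
    df_within = len(all_vals) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# Hypothetical peripheral-UI reaction times under 1- and 2-minute immersion:
short_immersion = [1.2, 1.4, 1.1, 1.3, 1.5]
long_immersion = [1.8, 2.1, 1.9, 2.3, 2.0]
print(round(one_way_anova_f(short_immersion, long_immersion), 2))  # F statistic
```

In practice the F statistic would be compared against the F distribution for the given degrees of freedom to obtain a p-value.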


After reviewing reaction videos and quantitative data, it became clear that in my user study, the directional UI elicited slightly faster reactions than the non-directional UI when users recognized Points of Interest. Meanwhile, the peripheral UI performed significantly better than the other two interfaces under short immersion conditions but faced challenges under long immersion. This would imply that an interactive system should adjust its alert method based on perceived user immersion. Based on qualitative feedback, participants generally preferred directional alerts over non-directional ones, indicating that interfaces that assist a user in locating the POI do lead to a more comprehensive user experience by reducing cognitive load.


Furthermore, this user study identified several behaviors that influence how a passenger may react to stimuli. First and foremost, based on the ANOVA, I observed that immersion correlated directly with the speed at which participants responded to external stimuli. I conclude that the more immersed a participant is in their active task, the harder it is for them to react to external stimuli in a timely manner.

  • Directional user interfaces assist users in gaining situational awareness slightly faster than traditional user interfaces

  • Findings from user studies suggest autonomous system should adjust alert methods based on perceived user immersion in secondary tasks

  • Directional interfaces lead to a more comprehensive user experience for passengers of autonomous vehicles

  • More immersed a participant is in their active task, harder it is for them to react to external stimuli


During this study, visual modalities were specifically chosen because I believed it would be hard to replicate the sounds and vibrations experienced by a driver in a simulated environment. That said, I acknowledge the differences in visual perception when seated in a controlled environment versus in motion: the movement of the vehicle and surrounding objects could affect how quickly users perceive and react to external POIs. The simulator also provides a limited field of view compared to a real-world scenario; locating a POI in a 360° field of view would be more challenging than on a 2D screen. For the peripheral interface, participants reported perceiving faster reactions due to the sound and vibration of the servo motor driving the interface. This was an unintentional effect of the prototype used during the user study; however, it did not seem to influence participants' overall reaction times. When asked to provide SUS scores and qualitative feedback, participants seemed distracted by the functional aesthetics of the peripheral prototype, which may have influenced its overall SUS scores. Finally, the eye-tracking data was unable to provide significant quantitative evidence regarding changes in user reaction times; this technical limitation means further investigation with more sophisticated equipment and study design is required.

  • Hard to replicate real world sounds/vibrations in a simulator.

  • Simulator provided limited field of view compared to an actual vehicle.

  • Eye-tracking did not provide desired quantitative data.

  • Small sample size of participants.

  • Points of interest differed in terms of urgency.


In a future study, since testing occurs under conditional automation (Level 3), I would introduce another step where users must complete a take-over request (TOR) after receiving alerts from the interface. I hope to understand whether improving the user experience of TORs can lead to an overall improvement in driving quality post-takeover, an attribute of TORs that multiple studies have highlighted as requiring further investigation.

  • Next study will require users to perform a TOR.

  • Use a higher fidelity simulator, better eye tracking equipment.

  • Counter-balance each scenario with total number of participants.

  • Present participants with both “looks-like” and “works-like” models.

  • Conduct a pairwise comparison of POIs to ensure consistency.