The Study
Eyes Free Messaging evaluated two messaging features: receiving emoticons (Feature 1) and sending emoticons
(Feature 2). In this user study, participants were blindfolded and could not rely on audio cues. All participants
experienced both features and both sets associated with each. To reduce ordering bias and fatigue effects,
participants were randomly assigned to begin with either Set 1 or Set 2 for each feature.
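To make the counterbalancing concrete, the sketch below (Kotlin, purely illustrative; the names are ours, not the study software's) randomly picks a starting set per participant, independently for each feature:

```kotlin
import kotlin.random.Random

enum class StartingSet { SET_1, SET_2 }

// Randomly counterbalance which set a participant begins with,
// chosen independently for each of the two features.
fun assignStartingSets(): Map<String, StartingSet> =
    listOf("Feature 1", "Feature 2").associateWith {
        if (Random.nextBoolean()) StartingSet.SET_1 else StartingSet.SET_2
    }
```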
For Feature 1, participants began with a training phase for Set 1. They were introduced to each vibration pattern
in a fixed order, and the entire set was repeated twice. After training, the testing phase began. Participants were
presented with each of the six emoticons five times in a random sequence (30 trials in total), with a pause of
approximately five seconds between vibrations. As they felt each vibration, they verbally identified the
corresponding emoticon. The randomized sequence was presented once. The same process was then repeated for Set 2 of
Feature 1.
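A minimal sketch of this testing loop, assuming a playVibration callback (a hypothetical name) that triggers the pattern for a given emoticon:

```kotlin
val emoticons = listOf("Like", "Love", "Haha", "Yay", "Sad", "Angry")

// Each emoticon appears five times, in one shuffled sequence of 30 trials.
fun buildTrialSequence(repetitions: Int = 5): List<String> =
    emoticons.flatMap { emoticon -> List(repetitions) { emoticon } }.shuffled()

fun runTestingPhase(playVibration: (String) -> Unit) {
    for (target in buildTrialSequence()) {
        playVibration(target)  // deliver the vibration pattern
        Thread.sleep(5_000L)   // ~5 s pause before the next trial
    }
}
```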
Feature 2 followed a similar structure. Participants began with a training phase for a set of gesture-based inputs
and their corresponding emoticons. Because participants were blindfolded, we verbally described each gesture, and
participants then attempted to draw it on the phone screen. During the testing phase, we verbally instructed
participants which emoticon to send using the gesture input. The instruction order was likewise randomized, and each
of the six emoticons was sent five times. This process was then repeated for Set 2 of Feature 2.
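A single gesture trial can be sketched the same way; here recognizeGesture is a hypothetical stand-in for the app's recognizer, and the target would be read aloud by the experimenter:

```kotlin
data class GestureTrial(val target: String, val recognized: String?) {
    val correct: Boolean get() = recognized == target
}

// One trial: announce the target emoticon, capture the participant's
// gesture, and record whether the recognized emoticon matches.
fun runGestureTrial(target: String, recognizeGesture: () -> String?): GestureTrial {
    println("Please send: $target")  // given verbally in the actual study
    return GestureTrial(target, recognizeGesture())
}
```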
At the end of the study, we asked participants which of the two sets they preferred for each feature and why. (This
preference measure is separate from the accuracy data collected during testing.)
Set Characteristics
We grouped the sets for each feature by a shared defining characteristic. For Feature 1, the sets are composed as follows (a sketch of possible vibration encodings follows the two lists):
Set 1: Frequency
- Like: 1 long, sustained buzz
- Love: Heartbeat pulse x3
- Haha: 4 fast buzzes
- Yay: Fast buzzing, then a long buzz
- Sad: A long buzz, then fast buzzing
- Angry: Fire-alarm-like buzzing
Set 2: Number of Buzzes
- Like: 1 buzz
- Love: 2 buzzes
- Haha: 3 buzzes
- Yay: 4 buzzes
- Sad: 5 buzzes
- Angry: 6 buzzes
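As referenced above, here is a sketch of how two of these patterns could be encoded with Android's VibrationEffect.createWaveform, whose timings array alternates off/on durations in milliseconds. The exact durations are illustrative guesses, not the study's actual values:

```kotlin
import android.os.VibrationEffect

// Set 1 "Love": heartbeat pulse x3 (two quick buzzes, then a longer rest).
val lovePattern: VibrationEffect = VibrationEffect.createWaveform(
    longArrayOf(0, 100, 80, 100, 400, 100, 80, 100, 400, 100, 80, 100),
    -1 // play once, no repeat
)

// Set 2 "Haha": three identical buzzes, since Set 2 distinguishes by count.
val hahaPattern: VibrationEffect = VibrationEffect.createWaveform(
    longArrayOf(0, 150, 150, 150, 150, 150),
    -1
)
```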
Similarly, for Feature 2 the sets are composed as follows (a sketch of swipe classification follows the two lists):
Set 1: Drawing
- Like: Draw a check mark
- Love: Draw a half heart
- Haha: Swipe up
- Yay: Draw a U shape
- Sad: Draw an upside-down U
- Angry: Swipe down
Set 2: Number of Taps + Swiping
- Like: 2 taps
- Love: 3 taps
- Haha: Swipe right
- Yay: Swipe up
- Sad: Swipe left
- Angry: Swipe down
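As noted above, the swipe half of Set 2 could be classified from an Android fling event; tap counting and Set 1's shape recognition would need separate logic. A sketch, written against a recent SDK (where the first MotionEvent is nullable):

```kotlin
import android.view.GestureDetector
import android.view.MotionEvent
import kotlin.math.abs

// Map a fling's dominant velocity axis and sign to a Set 2 emoticon.
class SwipeListener(private val onEmoticon: (String) -> Unit) :
    GestureDetector.SimpleOnGestureListener() {

    override fun onFling(
        e1: MotionEvent?, e2: MotionEvent,
        velocityX: Float, velocityY: Float
    ): Boolean {
        val emoticon = if (abs(velocityX) > abs(velocityY)) {
            if (velocityX > 0) "Haha" else "Sad"   // right vs. left
        } else {
            if (velocityY < 0) "Yay" else "Angry"  // up vs. down (screen y grows downward)
        }
        onEmoticon(emoticon)
        return true
    }
}
```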
Hypothesis
We hypothesized that for Feature 1 (receiving vibrations), Set 2 would result in lower user accuracy. Set 2 uses an
increasing number of buzzes for each emoticon, which we believed could cause users to lose count beyond the third
buzz. As a result, we expected higher error rates for the final three emoticons in that set, while the first three
were likely to be identified more accurately. In contrast, Set 1 was designed to make each emoticon's vibration
distinct from the others, which we anticipated would help users recognize the emoticons more easily within a short
time frame. Overall, we expected Set 1 to yield a higher average recognition accuracy.
For Feature 2 (sending via touch gestures), we predicted that Set 2 would lead to higher accuracy. The gestures in
Set 2 were simpler and less prone to being misinterpreted by the phone's input system, while Set 1 relied on users
drawing more specific shapes. Although Set 1's shapes might cause recognition issues, we believed they would feel
more intuitive at first, since they are based on familiar shapes and concepts. As a result, we anticipated that Set 1
might perform better early on, but that Set 2 would ultimately surpass it in accuracy because its gestures are easier
to learn. In summary, we expected Set 1 to suffer lower system (gesture-recognition) accuracy and Set 2 to show
higher user accuracy over time.
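For clarity, the accuracy these hypotheses refer to can be computed per emoticon over its five repetitions and averaged over a set's 30 trials; a small sketch (the data shapes are ours, not the study's):

```kotlin
data class Trial(val target: String, val response: String?)

// Fraction of correct identifications for each emoticon.
fun accuracyByEmoticon(trials: List<Trial>): Map<String, Double> =
    trials.groupBy { it.target }.mapValues { (_, group) ->
        group.count { it.response == it.target } / group.size.toDouble()
    }

// Overall accuracy for a set: correct trials over all trials.
fun meanAccuracy(trials: List<Trial>): Double =
    trials.count { it.response == it.target } / trials.size.toDouble()
```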
Results
coming soon