A Morose and Downbeat Woman Is My Co-Pilot

For the first century of the automobile's use, its passengers were always people or pets. In recent decades, however, automobiles have begun to carry a new kind of "passenger": a voice-based computer agent that gives directions, warns of problems (e.g., "your oil is low"), controls entertainment (e.g., "you are now listening to KQED"), and makes suggestions (e.g., "the closest Starbucks is 2.3 miles away"). As a social scientist who studies human-technology interaction, I've guided my design of and research on these "virtual passengers" by studying real ones. By leveraging the attributes that make passengers likeable and non-distracting, we can make GPS systems, voice-activated controls, and other voices in the car more desirable and effective. For example, we've found that people adjust their way of speaking to match the situation in the car: when the driving becomes dangerous, passengers unconsciously shorten and simplify their sentences. There are now GPS systems that do the same. Similarly, when BMW had to recall a navigation system because German drivers wouldn't take directions from a female voice, the company chose a replacement that better matched its brand: a male "co-pilot."

One of the most important issues in car interface design is how to deal with upset drivers, as negative emotions are among the primary causes of highway accidents. Unfortunately, little is known about effective strategies that passengers can use when dealing with an upset driver. In particular, should a passenger — real or virtual — in a car with an upset driver sound happy and upbeat, or depressed and morose?

As an experimentalist, I decided to obtain upset drivers, pair them with either a happy or an upset passenger, and see what happened. While one might want to run the study with real passengers, it's often much more effective to study people's reactions to interactive media directly: a recorded voice can be controlled precisely, so every driver hears exactly the same thing.

My lab and I had participants use a driving simulator with a gas pedal, a brake pedal, and a force-feedback steering wheel. Along for the ride was a "virtual passenger," a recorded voice played by the car. The voice, recorded by a female actress, made light conversation with the participant throughout the drive, and its remarks encouraged the driver to talk back. For example: "How do you think that the car is performing?", "Do you generally like to drive at, below, or above the speed limit?", and "Don't you think these lanes are a little too narrow?" While the voice said the same 36 remarks to all participants, its tone was clearly happy and upbeat for some participants and clearly morose and downbeat for others.

The sad voice sounds almost laughable to most people, and it would seem obvious that upset drivers would prefer and benefit from the happy voice. To test this, we used the simulator to record the number of accidents each participant had and how much attention each paid to the drive. We also measured social engagement with the virtual passenger by recording how much each participant spoke with the agent. After the driving was over, an online questionnaire asked participants about their feelings toward the car and their driving experience.

What happened to these upset drivers? Did the happy passenger cheer them up? The simulator results suggest an emphatic "no." The happy voice in fact worsened upset participants' driving: upset drivers hearing the happy voice had approximately twice as many accidents, on average, as upset drivers hearing the depressed voice. Upset drivers with the happy voice also paid less attention to the road than those with the subdued voice.

The questionnaire results also suggest that upset drivers were happier with a subdued, rather than happy, virtual passenger. Specifically, upset drivers enjoyed driving more, liked the voice more, and thought that the car was of a higher quality when the virtual passenger was upset. In addition, even though you might think that an upset passenger and an upset driver would avoid conversation with each other, upset drivers spoke much more with the depressed "passenger" than they did with the happy one.

Why didn't the upset drivers benefit from the happy voice? Processing and attending to emotions that differ from one's own takes a great deal of cognitive effort. As a result, the drivers were distracted, uncomfortable, and performed worse. Furthermore, when the virtual passenger clung to her initial emotion, drivers felt the lack of empathy as hurtful, even though it came from a mere technology.

While it is satisfying to inform the design of car interfaces so that driving becomes safer and more enjoyable, there turns out to be another benefit. In over one hundred experiments, my lab's research has shown that social behaviors and responses appear in full force when people interact with technology. That is, people treat computers as if they were real people. As a result, just as we can use the most successful social behaviors to inform technology design, we can use studies with computers and other technologies to derive rules that teach people how to win friends and influence people. Indeed, in my new book, The Man Who Lied to His Laptop: What Machines Teach Us About Human Relationships, I describe almost one hundred rules for social behavior, derived from experimental studies of how people use technology, that can make people more likeable, effective, and persuasive. The current study gives us two principles to guide interactions with people (as well as technology): telling upset people to "look at the bright side of life" can be off-putting, and "misery loves miserable company."

Buy The Man Who Lied to His Laptop: What Machines Teach Us About Human Relationships on Amazon