Input

modelling states, exploring AI in interaction models, Google Dialogflow

Design Problem

This is an exploration into the ethical problems arising from in-car voice command systems.

In the development of AI systems, there is often a tradeoff between transparency and convenience.

Design Brief

In a team of 2, we designed a 'good' and a 'bad' interaction model, with the 'bad' model prioritizing convenience over transparency.
Our use case scenario was finding a gas station.

Methodology


We prototyped the 'good' model in Google's Dialogflow, and demonstrated the 'bad' model in a short video.
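
As a rough illustration of the prototyping step, the sketch below sends one test utterance to a Dialogflow agent from Python and prints back the matched intent and reply. The project ID "gas-station-demo" and the session ID are hypothetical placeholders; the agent, its intents, and the credentials would be configured separately in the Dialogflow console.

  # Send one utterance to a Dialogflow agent and read back the matched
  # intent. Requires the google-cloud-dialogflow package and configured
  # credentials; the project and session IDs below are placeholders.
  from google.cloud import dialogflow

  def detect_intent(project_id: str, session_id: str, text: str) -> None:
      session_client = dialogflow.SessionsClient()
      session = session_client.session_path(project_id, session_id)
      text_input = dialogflow.TextInput(text=text, language_code="en-US")
      query_input = dialogflow.QueryInput(text=text_input)
      response = session_client.detect_intent(
          request={"session": session, "query_input": query_input}
      )
      result = response.query_result
      print("Matched intent:", result.intent.display_name)
      print("Agent reply:   ", result.fulfillment_text)

  detect_intent("gas-station-demo", "test-session-1",
                "How do I get to the nearest gas station?")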

Good Model

Example conversation: 

(Leaving the idle state is triggered by a voice input that asks a question with a location keyword:)

How do I get to the nearest gas station?

Where is the nearest petrol station?

 

Example reply:

(State 2: Response)

There are 3 nearby gas stations. Would you like to go to Shell, Chevron, or BP?

 

At state 2, the user (driver) either responds with a clarification, which sends the system into the error state, or proceeds to state 3 by choosing one of the presented options.

 

Error state

For example, if the user intended to ask “How do I get to the nearest gas station with diesel?”, the voice command system would either not recognize the speech (e.g. “I did not get that”) or wait for further user prompting before proceeding to state 3.

 

Finally, the system executes the decision, updating the display and output in state 4.
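
To make the modelled states concrete, here is a minimal Python sketch of this flow, assuming the idle, response, decision, and execute states plus the error state described above. The state names, the hard-coded option list, and the keyword matching are simplifications for illustration, not Dialogflow's own mechanics.

  from enum import Enum, auto

  class State(Enum):
      IDLE = auto()      # waiting for a question with a location keyword
      RESPONSE = auto()  # state 2: nearby options read out to the driver
      DECISION = auto()  # state 3: the driver's choice is accepted
      EXECUTE = auto()   # state 4: display and output updated with the route
      ERROR = auto()     # speech not recognized; the system asks again

  OPTIONS = ["Shell", "Chevron", "BP"]

  def step(state: State, utterance: str) -> State:
      """Advance the interaction model by one driver utterance."""
      text = utterance.lower()
      if state is State.IDLE and ("gas station" in text or "petrol station" in text):
          print("There are 3 nearby gas stations. "
                "Would you like to go to Shell, Chevron, or BP?")
          return State.RESPONSE
      if state in (State.RESPONSE, State.ERROR):
          choice = next((o for o in OPTIONS if o.lower() in text), None)
          if choice is None:
              print("I did not get that.")  # error state reply
              return State.ERROR
          # State 3 (decision) is followed immediately by state 4 (execute):
          print(f"Navigating to {choice}.")
          return State.EXECUTE
      return state

  # Example run mirroring the conversation above:
  s = step(State.IDLE, "How do I get to the nearest gas station?")
  s = step(s, "Shell")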

Bad Model

Example Conversation: 

The transition from idle to state 1 is triggered by the phrase: “Where can I find gas?”

The computer processes the information and presents a decision in state 2, saying “Follow the directions for the nearest gas station.”

In the event that the computer doesn’t understand a command, clarification is handled through the error state.
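
As a contrast to the 'good' model's sketch, the fragment below compresses the bad model into a single hop: the wake phrase takes the system from idle straight to a decision, and anything unrecognized falls to the error state. The replies are paraphrased from the conversation above.

  # 'Bad' model sketch: idle jumps straight to a decision with no
  # clarifying "response" stage in between.
  def bad_model_reply(utterance: str) -> str:
      if "gas" in utterance.lower():
          # Idle -> state 1 -> state 2 in one exchange:
          # the system decides on the driver's behalf.
          return "Follow the directions for the nearest gas station."
      # Unrecognized commands are clarified through the error state.
      return "I did not get that."

  print(bad_model_reply("Where can I find gas?"))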

Our video is here.

Ethical Tradeoffs

Prioritizing Convenience

  • Skips the “response” stage, moving from the idle state directly to the decision.

  • Reduces the cognitive load placed on the driver as it minimizes choice and interaction with the voice recognition software, instead offering the most convenient option without clarification.

Implications

The tradeoff for this convenience is the transparency that the “response” stage of the interaction provides.

 

Instead of offering a variety of options, the software must choose between them on behalf of the driver, making a value judgement based on criteria unknown to the driver. This results in complications:

 

  1. A gas station chain might pay the software company to redirect all fuel queries to its stations, which is unethical because the driver wouldn’t know that they are driving (perhaps further) due to the partnership; the sketch after this list illustrates the bias.

  2. The voice recognition software may misconstrue the command and, due to the lack of a clarification step, direct the driver to the wrong gas station (e.g. one without diesel).
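
To illustrate the first complication, here is a hypothetical selection function in which a hidden sponsorship weight silently reorders the ranking. All the station names, distances, and the sponsorship flag are invented for the example.

  # Hypothetical ranking: a sponsorship "discount" the driver never sees
  # can make a farther station win the "nearest" slot. All values invented.
  NEARBY = [
      {"name": "Shell",   "miles": 1.2, "sponsored": True},
      {"name": "Chevron", "miles": 0.8, "sponsored": False},
      {"name": "BP",      "miles": 2.0, "sponsored": False},
  ]

  def pick_station(stations):
      # Sponsored stations get a 1-mile discount in the internal score.
      return min(stations, key=lambda s: s["miles"] - (1.0 if s["sponsored"] else 0.0))

  print(pick_station(NEARBY)["name"])  # -> Shell, although Chevron is nearer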

 

Unless the voice system explains why and how it selected a particular gas station, there can be no transparency.

Takeaways
  • A fascinating exploration of the main ethical arguments around current AI development, and of my partner's opinions

  • Learned how Google's Dialogflow works
