Gender Bias in Chatbot Design


A recent UNESCO report reveals that most popular voice-based conversational agents are designed to be female, and it outlines the potentially harmful effects this can have on society. However, the report focuses primarily on voice-based conversational agents; its analysis did not include chatbots (i.e., text-based conversational agents). Since chatbots can also be gendered in their design, we used an automated gender analysis approach to investigate three gender-specific cues in the design of 1,375 chatbots listed on an online chatbot platform. We leveraged two gender APIs to identify the gender associated with each chatbot's name, a face recognition API to identify the gender of its avatar, and a text mining approach to analyze gender-specific pronouns in its description. Our results suggest that gender-specific cues are commonly used in the design of chatbots and that most chatbots are, explicitly or implicitly, designed to convey a specific gender. More specifically, most chatbots have female names, female-looking avatars, and descriptions that refer to them as female. This is particularly evident in three application domains (i.e., branded conversations, customer service, and sales). We thus find evidence of a tendency to prefer one gender (i.e., female) over the other (i.e., male) and argue that there is a gender bias in the design of chatbots in the wild. Based on these findings, we formulate propositions as a starting point for future discussions and research on mitigating gender bias in the design of chatbots.
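The pronoun-based text mining step described above can be sketched as a simple pronoun-counting classifier. This is an illustrative sketch only: the pronoun word lists, function name, and tie-breaking rule are assumptions, not the paper's actual implementation.

```python
import re
from collections import Counter

# Hypothetical pronoun lists; the paper's exact word lists are not given here.
FEMALE_PRONOUNS = {"she", "her", "hers", "herself"}
MALE_PRONOUNS = {"he", "him", "his", "himself"}

def pronoun_gender(description: str) -> str:
    """Classify a chatbot description as 'female', 'male', or 'unknown'
    by counting occurrences of gender-specific pronouns."""
    tokens = re.findall(r"[a-z']+", description.lower())
    counts = Counter(tokens)
    female = sum(counts[w] for w in FEMALE_PRONOUNS)
    male = sum(counts[w] for w in MALE_PRONOUNS)
    if female > male:
        return "female"
    if male > female:
        return "male"
    return "unknown"  # no pronoun cue, or a tie
```

For example, a description such as "She answers questions about her products" would be classified as female, while a description with no gendered pronouns would remain unclassified. In practice, such a rule-based pass would be combined with the name- and avatar-based cues to decide the overall gender conveyed by a chatbot's design.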

Proceedings of the 3rd International Workshop on Chatbot Research (CONVERSATIONS 2019)