The proliferation of hybrid service agents—combinations of artificial intelligence (AI) and human employees behind a single interface—further blurs the line between humans and technology in online service encounters. While much of the current debate focuses on disclosing the nonhuman identity of AI-based technologies (e.g., chatbots), the question of whether to also disclose the involvement of human employees working behind the scenes has received little attention. We address this gap by examining how such a disclosure affects customer interactions with a hybrid service agent consisting of an AI-based chatbot and human employees. Results from a randomized field experiment and a controlled online experiment show that disclosing human involvement before or during an interaction with the hybrid service agent leads customers to adopt a more human-oriented communication style. This effect is driven by impression management concerns that are activated when customers become aware of humans working in tandem with the chatbot. The more human-oriented communication style ultimately increases employee workload because fewer customer requests can be handled automatically by the chatbot, so more must be delegated to a human. These findings provide novel insights into how and why disclosing human involvement affects customer communication behavior, shed light on its negative consequences for employees working in tandem with a chatbot, and help managers understand the potential costs and benefits of providing transparency in customer–hybrid service agent interactions.