Virtual team communication in the workplace has been transformed by innovative collaboration technologies such as Slack, Zoom, and MS Teams. However, virtual teams face emotional obstacles during remote communication. To mitigate these challenges, affective chatbots can be deployed to strengthen teams’ emotional management capabilities. Although advances in affective computing enable these chatbots to interpret human affective signals, chatbots that analyze and intervene in team emotions can also heighten feelings of surveillance and reduce perceived autonomy, ultimately lowering users’ trust. One strategy to address this challenge is to design trustworthy affective chatbots with human agency-based control features. Since research on the effects of such control features on users’ trust in chatbots is scarce, in this paper we present a research design for evaluating how affective chatbot control features influence users’ trust and cognitive load. With our work, we aim to contribute theoretical knowledge about the effects of human agency-based control features in affective chatbots. Furthermore, we pave the way for trustworthy affective chatbots by overcoming the downsides of existing affective chatbot designs.