In user research, laddering interviews are particularly helpful for eliciting goals and underlying values. However, they do not scale, as they are time- and training-intensive. In this study, we propose and evaluate Ladderbot, a text-based conversational agent (CA) capable of conducting human-like online laddering interviews. Ladderbot uses techniques inspired by face-to-face laddering to engage users in an interactive conversation. In a between-subjects experimental study with 256 participants, we compare Ladderbot against established survey-based laddering approaches in exploring user values around smartphone use. We find that, on average, participants in CA-based laddering interviews produce twice as many and significantly longer answers. Additionally, we find the learnability of CA-based interviews to be significantly higher than that of established survey-based laddering approaches. However, survey-based laddering more reliably produces ladders that end in values, whereas CA-based laddering trades clear attribute-consequence-value structures for the exploration of negative gains. Thus, besides presenting a new CA-based laddering approach, our study has implications for how user researchers can combine survey- and CA-based laddering methods to paint a more comprehensive picture.