In the past few years, a series of well-publicized debates has argued the merits of totally automating user tasks (via intelligent agents) versus preserving user control and decision making (via graphical user interfaces). Perhaps a more productive way to frame this debate is to note an interesting duality between AI and human-computer interaction. In AI, we model the way a human thinks in order to create a computer system that can perform intelligent actions. In HCI, we design computer interfaces that work in concert with a human user to aid in the execution of intelligent actions. What, then, is the boundary between these two fields? An area becoming known as mixed-initiative interaction might turn out to be the missing link. Mixed-initiative interaction refers to a flexible interaction strategy in which each agent (human or computer) contributes what it is best suited to, at the most appropriate time. I became interested in this area when I read about AIDE, a system that helps a user explore a dataset using a statistics software package: AIDE both makes suggestions to the user and responds to user guidance about what to do next.

In this installment of "Trends and Controversies," three essays examine mixed-initiative interaction. James Allen of the University of Rochester introduces the area, creates a useful taxonomy of mixed-initiative dialog issues, and summarizes several years' worth of research on mixed-initiative planning systems. The second essay, by Eric Horvitz of Microsoft Research, describes the role of uncertainty in mixed-initiative interaction and presents two innovative systems for semiautomated assistance that employ Bayesian reasoning. Finally, Curry Guinn of Duke University confronts the difficult task of evaluating such systems, including the creation of test sets and metrics for evaluating descriptive versus prescriptive dialog models. In earlier work, Guinn developed extensive computer simulations of mixed-initiative dialogs.