Recent advances in artificial intelligence applications have sparked scholarly and public attention to the challenges of the ethical design of technologies. These conversations about ethics have been targeted largely at technology designers, with the aim of informing the building of better and fairer AI tools and technologies. This approach, however, addresses only a small part of the problem of responsible use and will not be adequate for describing or redressing the problems that will arise as more types of AI technologies come into wider use. Many of the tools being developed today have potentially enormous and historic impacts on how people work, how society organizes, stores, and distributes information, where and how people interact with one another, and how people's work is valued and compensated. And yet, our ethical attention has focused on a fairly narrow range of questions about the access to, fairness of, and accountability for existing tools. Instead, I argue that scholars should ask much broader questions about the reconfiguration of societal power, in which AI technologies form a crucial component.

This talk will argue that AI ethics needs to expand its theoretical and methodological toolkit in order to move away from prioritizing notions of good design that privilege the work of good and ethical technology designers. Instead, using approaches from feminist theory, organization studies, and science and technology studies, I argue for expanding how we evaluate uses of AI. This approach begins with the assumption of socially informed technological affordances, or "imagined affordances" [1], shaping how people understand and use technologies in practice. It also gives centrality to the power of social institutions in shaping technologies-in-practice. Such a framework for evaluating the benefits of AI would include the following five questions [2]: 1) What and whose goals are being achieved or promised through 2) what structured performance using 3) what division of labor 4) under whose control and 5) at whose expense?

Using such a framework would resolve several current conundrums for AI ethics. First, it would shift accountability away from technology designers and toward the political and economic environments that pattern how technologies are adopted, modified, and used. Second, it encourages thinking through capitalism and the systems that structure technological affordances, rather than the individual actions of so-called bad users. Third, it reimagines the practice of use in organizational and institutional contexts, enabling predictions of failure at the interface between technologies and their uses. Finally, such a framework highlights places for intervention. The long global supply chains of AI systems, from data labeling work to engineering work to the front-line use of dashboard systems, often mask the opportunities people have for intervening in these systems, make it hard for people to contest their results, and blur the lines of accountability and responsibility. A focus on ethical action highlights the choices available to individuals.
Expanding the AI ethics toolkit theoretically and methodologically to include attention to the social structural and institutional configurations that enable and constrain individual action will ultimately result in more robust ways of describing what people actually do with AI tools, and in more pathways for influencing the design, application, modification, and use of responsible AI technologies.