Failure as a concept can refer both to a lack of objective success and to a perceived failure to live up to others' expectations, requirements, or standards. Both kinds of failure tend to be seen as undesirable, and when an entity fails in some way, this affects how the entity is evaluated by those it interacts with. But failure is not all bad. Since to err is human, erring can also foster the perception of human-like qualities in non-humans. This allows for a discussion of strategic robot failure, which entails intentionally designing robots that are perceived by humans as failing while actually succeeding at the (hidden) objectives of their designers. Such design strategies use deception to shape, for example, humans' trust in robots, with the aim of engendering effective human-robot interaction (HRI). This article begins with a brief overview of research on failure in HRI, with an emphasis on the implications of robot failure for human trust in, and reliance on, robots. I then turn to the concept of failure itself and distinguish between an objective component (lack of success) and a subjective component (failure as not meeting others' requirements or standards). This makes failure a relational concept that can only be fully understood through context and through knowledge of the preferences, values, and expectations of the human in HRI. Building on these considerations, I conclude by discussing the potential positive and negative implications of strategic robot failure, closing with potential ethical objections to it.