In “Robots, Trust and War” Simpson claims that victory in counter-insurgency conflicts requires that military forces and their governing body win the ‘hearts and minds’ of civilians. Consequently, forces made up primarily of autonomous robots would be ineffective in such conflicts for two reasons. First, civilians cannot rationally trust robots, since robots cannot act from motives grounded in good character; and were robots ever to develop this capacity, the purpose of sending them to war in our stead would be lost, because there would be no moral saving. Second, if robot forces did offer a moral saving, this would signal that the deploying government could not be trusted to remain committed to the conflict. I disagree with both claims. I argue, first, that there are less demanding grounds on which robot forces could be trusted sufficiently to be effective while still achieving a moral saving over the deployment of human ones; and second, that this moral saving would not necessarily signal that the deploying body lacked commitment, because how it is interpreted would be highly context-dependent. I conclude, contra Simpson, that robot forces could plausibly be effective in counter-insurgency engagements in the foreseeable future. I suggest, therefore, that there may be a case for developing a more finely grained understanding of the opportunities for, and challenges of, their use.