A classic objection to autonomous weapon systems (AWS) is that they could create so-called responsibility gaps, where it is unclear who should be held responsible should an AWS violate some provision of the law of armed conflict (LOAC). However, those who raise this objection generally present it as a problem for AWS as a whole class of weapons. Yet the range of systems that can be counted as "autonomous weapon systems" is rather wide, and so the objection is too broad. In this article I present a taxonomic approach to the objection, examining a number of systems that would count as AWS under the prevalent definitions provided by the United States Department of Defense and the International Committee of the Red Cross, and I show that for virtually all such systems a clear locus of responsibility presents itself as soon as one focuses on specific systems rather than on general notions of AWS. In developing these points, I also suggest a method for dealing with near-future types of AWS that may be thought to create situations in which responsibility gaps can still arise. The main purpose of the arguments is, however, not to show that responsibility gaps do not exist or that they can be closed where they do exist. Rather, it is to highlight that any arguments surrounding AWS must be made with reference to specific weapon platforms imbued with specific abilities, subject to specific limitations, and deployed at specific times and places for specific purposes. More succinctly, the arguments show that we cannot and should not treat AWS as if they all shared the same morally relevant features, but must instead assess them on a case-by-case basis. Thus, we must contend with the realities of weapons development and deployment, tailoring our arguments and conclusions to those realities and to the facts that obtain for particular systems fulfilling particular combat roles.