The long-standing requirement that system and network designs include accurate and complete adversary definitions from inception remains unmet on commodity platforms, e.g., commodity operating systems, network protocols, and applications. One way to provide such definitions is to (1) partition commodity software into "wimps" (i.e., small software components with rather limited function and high-assurance security properties) and "giants" (i.e., large commodity software systems with low or no assurance of security); and (2) limit the obligation of defining the adversary to wimps, while realistically assuming that the giants are adversary-controlled. We provide a structure for accurate and complete adversary definitions that yields basic security properties and metrics for wimps. We then argue that wimps must collaborate ("dance") with giants, namely compose with adversary code across protection interfaces, and illustrate some of the salient features of wimp-giant composition. We extend the wimp-giant metaphor to security protocols in networks of humans and computers, where compelling services, possibly under the control of an adversary, are offered to unsuspecting users. Although these protocols have safe states in which a participant can establish temporary beliefs in the adversary's trustworthiness, reasoning about such states requires techniques from other fields, such as behavioral economics, rather than traditional security and cryptography.