Cognitive diagnosis models (CDMs) have been shown to provide detailed evaluations of students' achievement in terms of mastery of individual cognitive attributes. The attribute hierarchy model (AHM), a variant of CDM, exploits the hierarchical structure of these cognitive attributes to provide more accurate and interpretable measurements of learning achievement. However, the advantages of the richer model come at the cost of increased difficulty in designing the hierarchy of cognitive attributes and developing corresponding test sets. In this study, we propose quantitative tools for validating the hierarchical structures of cognitive attributes. First, a method for quantitatively comparing alternative cognitive hierarchies is established by computing the inconsistency between a given cognitive hierarchy and students' responses. This method is then generalized to validate a cognitive hierarchy without real response data. Numerical simulations were performed, starting from an AHM designed by experts and the responses of elementary school students. The results show that the expert-designed cognitive hierarchy explains the students' responses better than most, but not all, alternative hierarchies; a superior cognitive hierarchy is identified. This discrepancy is discussed in terms of the internalization of cognitive attributes.
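To make the comparison concrete, the following sketch illustrates one simple way an inconsistency score between a hierarchy and observed responses could be computed; it assumes a deterministic, DINA-like ideal response model and is not the method used in this study. All data structures, names, and numbers here are hypothetical.

```python
from itertools import product

# Hypothetical prerequisite map: attribute -> set of attributes it requires.
hierarchy = {0: set(), 1: {0}, 2: {0}, 3: {1, 2}}

# Hypothetical Q-matrix: item -> set of attributes it measures.
Q = [{0}, {1}, {2}, {1, 2}, {3}]

def permissible_profiles(hierarchy):
    """Attribute mastery profiles consistent with the hierarchy
    (every mastered attribute has all its prerequisites mastered)."""
    n = len(hierarchy)
    profiles = []
    for bits in product([0, 1], repeat=n):
        if all(all(bits[p] for p in hierarchy[a])
               for a in range(n) if bits[a]):
            profiles.append(bits)
    return profiles

def ideal_response(profile, Q):
    """Deterministic ideal pattern: an item is answered correctly
    iff all attributes it measures are mastered."""
    return [int(all(profile[a] for a in attrs)) for attrs in Q]

def inconsistency(responses, hierarchy, Q):
    """Mean (over students) minimum Hamming distance between the
    observed response vector and any hierarchy-permissible ideal
    pattern, normalized by the number of items."""
    ideals = [ideal_response(p, Q) for p in permissible_profiles(hierarchy)]
    total = 0
    for r in responses:
        total += min(sum(x != y for x, y in zip(r, ideal))
                     for ideal in ideals)
    return total / (len(responses) * len(Q))
```

Under this toy model, a lower score means the hierarchy accounts for the response patterns with fewer violations, so alternative hierarchies can be ranked by their scores on the same response data.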