We discuss inference for graphical models as a multiple comparison problem. We argue that posterior inference under a suitable hierarchical model can adjust for the multiplicity that arises when deciding inclusion for each of many possible edges. We show that inference under such a hierarchical model differs substantially from inference under a comparable non-hierarchical model: as the size of the graph increases, the difference between the posterior distributions under the two models, measured by Kullback-Leibler (KL) divergence, increases. We discuss several stylized inference problems, including estimation of one graph, comparison of a pair of graphs, estimation of a pair of graphs, and, finally, estimation for multiple graphs. Throughout the discussion we assume that the graph is used to identify a conditional independence structure, that is, the graph represents a Markov random field. Model construction starts with a prior model for the random graph, conditional on which a sampling model is proposed for the observed data. Most of the discussion is general and remains valid for essentially any sampling model, subject only to mild technical conditions. The discussion is motivated by two case studies. The first application models single-cell mass spectrometry data for inference about the joint distribution of a set of markers recorded for each cell. The second models Reverse Phase Protein Array (RPPA) protein expression data for inference about changes in the underlying biomolecular pathways across three biological conditions of interest. (C) 2017 Statistical Society of Canada
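The growing gap between hierarchical and non-hierarchical models can be illustrated with a toy prior-level computation, not the paper's actual model: compare the marginal distribution over graphs induced by a hierarchical Beta-Bernoulli edge-inclusion prior (shared inclusion probability with a Beta(1,1) hyperprior, integrated out) against a fixed independent-Bernoulli(0.5) prior. The function name, the Beta(1,1) choice, and the fixed probability 0.5 are all illustrative assumptions.

```python
from math import comb, log

def kl_hier_vs_indep(K, q=0.5):
    """KL(hierarchical || independent) over graphs with K candidate edges.

    Hierarchical: edge indicators are exchangeable given a shared inclusion
    probability pi ~ Beta(1, 1); marginalizing pi, a graph with k included
    edges has probability 1 / ((K + 1) * comb(K, k)).
    Independent (non-hierarchical): each edge is included with fixed
    probability q, so the same graph has probability q^k (1-q)^(K-k).
    """
    kl = 0.0
    for k in range(K + 1):
        p_graph = 1.0 / ((K + 1) * comb(K, k))   # one graph with k edges, hierarchical
        q_graph = q**k * (1 - q)**(K - k)        # same graph, independent prior
        # comb(K, k) graphs share these probabilities, so weight the term accordingly
        kl += comb(K, k) * p_graph * log(p_graph / q_graph)
    return kl

# The divergence grows with the number of candidate edges K:
for K in (2, 6, 12):
    print(K, kl_hier_vs_indep(K))
```

With a single candidate edge the two priors coincide and the divergence is zero; as K grows the exchangeable hierarchical prior concentrates differently over graph sizes than the product prior, and the KL divergence increases, mirroring (at the prior level) the posterior behaviour described above.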