For effective HRI, robots must not only make their intentions legible through their actions, but also ground the degree of uncertainty they have. We show how, in simple robots with spoken language understanding capabilities, uncertainty can be communicated to users through principles of grounding in dialogue interaction, even in the absence of natural language generation. We present a model which makes this possible for robots whose communication channels are limited to little beyond the execution of task actions themselves. We implement our model in a pick-and-place robot and experiment with two strategies for grounding uncertainty. In an observer study, we show that participants watching interactions with the robot under the two different strategies were able to infer the robot's internal degree of understanding, and that with the more uncertainty-expressive strategy, they were also able to reliably perceive the robot's internal degree of uncertainty.
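As a minimal illustrative sketch of the kind of mechanism the abstract describes, and not the paper's actual model, one way such a strategy could be realized is as a policy that maps speech-understanding confidence onto non-verbal grounding behaviors. All class names, action labels, and thresholds below are hypothetical assumptions introduced for this example.

```python
# Hypothetical sketch: mapping internal uncertainty to observable
# task-level behavior, so observers can infer confidence without NLG.
# Thresholds and action labels are assumptions, not the paper's model.
from dataclasses import dataclass
from enum import Enum, auto


class GroundingAction(Enum):
    """Non-verbal ways a robot without language generation might ground."""
    EXECUTE = auto()        # high confidence: act immediately (implicit grounding)
    HESITATE = auto()       # medium confidence: begin slowly, inviting correction
    HOLD_AND_GAZE = auto()  # low confidence: pause over candidate, await feedback


@dataclass
class Interpretation:
    """A candidate reading of the user's spoken command."""
    target_object: str
    confidence: float  # e.g., a normalized ASR/NLU score in [0, 1]


def ground_uncertainty(interp: Interpretation,
                       high: float = 0.8,
                       low: float = 0.4) -> GroundingAction:
    """Map internal confidence to an observable grounding behavior."""
    if interp.confidence >= high:
        return GroundingAction.EXECUTE
    if interp.confidence >= low:
        return GroundingAction.HESITATE
    return GroundingAction.HOLD_AND_GAZE


if __name__ == "__main__":
    for conf in (0.95, 0.6, 0.2):
        cmd = Interpretation(target_object="red block", confidence=conf)
        print(f"confidence={conf:.2f} -> {ground_uncertainty(cmd).name}")
```

The design point, consistent with the abstract, is that uncertainty is exposed through the manner of task execution rather than through generated speech.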