In the past few years, human-robot deception has received growing attention across several fields (e.g., human-robot interaction, law, philosophy, and psychology). While deception in both human-human and human-robot interactions may have positive consequences, it remains philosophically and psychologically controversial. In particular, verbal deception (i.e., lies or misleading information) may at times be judged as intentional behaviour. While intentionality has been recognised as fundamental to the development of trust, it is not yet clear which mechanisms can be designed to foster trust, nor what issues deception may raise. To this end, in this study we investigate whether the ability to mentalise may be one such mechanism. We conducted a user study during a public fair in which participants played an assistive game with a robot endowed with Theory of Mind (ToM). During the game, the robot could occasionally behave deceptively, suggesting the wrong move to the human players. We collected responses from 37 participants to evaluate their perceived trust in the robot. Our results showed that the deceptive robot was trusted less than the non-deceptive one. We also found that people's perception of the robot was positively affected by the frequency of their exposure to deception (i.e., wrong suggestions).