What if an uncanny robot nudges you to behave in a certain way? Once design traits are taken into account, are there ethical implications hidden in the folds of robot-nudging? This paper aims to answer the latter question, drawing on the literature on robot-nudging and the uncanny valley. We proceed as follows. First, we define the concept of "nudge" and outline the primary ethical concern associated with nudging, namely the erosion of decision-making autonomy (1). We then narrow our focus to cases where robots, rather than humans, do the nudging, and discuss the still inconclusive research on the ethically relevant aspects of robot-nudging (2). The final and original part of the paper addresses one specific ethical aspect of employing uncanny robots as nudgers (3). We hypothesize that certain design traits of robot-nudgers could compromise their capacity to nudge, potentially to the point of producing ethically undesirable consequences.