The practical potential of Large Language Models (LLMs) depends in part on their ability to accurately interpret pragmatic functions. In this article, we assess ChatGPT 3.5's ability to identify and interpret linguistic impoliteness across a series of text examples. We provided ChatGPT 3.5 with instances of implicational, metalinguistic, and explicit impoliteness, alongside sarcasm, unpalatable questions, erotic talk, and unmarked impolite linguistic behavior, asking (i) whether impoliteness was present, and (ii) what its source was. We then further tested the model's ability to identify impoliteness by asking it to remove impoliteness from a series of text examples. ChatGPT 3.5 generally performed well, recognizing both conventionalized lexicogrammatical forms and context-sensitive cases. However, it did not account for all instances of impoliteness. In some cases, the model was more sensitive to potentially offensive expressions than humans are, as a result of its design, its training, and/or its limited ability to determine the situational context of the examples. We also found that the model sometimes had difficulty interpreting impoliteness generated through implicature. Given that impoliteness is a complex and multifunctional phenomenon, we consider our findings to contribute both to raising public awareness of the use of AI technologies and to improving their safety, transparency, and reliability.