Artificial Intelligence (AI)-based techniques are typically used to model decision making in terms of strategies and mechanisms that can yield optimal payoffs for a number of interacting entities, which often exhibit antagonistic behaviors. In this paper, we propose an AI-enabled multi-access edge computing (MEC) framework, supported by computing-equipped Unmanned Aerial Vehicles (UAVs), to facilitate IoT applications. First, the problem of determining the IoT nodes' optimal data offloading strategies to the UAV-mounted MEC servers, while accounting for the IoT nodes' communication and computation overhead, is formulated as a game-theoretic model. The existence of at least one Pure Nash Equilibrium (PNE) is shown by proving that the game is submodular. Furthermore, different operation points (i.e., offloading strategies) are obtained and studied, based either on the outcome of the Best Response Dynamics (BRD) algorithm or on alternative reinforcement learning approaches (i.e., gradient ascent, log-linear, and Q-learning algorithms), which explore and learn the environment to determine the users' stable data offloading strategies. The outcomes and inherent features of these approaches are critically compared against each other via modeling and simulation.
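To illustrate the flavor of the BRD procedure described above, the following is a minimal sketch of best response dynamics for a binary offloading game. The cost model (a per-node fixed local computation cost versus an offloading cost that grows with the number of offloaders sharing the UAV channel and server) and all parameter names are illustrative assumptions, not the paper's actual formulation.

```python
# Hypothetical illustration of Best Response Dynamics (BRD) for a binary
# data offloading game. The congestion-style cost model below is an
# assumption for illustration only; it is not the paper's formulation.

def brd_offloading(local_costs, comm_cost=1.0, server_cost=0.5, max_rounds=100):
    """Each IoT node offloads (1) or computes locally (0).

    Offloading cost grows linearly with the number of offloaders
    (shared UAV channel/server), so nodes impose congestion on each
    other; the local computation cost is fixed per node.
    """
    n = len(local_costs)
    strategy = [0] * n  # start with every node computing locally
    for _ in range(max_rounds):
        changed = False
        for i in range(n):
            others = sum(strategy) - strategy[i]
            # cost for node i to offload, given the others' current choices
            offload_cost = comm_cost * (others + 1) + server_cost
            best = 1 if offload_cost < local_costs[i] else 0
            if best != strategy[i]:
                strategy[i] = best
                changed = True
        if not changed:
            # no node can improve by deviating unilaterally: a PNE
            return strategy
    return strategy

equilibrium = brd_offloading([4.0, 2.5, 1.0, 5.0])
```

In this toy instance, nodes with a high local computation cost end up offloading while the rest compute locally, and the loop terminates once no node changes its strategy, i.e., at a Pure Nash Equilibrium of the assumed congestion game.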