Wireless networks have evolved rapidly over the last decade, and modern wireless networking has become a complex tangle to manage, serving an ever-growing number of end devices through a plethora of technologies. The broad range of use cases supported by wireless networking calls for smarter resource-allocation approaches that make the most of scarce wireless resources. We address the problem of user association (UA) in wireless systems. We consider a particularly challenging setup for UA, represented by modern ad-hoc networks such as flying ad-hoc networks (FANETs), where connectivity is provided by a group of unmanned aerial vehicles (UAVs). We introduce GROWS, a Deep Reinforcement Learning (DRL)-driven approach that efficiently connects wireless users to the network, leveraging Graph Neural Networks (GNNs) to better model the expected-reward function. While GROWS is not tied to any specific wireless technology, the decentralized nature of FANETs and their lack of pre-existing infrastructure make them a perfect case study. We show that GROWS learns UA policies for FANETs that largely outperform currently used association heuristics, achieving up to 20% higher throughput utility while reducing user rejections by more than 90%, and that these policies are robust to concept drift in the expected traffic load, maintaining their performance gains under previously unseen traffic loads.
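To make the GNN-driven idea concrete, the following is a minimal, purely illustrative sketch (not the paper's implementation): one round of message passing over the bipartite user-UAV graph, producing per-edge association scores of the kind a DRL agent could use as expected-reward estimates. The feature choices (per-user demanded rate, per-UAV residual capacity, per-link SNR) and the hand-picked weights `w` are assumptions standing in for learned parameters.

```python
from collections import defaultdict

def association_scores(user_feats, uav_feats, edges, w=(1.0, -1.0, 0.5)):
    """One message-passing round on a bipartite user-UAV graph.

    Hypothetical example, not the GROWS architecture:
      user_feats[u] -- demanded rate of user u (assumed feature)
      uav_feats[v]  -- residual capacity of UAV v (assumed feature)
      edges         -- [(u, v, snr)] candidate links within radio range
    Returns a score per (user, UAV) edge; higher = more attractive.
    """
    # Step 1: each UAV aggregates the demand of the users in its range
    # (mean aggregation, as in a simple GNN message-passing layer).
    demand = defaultdict(list)
    for u, v, _ in edges:
        demand[v].append(user_feats[u])
    load = {v: sum(d) / len(d) for v, d in demand.items()}

    # Step 2: score every candidate edge from the UAV's own capacity,
    # its aggregated neighborhood demand, and the link SNR; the weights
    # w stand in for parameters a DRL agent would learn.
    w_cap, w_load, w_snr = w
    return {
        (u, v): w_cap * uav_feats[v] + w_load * load[v] + w_snr * snr
        for u, v, snr in edges
    }
```

A greedy policy could then associate each user to its highest-scoring UAV, with the key difference from max-SNR heuristics being that the score also reflects the state of a UAV's whole neighborhood.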