The Traveling Salesman Problem (TSP) is a well-known combinatorial optimization problem that has attracted extensive research on exact methods and heuristics. However, the scalability and generalization of learning-based approaches remain significant challenges. This paper addresses this gap by proposing a framework that integrates supervised learning with traditional solution methods. In particular, we introduce the concept of "anchors": nodes that are connected to their nearest neighbors in the optimal solution. Unlike previous supervised learning approaches that take the whole distance matrix as input, our neural network relies on local information to make predictions, which enables it to handle arbitrarily large TSP instances without a decline in prediction accuracy. Experimental results demonstrate that our model identifies 87% of the anchors with a precision of over 95% on both generated instances and TSPLIB instances that were unseen during training. We evaluate the proposed framework by integrating the predicted anchors into existing methods such as the Miller-Tucker-Zemlin (MTZ) formulation and insertion algorithms, and observe substantial improvements in solution quality and computational time. Compared with the graph pointer network, a state-of-the-art learning-based approach, the proposed algorithm achieves a 28% reduction in the average optimality gap and a 59% decrease in computational time on TSP instances with 1,000 nodes.
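To make the notion of an anchor concrete, the following minimal sketch labels anchor nodes for a small instance. It assumes a distance matrix and a tour given as a node sequence; in the paper the definition is stated with respect to the optimal tour, whereas here an arbitrary tour is used purely for illustration. The function name `find_anchors` and the toy data are illustrative, not from the paper.

```python
import numpy as np

def find_anchors(dist, tour):
    """Return nodes whose nearest neighbor is adjacent to them in the given tour.

    This applies the paper's anchor definition (a node connected to its
    nearest neighbor in the optimal solution) to an arbitrary tour, for
    illustration only.
    """
    n = len(tour)
    pos = {node: i for i, node in enumerate(tour)}
    anchors = []
    for v in range(n):
        # Nearest neighbor of v, excluding v itself.
        d = dist[v].copy()
        d[v] = np.inf
        nn = int(np.argmin(d))
        # Neighbors of v along the tour (cyclic).
        i = pos[v]
        left, right = tour[(i - 1) % n], tour[(i + 1) % n]
        if nn in (left, right):
            anchors.append(v)
    return anchors

# Tiny example: 5 random points in the unit square and a simple tour 0-1-2-3-4.
rng = np.random.default_rng(0)
pts = rng.random((5, 2))
dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
print(find_anchors(dist, [0, 1, 2, 3, 4]))
```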