Mobile edge computing (MEC) has emerged as a key solution for meeting the demands of computation-intensive network services by providing computational resources at the network edge, thereby minimizing service delays. Leveraging their flexible deployment, wide coverage, and reliable wireless communication, unmanned aerial vehicles (UAVs) have been integrated into MEC systems to enhance performance. This paper investigates the task offloading problem in a multi-UAV-assisted MEC environment and proposes a collaborative optimization framework that integrates a Distance to Task Location and Capability Match (DTLCM) mechanism with the Multi-Agent Deep Deterministic Policy Gradient (MADDPG) algorithm. Unlike traditional task-priority-based offloading schemes, the proposed approach selects the serving UAV for each task according to both its computational capability and its spatial proximity to the task. The system gain is defined in terms of energy efficiency and task delay, and the optimization is formulated as a mixed-integer programming problem. To solve this complex problem efficiently, a multi-agent deep reinforcement learning framework is employed, combining MADDPG with DTLCM to jointly optimize UAV trajectories, task offloading decisions, computational resource allocation, and communication resource management. Comprehensive simulations demonstrate that the proposed MADDPG-DTLCM framework significantly outperforms three baseline methods (MADQN, MADDPG without DTLCM, and greedy offloading), achieving 18% higher task completion rates and 12% lower latency under varying network conditions, particularly in high-user-density scenarios with UAV collaboration.
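Because the abstract describes DTLCM only at a high level, the following is a minimal sketch of how such a selection rule might combine distance to the task location with a capability match; the score weights (w_dist, w_cap), field names, and scoring function are illustrative assumptions, not the paper's actual definition.

```python
import math
from dataclasses import dataclass

@dataclass
class UAV:
    uav_id: int
    x: float
    y: float
    cpu_hz: float      # available compute capacity (CPU cycles/s)

@dataclass
class Task:
    x: float
    y: float
    cycles: float      # CPU cycles the task requires
    deadline_s: float  # latency budget in seconds

def dtlcm_select(uavs, task, w_dist=0.5, w_cap=0.5):
    """Pick the UAV with the best combined distance/capability score.

    Hypothetical scoring: a smaller normalized distance to the task and a
    compute capacity closer to (or above) the task's required rate both
    improve the score; the UAV with the highest score is selected.
    """
    # Normalize distances so the two score terms are on comparable scales.
    max_dist = max(math.hypot(u.x - task.x, u.y - task.y) for u in uavs) or 1.0
    # Compute rate (cycles/s) needed to finish the task within its deadline.
    required_hz = task.cycles / task.deadline_s

    def score(u):
        dist = math.hypot(u.x - task.x, u.y - task.y) / max_dist  # in [0, 1]
        cap_match = min(u.cpu_hz / required_hz, 1.0)              # 1.0 = fully capable
        return w_cap * cap_match - w_dist * dist                  # higher is better

    return max(uavs, key=score)

uavs = [UAV(0, 0.0, 0.0, 2e9), UAV(1, 50.0, 80.0, 5e9)]
task = Task(40.0, 60.0, 1e9, 0.5)
print(dtlcm_select(uavs, task).uav_id)
```

Normalizing the distance term keeps it commensurate with the capped capability ratio, so the weights express a clean trade-off between proximity and compute headroom; in the actual framework this score would feed the offloading decision that MADDPG optimizes jointly with trajectories and resource allocation.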