With the explosive growth of mobile computing, federated learning (FL) has emerged as a promising distributed training framework that addresses the shortcomings of conventional cloud-based centralized training. In FL, local model owners (LMOs) individually train their respective local models and upload them to the task publisher (TP), which aggregates them into the global model. When the data held by the LMOs do not meet the requirements of model training, the LMOs can recruit workers to collect data. In this paper, by considering the interactions among the TP, LMOs, and workers, we propose a three-layer hierarchical game framework. This framework faces two challenges. First, information asymmetry between workers and LMOs may allow workers to hide their types. Second, incentive mismatch between the TP and LMOs may leave LMOs unwilling to participate in FL. We therefore decompose the hierarchical framework into two layers to address these challenges. For the lower layer, we leverage contract theory to ensure truthful reporting of the workers' types, based on which we simplify the feasibility conditions of the contract and design the optimal contract. For the upper layer, a Stackelberg game is adopted to model the interactions between the TP and the LMOs, and we derive the Nash equilibrium and Stackelberg equilibrium solutions. Moreover, we develop an iterative Hierarchical-based Utility Maximization Algorithm (HUMA) to solve the coupling between the upper-layer and lower-layer games. Extensive numerical results verify the effectiveness of HUMA, and comparison results illustrate its performance gain.
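The two-layer structure described above can be illustrated with a minimal backward-induction sketch: the leader (TP) anticipates the followers' (LMOs') best responses when choosing its reward. The utility forms and cost parameters below are illustrative assumptions for exposition only, not the paper's actual model, and the grid-search leader step stands in for the paper's iterative HUMA procedure.

```python
# Hypothetical sketch of the upper-layer Stackelberg game: the TP (leader)
# sets a unit reward; each LMO (follower) best-responds with a contribution.
# Assumed utilities (NOT from the paper):
#   LMO i:  u_i(x) = reward * x - cost_i * x^2   ->  best response x = reward / (2 * cost_i)
#   TP:     U(r)   = alpha * log(1 + sum_i x_i) - r * sum_i x_i
import math

def lmo_best_response(reward, cost):
    # Follower's problem: maximize reward*x - cost*x^2 over x >= 0.
    return reward / (2.0 * cost)

def tp_utility(reward, costs, alpha=10.0):
    # Leader's payoff, evaluated at the followers' best responses.
    total = sum(lmo_best_response(reward, c) for c in costs)
    return alpha * math.log(1.0 + total) - reward * total

def solve_stackelberg(costs, grid=None):
    # Leader step via grid search over candidate rewards (a stand-in for
    # the paper's iterative solution of the coupled upper/lower layers).
    grid = grid or [0.1 * k for k in range(1, 101)]
    best_r = max(grid, key=lambda r: tp_utility(r, costs))
    return best_r, [lmo_best_response(best_r, c) for c in costs]

r, xs = solve_stackelberg([1.0, 2.0, 4.0])
```

At the returned pair, no LMO can gain by deviating from its best response, and the TP's reward is optimal over the search grid, which is the Stackelberg equilibrium notion the abstract refers to.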