Exploring a proper way to fuse multiple speech features for cross-corpus speech emotion recognition is crucial, as different audio features can provide complementary cues reflecting a speaker's emotional state. Speech emotion recognition allows computers to analyze the emotional condition of a speaker from speech, which is of great significance to the development of human-computer interaction technology. While most previous approaches extract only a single speech feature for emotion recognition, existing fusion methods such as concatenation, parallel connection, and splicing ignore heterogeneous patterns in the interactions between features, limiting the performance of existing systems. In this paper, we propose a novel graph-based fusion method that explicitly models the relationship between every pair of audio features, providing a new research direction for speech feature fusion. Specifically, we propose a multi-dimensional edge feature learning strategy, a graph-based multi-feature fusion method for speech emotion recognition. It represents each speech feature as a node and learns multi-dimensional edge features to explicitly describe the relationship between each feature-feature pair in the context of emotion recognition. In this way, the learned multi-dimensional edge features encode speech feature-level information from both the vertex and edge dimensions. Our approach consists of three modules: an Audio Feature Generation (AFG) module, an Audio-Feature Multi-dimensional Edge Feature (AMEF) module, and a Speech Emotion Recognition (SER) module. The proposed methodology yielded satisfactory outcomes on the SEWA dataset and demonstrated enhanced performance compared to the baseline of the AVEC 2019 Workshop and Challenge.
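The graph construction described above can be sketched as follows. This is a hypothetical, simplified illustration only: each speech feature vector becomes a node, and every ordered node pair receives a multi-dimensional edge feature. The edge descriptor used here (element-wise product concatenated with absolute difference) and the averaging aggregation are stand-ins for the learned AMEF and SER modules, not the paper's actual implementation.

```python
# Hypothetical sketch of graph-based multi-feature fusion:
# nodes = individual speech feature vectors; edges = multi-dimensional
# descriptors of each feature-feature pair. The edge function and the
# aggregation below are illustrative placeholders for the learned modules.

def edge_feature(u, v):
    # Multi-dimensional edge descriptor for the node pair (u, v):
    # element-wise product concatenated with absolute difference.
    return [a * b for a, b in zip(u, v)] + [abs(a - b) for a, b in zip(u, v)]

def fuse(features):
    """Fuse a list of per-feature node vectors into one representation
    by averaging all pairwise edge features (a crude stand-in for the
    learned aggregation that would feed the emotion recognizer)."""
    edges = [edge_feature(u, v)
             for i, u in enumerate(features)
             for j, v in enumerate(features) if i != j]
    dim = len(edges[0])
    return [sum(e[k] for e in edges) / len(edges) for k in range(dim)]

# Example: three 4-dimensional speech features
# (e.g., summaries of MFCC, eGeMAPS, and spectrogram statistics).
nodes = [[0.1, 0.2, 0.3, 0.4],
         [0.5, 0.1, 0.0, 0.2],
         [0.3, 0.3, 0.3, 0.3]]
fused = fuse(nodes)
print(len(fused))  # edge dimension = 2 * node dimension = 8
```

In the actual method, the edge features are learned jointly with the recognizer rather than fixed, which is what allows heterogeneous feature-feature interaction patterns to be captured.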
We used data from two cultures in the SEWA dataset, German and Hungarian, as our training and validation sets. For German, the CCC scores are improved by 17.28% for arousal and 7.93% for liking; for Hungarian, the CCC scores are improved by 11.15% for arousal and 131.11% for valence. Our methodology demonstrates a 13% improvement over alternative fusion techniques, including one-dimensional edge-based feature fusion approaches. Experiments on a subset of the Aff-Wild2 dataset demonstrate that our approach exhibits a certain degree of generalizability and robustness. Code is available at https://github.com/ChaosWang666/Graph-based-multi-Feature-fusion-method. © 2024 World Scientific Publishing Company.