The class-imbalance problem arises when the class labels of a dataset have a skewed distribution: the instances of one class, which is typically the class of interest, are heavily outnumbered by the instances of the other classes. In recent years, feature selection for high-dimensional imbalanced data has become an active research area. This technique selects an informative feature subset to improve the accuracy of the classification model. Moreover, as a subcategory of feature selection, feature ranking has been investigated over the last decade to cope with high-dimensional datasets. On the one hand, most traditional feature selection methods are not scalable, which is critical for handling large-scale datasets. On the other hand, scalability is an intrinsic characteristic of the ensemble learning approach. This paper proposes a Distributed Ensemble Imbalanced feature selection framework, called DEIM, to deal with big imbalanced datasets. DEIM first transforms the default data partitions into representative partitions in a single pass. Second, it applies a feature ranking method with a bagging approach to each partition independently. Finally, it fuses the intermediate feature rankings with a stacking strategy. In this paper, two traditional feature ranking algorithms, ReliefF and QPFS, are plugged into DEIM, yielding two methods, DEIM-Relief and DEIM-QPFS. Experiments are conducted on three big imbalanced datasets on a computer cluster. The empirical study shows that the produced methods are scalable, that they have lower execution times, and that their final results induce better classification models than DiReliefF and DQPFS.
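The last two phases of the pipeline above (per-partition bagged ranking, then fusion of the intermediate rankings) can be sketched as follows. This is a minimal illustrative sketch, not the paper's algorithm: the function names are hypothetical, the variance-based scorer is a stand-in for ReliefF or QPFS, and the fusion step approximates the stacking strategy with a simple Borda count. The single-pass construction of representative partitions (phase one) is omitted; the input partitions are assumed to be representative already.

```python
# Hypothetical sketch of DEIM's bagging and fusion phases.
# The ranking scorer and Borda-style fusion are illustrative assumptions.
import random
from collections import defaultdict

def rank_features(partition):
    """Stand-in scorer: rank feature indices by per-feature variance
    (placeholder for a real ranker such as ReliefF or QPFS)."""
    n_features = len(partition[0])
    def variance(j):
        col = [row[j] for row in partition]
        mean = sum(col) / len(col)
        return sum((v - mean) ** 2 for v in col) / len(col)
    return sorted(range(n_features), key=variance, reverse=True)

def bagged_ranking(partition, n_bags=3, seed=0):
    """Phase 2 (bagging): rank several bootstrap samples of one
    partition and fuse the per-bag rankings by mean rank position."""
    rng = random.Random(seed)
    positions = defaultdict(list)
    for _ in range(n_bags):
        bag = [rng.choice(partition) for _ in range(len(partition))]
        for pos, feat in enumerate(rank_features(bag)):
            positions[feat].append(pos)
    return sorted(positions, key=lambda f: sum(positions[f]) / len(positions[f]))

def fuse_rankings(rankings):
    """Phase 3 (fusion): combine per-partition rankings; the stacking
    strategy is approximated here by a Borda count."""
    n = len(rankings[0])
    score = defaultdict(int)
    for ranking in rankings:
        for pos, feat in enumerate(ranking):
            score[feat] += n - pos  # better rank -> more points
    return sorted(score, key=score.get, reverse=True)

def deim_sketch(partitions):
    """Rank each partition independently, then fuse the rankings."""
    return fuse_rankings([bagged_ranking(p) for p in partitions])
```

In a real distributed setting the `bagged_ranking` calls would run on the cluster nodes holding each partition, and only the compact intermediate rankings would be shipped to the fusion step, which is what makes the ensemble design scalable.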