Multi-aircraft collaborative batching method based on self-organizing clustering

Cited by: 0
Authors
Zhang S. [1 ]
Jin T. [2 ]
Zhang Y. [3 ]
Zhou R. [3 ]
Ran H. [4 ]
Zhou L. [4 ]
Affiliations
[1] Shenyang Aircraft Design and Research Institute of AVIC, Shenyang
[2] China Academy of Aerospace Science and Innovation, Beijing
[3] School of Automation Science and Electrical Engineering, Beihang University, Beijing
[4] CETC Key Laboratory of Avionic Information System Technology, Chengdu
Keywords
clustering; high-dimensional situation information; hyperparameter; multi-aircraft collaborative batching; self-organization;
DOI
10.13374/j.issn2095-9389.2023.10.09.002
Abstract
This article addresses the batching problem in multi-aircraft collaborative operations by proposing a method based on improved self-organizing iterative clustering. The approach circumvents the inconvenient and non-intuitive manual parameter setting required by the traditional self-organizing iterative clustering algorithm: given a small number of intuitive hyperparameters, the aircraft autonomously adjust the parameters involved in the clustering process and iterate toward reasonable batching results. First, the article selects feature vectors for the multi-aircraft collaborative confrontation situation. It applies standardization and principal component analysis to the high-dimensional situation information to determine a new vector space, which mainly comprises three-dimensional position information and velocity information. Next, the paper introduces the concept of neighborhood density discrimination from density-based clustering to improve the merging and splitting operations of the traditional self-organizing iterative clustering method. This reduces the manually set parameters involved in these operations and enhances the autonomy of the batching task. Before optimization, the manual parameters primarily include the expected number of clusters, the minimum number of points within a class, the number of iterations, the upper limit on the standard deviation of the data distribution within a class, and the shortest allowable distance between classes. After optimization, the manual parameters are limited to the expected number of clusters, the minimum number of points within a class, and the number of iterations. These remaining parameters are relatively intuitive, and the algorithm output is not strongly sensitive to them. Finally, the paper selects the Dunn, Davies–Bouldin, silhouette coefficient, and Calinski–Harabasz indices to evaluate the proposed ISODATA+ and KMEANS+ algorithms, along with the original ISODATA algorithm, on several artificially synthesized data sets (completely random data, Gaussian-generated data, and sine-type data) and on real-world scenarios. The experimental results suggest that although KMEANS+ shows significant advantages owing to its multiple manually set hyperparameters, it requires repeated tuning whenever the parameters change, which increases the complexity of the task. Compared with the original self-organizing iterative algorithm ISODATA, the statistical results show that the improved algorithm has equivalent clustering capability, demonstrating that ISODATA+ maintains good clustering performance even after some manual parameters are removed. The batching results from tests on actual scenarios further illustrate the effectiveness of the improved self-organizing iterative clustering algorithm in specific application scenarios, demonstrating its practicability for future real-world applications. © 2024 Science Press. All rights reserved.
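The abstract describes a pipeline of standardization, principal component analysis of position/velocity situation features, clustering, and scoring with the Dunn, Davies–Bouldin, silhouette, and Calinski–Harabasz indices. The sketch below is a minimal, hypothetical illustration of that evaluation pipeline, not the authors' ISODATA+ implementation: the simulated feature values, the KMeans stand-in for the clustering step, and the dunn_index helper are all assumptions made for illustration.

```python
# Hypothetical sketch (not the paper's code): standardize and PCA-project
# simulated multi-aircraft situation features, cluster them, and score the
# result with the four indices named in the abstract.
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.metrics import (silhouette_score, davies_bouldin_score,
                             calinski_harabasz_score)

rng = np.random.default_rng(0)
# Simulated situation vectors: 3-D position (km) and 3-D velocity (m/s)
# for 60 aircraft spread over three loose groups (values are assumptions).
centers = np.array([[0, 0, 8, 200, 0, 0],
                    [40, 10, 9, -150, 100, 0],
                    [15, 50, 7, 0, -220, 0]], dtype=float)
X = np.vstack([c + rng.normal(scale=[3, 3, 0.5, 20, 20, 5], size=(20, 6))
               for c in centers])

# Standardize, then keep the principal components covering most of the
# variance (the abstract reports a position + velocity subspace).
Z = PCA(n_components=0.95).fit_transform(StandardScaler().fit_transform(X))

# KMeans is only a stand-in here; the paper's ISODATA+ adapts the cluster
# count itself through density-guided merge/split operations.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(Z)

def dunn_index(Z, labels):
    """Minimum inter-cluster distance divided by maximum cluster diameter."""
    clusters = [Z[labels == k] for k in np.unique(labels)]
    diam = max(cdist(c, c).max() for c in clusters)
    sep = min(cdist(a, b).min()
              for i, a in enumerate(clusters)
              for b in clusters[i + 1:])
    return sep / diam

print("Dunn             :", dunn_index(Z, labels))
print("Davies-Bouldin   :", davies_bouldin_score(Z, labels))
print("Silhouette       :", silhouette_score(Z, labels))
print("Calinski-Harabasz:", calinski_harabasz_score(Z, labels))
```

The same four scores can be computed for any batching output, which is how the abstract's comparison between ISODATA+, KMEANS+, and the original ISODATA on synthetic and real-scenario data can be reproduced in principle.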
Pages: 1269-1278
Page count: 9