Vision Transformer-based pilot pose estimation

Cited by: 0
Authors
Wu, Honglan [1]
Liu, Hao [1]
Sun, Youchao [1]
Affiliation
[1] College of Civil Aviation, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China
Keywords
Convolutional neural networks
DOI
10.13700/j.bh.1001-5965.2022.0811
Abstract
Human pose estimation is an important aspect of behavior perception and a key technology for intelligent interaction in civil aircraft cockpits. To establish an explainable link between the complex lighting environment of the civil aircraft cockpit and the performance of a pilot pose estimation model, a Vision Transformer-based pilot pose (ViTPPose) estimation model is proposed. To capture global correlations among higher-order features while enlarging the receptive field, the model appends a two-branch Transformer module with several encoding layers to the end of a convolutional neural network (CNN) backbone; the encoding layers combine the Transformer with dilated convolution. Based on the flight crew's standard operating procedures, a pilot maneuvering behavior keypoint detection dataset is built for flight simulation scenarios. The ViTPPose estimation model performs pilot seated-pose estimation on this dataset, and its effectiveness is verified by comparison with benchmark models. Seated-pose estimation heatmaps are generated under the cockpit's complex lighting to examine the lighting intensities the model favors, evaluate the ViTPPose model's performance under different lighting conditions, and highlight the model's dependence on lighting intensity. © 2024 Beijing University of Aeronautics and Astronautics (BUAA). All rights reserved.
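The abstract only sketches the architecture at a high level, so the following is a minimal PyTorch illustration of the stated idea: an encoding layer that pairs a Transformer self-attention branch (global correlation) with a dilated-convolution branch (enlarged receptive field) on top of CNN backbone features. The layer name, tensor sizes, and additive fusion are assumptions for illustration, not the authors' published ViTPPose design.

```python
# Hedged sketch of a two-branch encoding layer combining self-attention
# with dilated convolution over CNN backbone features. All names and the
# additive fusion are illustrative assumptions, not the paper's architecture.
import torch
import torch.nn as nn


class TwoBranchEncodingLayer(nn.Module):
    """Hypothetical encoding layer: global self-attention + dilated convolution."""

    def __init__(self, dim: int = 256, num_heads: int = 8, dilation: int = 2):
        super().__init__()
        # Branch 1: multi-head self-attention captures global correlations.
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        # Branch 2: dilated 3x3 convolution enlarges the receptive field.
        self.dilated_conv = nn.Conv2d(dim, dim, kernel_size=3,
                                      padding=dilation, dilation=dilation)

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        # feat: CNN backbone feature map of shape (B, C, H, W).
        b, c, h, w = feat.shape
        tokens = feat.flatten(2).transpose(1, 2)          # (B, H*W, C)
        attn_out, _ = self.attn(tokens, tokens, tokens)   # global branch
        attn_out = self.norm(tokens + attn_out)
        attn_out = attn_out.transpose(1, 2).reshape(b, c, h, w)
        conv_out = self.dilated_conv(feat)                # local, dilated branch
        return attn_out + conv_out                        # simple additive fusion


if __name__ == "__main__":
    backbone_feat = torch.randn(1, 256, 16, 12)   # e.g. features for a 256x192 crop
    layer = TwoBranchEncodingLayer()
    print(layer(backbone_feat).shape)             # torch.Size([1, 256, 16, 12])
```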
Pages: 3100-3110