MAG-Vision: A Vision Transformer Backbone for Magnetic Material Modeling

Cited by: 0
Authors
Zhang, Rui [1 ]
Shen, Lei [1 ]
Affiliations
[1] Hangzhou Dianzi Univ, Sch Automat, Hangzhou 310018, Zhejiang, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Transformers; Magnetic hysteresis; Magnetic cores; Training; Magnetic flux; Core loss; Complexity theory; Magnetic materials; Vectors; Saturation magnetization; deep learning; hysteresis loop; power magnetics; vision Transformer (ViT);
DOI
10.1109/TMAG.2025.3527486
CLC Number
TM [Electrical Technology]; TN [Electronic and Communication Technology];
Discipline Code
0808; 0809;
Abstract
Neural network-based methods for modeling magnetic materials enable estimation of the hysteresis B-H loop and core loss across a wide operating range. Transformers are neural networks widely used in sequence-to-sequence tasks, but the classical Transformer modeling approach suffers from high per-layer complexity and long recurrent inference time when handling long sequences. Down-sampling methods can mitigate these issues, but they often sacrifice modeling accuracy. In this study, we propose MAG-Vision, which employs a vision Transformer (ViT) as the backbone for magnetic material modeling and shortens waveform sequences with minimal loss of information. We trained the network on the open-source magnetic core loss dataset MagNet. Experimental results demonstrate that MAG-Vision performs well in estimating hysteresis B-H loops and magnetic core losses; the average relative error of core loss is below 2% for most materials. Further experiments compare MAG-Vision with other network structures to validate its advantages in accuracy, training speed, and inference time.
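The core idea the abstract describes — shortening a long waveform sequence via a ViT-style patch embedding before it reaches the Transformer encoder — can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the function name `patch_embed_1d`, the random projection (learned in practice), and all shapes are illustrative choices.

```python
import numpy as np

def patch_embed_1d(waveform, patch_size, embed_dim, rng=None):
    """Split a 1-D waveform into non-overlapping patches and project
    each patch to an embedding vector, ViT-style.

    Returns an array of shape (num_patches, embed_dim); the sequence
    the Transformer encoder then attends over is shorter than the raw
    waveform by a factor of patch_size.
    """
    rng = rng or np.random.default_rng(0)
    n = len(waveform)
    assert n % patch_size == 0, "sequence length must divide patch_size evenly"
    # Reshape into (num_patches, patch_size) windows.
    patches = waveform.reshape(n // patch_size, patch_size)
    # Linear projection of each patch; learned weights in a real model,
    # random here purely for illustration.
    proj = rng.standard_normal((patch_size, embed_dim))
    return patches @ proj

# A 1024-sample flux-density waveform becomes 64 tokens with patch_size=16.
b_wave = np.sin(np.linspace(0.0, 2.0 * np.pi, 1024, endpoint=False))
tokens = patch_embed_1d(b_wave, patch_size=16, embed_dim=32)
print(tokens.shape)  # (64, 32)
```

Because each token summarizes a whole patch rather than discarding samples, this kind of embedding shortens the sequence with far less information loss than plain down-sampling, which is the trade-off the abstract contrasts.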
Pages: 6