MAG-Vision: A Vision Transformer Backbone for Magnetic Material Modeling

Cited by: 0
Authors
Zhang, Rui [1 ]
Shen, Lei [1 ]
Affiliations
[1] Hangzhou Dianzi Univ, Sch Automat, Hangzhou 310018, Zhejiang, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Transformers; Magnetic hysteresis; Magnetic cores; Training; Magnetic flux; Core loss; Complexity theory; Magnetic materials; Vectors; Saturation magnetization; deep learning; hysteresis loop; power magnetics; vision Transformer (ViT)
DOI
10.1109/TMAG.2025.3527486
CLC Classification
TM [Electrical Technology]; TN [Electronic and Communication Technology]
Discipline Codes
0808; 0809
Abstract
Neural network-based methods for modeling magnetic materials enable estimation of the hysteresis B-H loop and core loss across a wide operating range. Transformers are neural networks widely used for sequence-to-sequence tasks, but the classical Transformer modeling method suffers from high per-layer complexity and long recurrent inference time on long sequences. While down-sampling can mitigate these issues, it often sacrifices modeling accuracy. In this study, we propose MAG-Vision, which employs a vision Transformer (ViT) as the backbone for magnetic material modeling, shortening waveform sequences with minimal loss of information. We trained the network on the open-source magnetic core loss dataset MagNet. Experimental results demonstrate that MAG-Vision estimates the hysteresis B-H loop and magnetic core loss well; the average relative error of core loss is below 2% for most materials. Comparison experiments against different network structures validate its advantages in accuracy, training speed, and inference time.
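
The abstract describes the mechanism only at a high level, so the following is a minimal sketch (not the authors' released implementation) of the general ViT-style idea it points to: a sampled excitation waveform is cut into fixed-length patches, each patch is linearly embedded into one token, and a standard Transformer encoder attends over the much shorter token sequence in a single forward pass. The class name WaveformViT, the two output heads (loop_head for the B waveform, loss_head for a scalar core-loss estimate), and all hyperparameters (patch length, model width, head and layer counts) are illustrative assumptions, not values taken from the paper.

    # Minimal sketch of a ViT-style backbone over a 1-D waveform:
    # patchify -> embed -> self-attend over the shortened token sequence.
    # Hyperparameters and output heads are assumptions for illustration.
    import torch
    import torch.nn as nn

    class WaveformViT(nn.Module):
        def __init__(self, seq_len=1024, patch_len=16, d_model=128,
                     n_heads=4, n_layers=4):
            super().__init__()
            assert seq_len % patch_len == 0, "sequence must split into whole patches"
            n_patches = seq_len // patch_len      # e.g. 1024 / 16 = 64 tokens
            # Linear patch embedding: patch_len raw samples -> one d_model token.
            self.embed = nn.Linear(patch_len, d_model)
            # Learned positional embedding, one vector per patch position.
            self.pos = nn.Parameter(torch.zeros(1, n_patches, d_model))
            layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                               batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
            # Hypothetical heads: per-patch B samples and a scalar core loss.
            self.loop_head = nn.Linear(d_model, patch_len)
            self.loss_head = nn.Linear(d_model, 1)

        def forward(self, h_wave):                    # h_wave: (batch, seq_len)
            b, t = h_wave.shape
            patches = h_wave.view(b, -1, self.embed.in_features)
            tokens = self.embed(patches) + self.pos   # (batch, n_patches, d_model)
            z = self.encoder(tokens)                  # attention over n_patches tokens
            b_wave = self.loop_head(z).reshape(b, t)  # reconstructed B waveform
            core_loss = self.loss_head(z.mean(dim=1)) # pooled scalar estimate
            return b_wave, core_loss

    model = WaveformViT()
    b_hat, p_hat = model(torch.randn(8, 1024))        # 8 synthetic H waveforms
    print(b_hat.shape, p_hat.shape)                   # (8, 1024) and (8, 1)

Under these assumed settings, self-attention operates over 64 tokens instead of 1,024 raw samples, reducing the quadratic attention cost by roughly a factor of 256 while avoiding the step-by-step decoding that makes recurrent or autoregressive Transformer inference slow on long sequences.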
Pages: 6