Markovian Sliced Wasserstein Distances: Beyond Independent Projections

Cited by: 0
Authors
Nguyen, Khai [1]
Ren, Tongzheng [2]
Ho, Nhat [1]
Affiliations
[1] Univ Texas Austin, Dept Stat & Data Sci, Austin, TX 78712 USA
[2] Univ Texas Austin, Dept Comp Sci, Austin, TX 78712 USA
Keywords
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The sliced Wasserstein (SW) distance suffers from redundant projections because its projecting directions are drawn independently and uniformly at random. To partially overcome this issue, the max-K sliced Wasserstein (Max-K-SW) distance (K >= 1) seeks the K most discriminative orthogonal projecting directions. Although it can reduce the number of projections, the metricity of Max-K-SW cannot be guaranteed in practice because the underlying optimization may not reach an optimum; moreover, the orthogonality constraint is computationally expensive and may not be effective. To address these problems, we introduce a new family of SW distances, named Markovian sliced Wasserstein (MSW) distances, which impose a first-order Markov structure on the projecting directions. We discuss various members of the MSW family obtained by specifying the Markov structure, including the prior distribution, the transition distribution, and the burn-in and thinning technique. Moreover, we investigate the theoretical properties of MSW, including topological properties (metricity, weak convergence, and connections to other distances), statistical properties (sample complexity and Monte Carlo estimation error), and computational properties (time and memory complexity). Finally, we compare MSW distances with previous SW variants in applications such as gradient flows, color transfer, and deep generative modeling to demonstrate the favorable performance of MSW.
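To make the construction described in the abstract concrete, below is a minimal NumPy sketch of a Monte Carlo estimate of an MSW-style distance: a uniform prior on the sphere, a first-order Markov chain of projecting directions, burn-in and thinning of that chain, and one-dimensional Wasserstein computations along each retained direction. The function names (msw_distance, markov_directions), the parameter kappa, and the Gaussian random-walk transition on the sphere are illustrative assumptions, not the specific prior or transition distributions studied in the paper.

import numpy as np

def rand_unit_vector(d, rng):
    # Uniform direction on the unit sphere S^{d-1}.
    v = rng.standard_normal(d)
    return v / np.linalg.norm(v)

def markov_directions(d, n_projections, kappa, rng):
    # First-order Markov chain of directions: uniform prior, then a
    # random-walk transition that perturbs the previous direction with
    # Gaussian noise of scale 1/kappa and re-normalizes (an illustrative
    # stand-in for the transitions discussed in the paper).
    thetas = [rand_unit_vector(d, rng)]
    for _ in range(n_projections - 1):
        step = thetas[-1] + rng.standard_normal(d) / kappa
        thetas.append(step / np.linalg.norm(step))
    return np.stack(thetas)  # shape (n_projections, d)

def wasserstein_1d(x, y, p=2):
    # p-Wasserstein distance between two equal-size 1-D empirical measures:
    # sort both samples and match order statistics.
    return np.mean(np.abs(np.sort(x) - np.sort(y)) ** p) ** (1.0 / p)

def msw_distance(X, Y, n_projections=100, kappa=10.0, burn_in=10, thin=2, p=2, seed=0):
    # Monte Carlo estimate of an MSW-style distance between empirical
    # measures X, Y of shape (n, d), with burn-in and thinning applied
    # to the chain of projecting directions.
    rng = np.random.default_rng(seed)
    thetas = markov_directions(X.shape[1], n_projections, kappa, rng)
    thetas = thetas[burn_in::thin]  # burn-in and thinning
    dists = [wasserstein_1d(X @ th, Y @ th, p) ** p for th in thetas]
    return np.mean(dists) ** (1.0 / p)

# Toy usage: two Gaussian point clouds in R^10.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 10))
Y = rng.standard_normal((200, 10)) + 1.0
print(msw_distance(X, Y))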
Pages: 30
Related Papers
50 records in total
  • [31] Orthogonal Estimation of Wasserstein Distances
    Rowland, Mark
    Hron, Jiri
    Tang, Yunhao
    Choromanski, Krzysztof
    Sarlos, Tamas
    Weller, Adrian
    22ND INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 89, 2019, 89 : 186 - 195
  • [32] Sliced Wasserstein Discrepancy for Unsupervised Domain Adaptation
    Lee, Chen-Yu
    Batra, Tanmay
    Baig, Mohammad Haris
    Ulbricht, Daniel
    2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019), 2019, : 10277 - 10287
  • [33] A Sliced Wasserstein Loss for Neural Texture Synthesis
    Heitz, Eric
    Vanhoey, Kenneth
    Chambon, Thomas
    Belcour, Laurent
    2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021, 2021, : 9407 - 9415
  • [34] Sliced Wasserstein Distance for Neural Style Transfer
    Li, Jie
    Xu, Dan
    Yao, Shaowen
    COMPUTERS & GRAPHICS-UK, 2022, 102 : 89 - 98
  • [35] Minimax confidence intervals for the Sliced Wasserstein distance
    Manole, Tudor
    Balakrishnan, Sivaraman
    Wasserman, Larry
    ELECTRONIC JOURNAL OF STATISTICS, 2022, 16 (01): : 2252 - 2345
  • [36] Generative Modeling using the Sliced Wasserstein Distance
    Deshpande, Ishan
    Zhang, Ziyu
    Schwing, Alexander
    2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2018, : 3483 - 3491
  • [37] Wasserstein perturbations of Markovian transition semigroups
    Fuhrmann, Sven
    Kupper, Michael
    Nendel, Max
    ANNALES DE L INSTITUT HENRI POINCARE-PROBABILITES ET STATISTIQUES, 2023, 59 (02): : 904 - 932
  • [38] Tropical optimal transport and Wasserstein distances
    Lee W.
    Li W.
    Lin B.
    Monod A.
    Information Geometry, 2022, 5 (1) : 247 - 287
  • [39] Revisiting Sliced Wasserstein on Images: From Vectorization to Convolution
    Khai Nguyen
    Ho, Nhat
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [40] Sliced Wasserstein adversarial training for improving adversarial robustness
    Lee W.
    Lee S.
    Kim H.
    Lee J.
    Journal of Ambient Intelligence and Humanized Computing, 2024, 15 (08) : 3229 - 3242