Markovian Sliced Wasserstein Distances: Beyond Independent Projections

Cited by: 0
Authors
Nguyen, Khai [1]
Ren, Tongzheng [2]
Ho, Nhat [1]
Affiliations
[1] Univ Texas Austin, Dept Stat & Data Sci, Austin, TX 78712 USA
[2] Univ Texas Austin, Dept Comp Sci, Austin, TX 78712 USA
Keywords
DOI: none available
CLC number
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The sliced Wasserstein (SW) distance suffers from redundant projections because its projecting directions are drawn independently and uniformly at random. To partially overcome this issue, the max-K sliced Wasserstein (Max-K-SW) distance (K >= 1) seeks the K most discriminative orthogonal projecting directions. Although Max-K-SW can reduce the number of projections, its metricity cannot be guaranteed in practice due to the non-optimality of the optimization; moreover, the orthogonality constraint is computationally expensive and might not be effective. To address these problems, we introduce a new family of SW distances, named Markovian sliced Wasserstein (MSW) distances, which imposes a first-order Markov structure on the projecting directions. We discuss various members of the MSW family by specifying the Markov structure, including the prior distribution, the transition distribution, and the burning and thinning technique. Moreover, we investigate the theoretical properties of MSW, including topological properties (metricity, weak convergence, and connections to other distances), statistical properties (sample complexity and Monte Carlo estimation error), and computational properties (computational complexity and memory complexity). Finally, we compare MSW distances with previous SW variants in applications such as gradient flows, color transfer, and deep generative modeling to demonstrate the favorable performance of MSW.
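To make the contrast in the abstract concrete, the following is a minimal sketch of the standard SW distance (independent uniform directions) next to an illustrative Markovian variant, where each projecting direction is a perturbation of the previous one. The Gaussian random-walk transition and its scale `sigma` are assumptions for illustration, not the specific prior/transition distributions defined in the paper.

```python
import numpy as np

def wasserstein_1d(x_proj, y_proj, p=2):
    # Closed-form 1-D Wasserstein-p for equal-size empirical measures:
    # sort both projected samples and average the p-th power of gaps.
    xs, ys = np.sort(x_proj), np.sort(y_proj)
    return np.mean(np.abs(xs - ys) ** p)

def sliced_wasserstein(X, Y, L=100, p=2, rng=None):
    # Standard SW: L independent directions, uniform on the unit sphere
    # (normalized standard Gaussians are uniform on the sphere).
    rng = np.random.default_rng(rng)
    d = X.shape[1]
    total = 0.0
    for _ in range(L):
        theta = rng.standard_normal(d)
        theta /= np.linalg.norm(theta)
        total += wasserstein_1d(X @ theta, Y @ theta, p)
    return (total / L) ** (1 / p)

def markovian_sliced_wasserstein(X, Y, L=100, p=2, sigma=0.1, rng=None):
    # Illustrative first-order Markov chain on directions: the next direction
    # is the previous one plus Gaussian noise, renormalized to the sphere.
    # sigma and this transition kernel are hypothetical choices for the sketch.
    rng = np.random.default_rng(rng)
    d = X.shape[1]
    theta = rng.standard_normal(d)   # prior: uniform on the sphere
    theta /= np.linalg.norm(theta)
    total = 0.0
    for _ in range(L):
        total += wasserstein_1d(X @ theta, Y @ theta, p)
        theta = theta + sigma * rng.standard_normal(d)
        theta /= np.linalg.norm(theta)
    return (total / L) ** (1 / p)
```

Both estimators cost O(L * n log n) for n points per measure; the Markovian chain only changes how directions are sampled, so it adds no asymptotic overhead.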
Pages: 30
Related Papers
50 items in total
  • [41] On adaptive confidence sets for the Wasserstein distances
    Deo, Neil
    Randrianarisoa, Thibault
    BERNOULLI, 2023, 29 (03) : 2119 - 2141
  • [42] Minimax statistical learning with Wasserstein distances
    Lee, Jaeho
    Raginsky, Maxim
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 31 (NIPS 2018), 2018, 31
  • [43] Wasserstein Distances for Stereo Disparity Estimation
    Garg, Divyansh
    Wang, Yan
    Hariharan, Bharath
    Campbell, Mark
    Weinberger, Kilian Q.
    Chao, Wei-Lun
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 33, NEURIPS 2020, 2020, 33
  • [44] Sliced Wasserstein Distance for Learning Gaussian Mixture Models
    Kolouri, Soheil
    Rohde, Gustavo K.
    Hoffmann, Heiko
    2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2018, : 3427 - 3436
  • [45] Fixed Support Tree-Sliced Wasserstein Barycenter
    Takezawa, Yuki
    Sato, Ryoma
    Kozareva, Zornitsa
    Ravi, Sujith
    Yamada, Makoto
    INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 151, 2022, 151
  • [46] APPROXIMATE BAYESIAN COMPUTATION WITH THE SLICED-WASSERSTEIN DISTANCE
    Nadjahi, Kimia
    De Bortoli, Valentin
    Durmus, Alain
    Badeau, Roland
    Simsekli, Umut
    2020 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, 2020, : 5470 - 5474
  • [47] Amortized Projection Optimization for Sliced Wasserstein Generative Models
    Nguyen, Khai
    Ho, Nhat
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [48] On efficient multilevel clustering via wasserstein distances
    Huynh, Viet
    Ho, Nhat
    Dam, Nhan
    Nguyen, XuanLong
    Yurochkin, Mikhail
    Bui, Hung
    Phung, Dinh
    JOURNAL OF MACHINE LEARNING RESEARCH, 2021, 22
  • [49] Wasserstein Distances, Geodesics and Barycenters of Merge Trees
    Pont, Mathieu
    Vidal, Jules
    Delon, Julie
    Tierny, Julien
    IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS, 2022, 28 (01) : 291 - 301
  • [50] Inference for empirical Wasserstein distances on finite spaces
    Sommerfeld, Max
    Munk, Axel
    JOURNAL OF THE ROYAL STATISTICAL SOCIETY SERIES B-STATISTICAL METHODOLOGY, 2018, 80 (01) : 219 - 238