Joint Gated Co-Attention Based Multi-Modal Networks for Subregion House Price Prediction

Cited by: 3
Authors
Wang, Pengkun [1]
Ge, Chuancai [1]
Zhou, Zhengyang [1]
Wang, Xu [1]
Li, Yuantao [1]
Wang, Yang [1]
Affiliations
[1] University of Science and Technology of China, Hefei, China
Keywords
Predictive models; Forecasting; Data models; Spatiotemporal phenomena; Analytical models; Correlation; Learning systems; Subregion house price prediction; Multi-modal networks; Heterogeneous data fusion; Conditional autoregressive model; Dynamics
DOI
10.1109/TKDE.2021.3093881
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Urban housing price is widely accepted as an economic indicator of both business and research interest in urban computing. However, due to the complex nature of its influencing factors and the sparsity of transaction records, building such a predictive model remains challenging. To address these challenges, in this work we study an effective and fine-grained model for urban subregion housing price prediction. Compared to existing works, our proposal improves the forecasting granularity from city level to mile level, using only publicly released transaction data. We employ a feature selection mechanism to select the most relevant features. We then propose an integrated model, JGC_MMN (Joint Gated Co-attention Based Multi-modal Network), which learns features at all levels and captures spatiotemporal correlations across all time stages with a modified densely connected convolutional network, taking both current conditions and future expectations into account. Next, we devise a novel JGC-based fusion method to better fuse the heterogeneous data of multi-stage models by considering their interactions in the temporal dimension. Finally, extensive empirical studies on real datasets demonstrate the effectiveness of our proposal; such fine-grained housing price forecasting has the potential to support a broad range of applications, from urban planning to housing market recommendation.
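The record gives only the abstract, so the gated co-attention fusion it describes can be illustrated but not reproduced exactly. Below is a minimal PyTorch sketch of one plausible reading: one modality cross-attends to another, and a learned sigmoid gate decides, per feature, how much cross-modal context to admit. The class name GatedCoAttention, the single-head attention layout, and the toy tensor shapes are assumptions for illustration, not the authors' JGC_MMN architecture.

```python
# Hypothetical sketch of a gated co-attention fusion block; NOT the paper's
# JGC_MMN. Assumes two modality feature sequences of shape (batch, steps, dim).
import torch
import torch.nn as nn


class GatedCoAttention(nn.Module):
    """Cross-attend modality A to modality B, then gate the fused result."""

    def __init__(self, dim: int):
        super().__init__()
        self.q = nn.Linear(dim, dim)        # queries from modality A
        self.k = nn.Linear(dim, dim)        # keys from modality B
        self.v = nn.Linear(dim, dim)        # values from modality B
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        # Scaled dot-product co-attention over the time axis: (B, T, T) scores.
        scores = self.q(a) @ self.k(b).transpose(1, 2) / (a.size(-1) ** 0.5)
        attended = torch.softmax(scores, dim=-1) @ self.v(b)  # (B, T, dim)
        # Sigmoid gate mixes cross-modal context with the original features.
        g = torch.sigmoid(self.gate(torch.cat([a, attended], dim=-1)))
        return g * attended + (1.0 - g) * a


# Toy usage: fuse spatial features with a second (e.g., temporal) modality.
fusion = GatedCoAttention(dim=64)
spatial = torch.randn(8, 12, 64)    # (batch, time steps, feature dim)
temporal = torch.randn(8, 12, 64)
fused = fusion(spatial, temporal)   # -> (8, 12, 64)
```

The gate lets the network fall back to the original modality when the cross-modal signal is uninformative, which is one common way such fusion blocks handle sparse or noisy inputs.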
Pages: 1667 - 1680
Page count: 14
Related Papers (50 in total)
  • [1] A co-attention based multi-modal fusion network for review helpfulness prediction
    Ren, Gang
    Diao, Lei
    Guo, Fanjia
    Hong, Taeho
    [J]. INFORMATION PROCESSING & MANAGEMENT, 2024, 61 (01)
  • [2] Multi-modal co-attention relation networks for visual question answering
    Guo, Zihan
    Han, Dezhi
    [J]. VISUAL COMPUTER, 2023, 39 (11): 5783 - 5795
  • [3] Multi-Modal Co-Attention Capsule Network for Fake News Detection
    Yin, Chunyan
    Chen, Yongheng
    [J]. OPTICAL MEMORY AND NEURAL NETWORKS, 2024, 33 (01): 13 - 27
  • [4] CoaDTI: multi-modal co-attention based framework for drug-target interaction annotation
    Huang, Lei
    Lin, Jiecong
    Liu, Rui
    Zheng, Zetian
    Meng, Lingkuan
    Chen, Xingjian
    Li, Xiangtao
    Wong, Ka-Chun
    [J]. BRIEFINGS IN BIOINFORMATICS, 2022, 23 (06)
  • [5] Multi-modal Factorized Bilinear Pooling with Co-Attention Learning for Visual Question Answering
    Yu, Zhou
    Yu, Jun
    Fan, Jianping
    Tao, Dacheng
    [J]. 2017 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV), 2017: 1839 - 1848
  • [6] Multi-Pointer Co-Attention Networks for Recommendation
    Tay, Yi
    Luu, Anh Tuan
    Hui, Siu Cheung
    [J]. KDD'18: PROCEEDINGS OF THE 24TH ACM SIGKDD INTERNATIONAL CONFERENCE ON KNOWLEDGE DISCOVERY & DATA MINING, 2018: 2309 - 2318
  • [7] Automatic depression prediction via cross-modal attention-based multi-modal fusion in social networks
    Wang, Lidong
    Zhang, Yin
    Zhou, Bin
    Cao, Shihua
    Hu, Keyong
    Tan, Yunfei
    [J]. COMPUTERS & ELECTRICAL ENGINEERING, 2024, 118