Exploring emotional aspects of travel concepts via travel photos based on contrastive language-image pretraining

Cited by: 0
Authors
Vu, Huy Quan [1 ]
Song, Baobao [2 ]
Li, Gang [3 ]
Law, Rob [4 ]
Affiliations
[1] Deakin Univ, Deakin Business Sch, Burwood, Australia
[2] Univ Technol Sydney, Fac Engn & IT, Sydney, Australia
[3] Deakin Univ, Sch Informat Technol, Burwood, Australia
[4] Univ Macau, Fac Business Adm, Asia Pacific Acad Econ & Management, Taipa, Macau, Peoples R China
Keywords
CLIP; OpenAI; Emotion wheel; Travel concept; Zero-shot classifier; Learning model; Destination; Tourism; Experiences
DOI
10.1016/j.tourman.2024.105117
Chinese Library Classification (CLC)
X [Environmental Science, Safety Science]
Subject Classification Code
08; 0830
Abstract
Understanding travel concepts and their associated emotions is beneficial for promoting tourism destinations and attracting travelers who seek specific emotional experiences. However, few studies in the tourism literature have explored the emotional aspects of travel concepts, likely because the abstract nature of emotion makes emotions challenging to capture and analyze. To address this gap, this paper introduces a novel method for exploring the emotions associated with travel concepts through travel photos. By leveraging a state-of-the-art computer vision technique, Contrastive Language-Image Pretraining (CLIP), the method can uncover various travel concepts and their associated emotions. We demonstrate its effectiveness through a case study of Australia, analyzing 436,177 photos shared by 9,457 users. The method and its findings offer researchers and tourism managers insights into the emotional aspects of travel concepts for use in tourism marketing and other applications.
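
The abstract describes zero-shot classification of travel photos against textual concept and emotion labels via CLIP. The following Python sketch is a rough illustration of that idea, not the authors' released pipeline: it assumes the Hugging Face Transformers API with the public openai/clip-vit-base-patch32 checkpoint, and the concept and emotion prompts are illustrative placeholders rather than the paper's label sets (which are derived from an emotion wheel).

    # Minimal CLIP zero-shot classification sketch; assumes Hugging Face
    # Transformers and the public "openai/clip-vit-base-patch32" checkpoint.
    # Prompts below are illustrative, not the label sets from the paper.
    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    MODEL_ID = "openai/clip-vit-base-patch32"
    model = CLIPModel.from_pretrained(MODEL_ID)
    processor = CLIPProcessor.from_pretrained(MODEL_ID)

    travel_concepts = [
        "a photo of a beach",
        "a photo of wildlife",
        "a photo of a city skyline",
    ]
    emotions = [
        "a photo that evokes joy",
        "a photo that evokes serenity",
        "a photo that evokes awe",
    ]

    def zero_shot_scores(image, prompts):
        """Return a probability distribution over `prompts` for one image."""
        inputs = processor(text=prompts, images=image,
                           return_tensors="pt", padding=True)
        outputs = model(**inputs)
        # logits_per_image holds scaled cosine similarities between the image
        # embedding and each text embedding; softmax yields probabilities.
        return outputs.logits_per_image.softmax(dim=-1).squeeze(0).tolist()

    image = Image.open("travel_photo.jpg")  # hypothetical input photo
    print(dict(zip(travel_concepts, zero_shot_scores(image, travel_concepts))))
    print(dict(zip(emotions, zero_shot_scores(image, emotions))))

In a full pipeline of this kind, the prompt lists would be populated from a travel-concept vocabulary and emotion-wheel categories, and per-photo scores would be aggregated across each user's collection.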
Pages: 16