An automatic generation method of cross-modal fuzzy creativity

Cited: 2
Authors
Zhang, Fuquan [1 ]
Wang, Yiou [2 ]
Wu, Chensheng [2 ]
Affiliations
[1] Minjiang Univ, Fujian Prov Key Lab Informat Proc & Intelligent C, Fuzhou, Peoples R China
[2] Beijing Inst Sci & Technol Informat, Beijing, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Generation of fuzzy creativity; cross-modal; graph neural network; creative works;
DOI
10.3233/JIFS-179657
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Digital creativity is creative expression arising from the combination of cultural creativity and information technology. To overcome the difficulty of creative generation under fuzzy and uncertain ideas, an automatic generation method of cross-modal fuzzy creativity (AGMCFC) is proposed. In this method, fuzzy creative data sets and a learning retrieval network are constructed so that the original creative data can be extracted effectively, and the logical correlations between creative objects are acquired dynamically with a graph neural network. Creative objects and creative styles are generated using generative adversarial network technology and style transfer technology, respectively. Then the projectiles, boundary markers, and location words of the creative scene objects are generated by analyzing the related attributes of each entity. After the layout is adjusted, creative works are generated automatically, and a fuzzy creative generating environment is implemented. Experimental results show that the number of items screened by the AGMCFC method is about twice that of the manual method, and its accuracy rate is also improved over the manual method. The AGMCFC method therefore performs well at automatically generating creative works from fuzzy ideas.
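The abstract states that logical correlations between creative objects are acquired dynamically with a graph neural network. As a minimal illustrative sketch (not the authors' code, and with made-up toy data), the core operation such a network repeats is neighbor aggregation: each object's feature vector is mixed with those of the objects it is connected to, so correlated objects drift toward similar representations.

```python
# Illustrative sketch: one round of mean neighbor aggregation over a
# small graph of "creative objects", using plain Python lists only.

def aggregate(features, edges):
    """Average each node's feature vector with its neighbors' vectors."""
    n = len(features)
    dim = len(features[0])
    neighbors = {i: [] for i in range(n)}
    for a, b in edges:          # undirected edges
        neighbors[a].append(b)
        neighbors[b].append(a)
    out = []
    for i in range(n):
        group = [features[i]] + [features[j] for j in neighbors[i]]
        out.append([sum(v[d] for v in group) / len(group) for d in range(dim)])
    return out

# Toy example: three objects with 2-D features, chained 0-1-2.
feats = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
edges = [(0, 1), (1, 2)]
print(aggregate(feats, edges))
```

A trained GNN would interleave such aggregation steps with learned weight matrices and nonlinearities; this sketch only shows the message-passing structure that lets correlations propagate along edges.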
Pages: 5685-5696
Page count: 12
Related Papers
(50 in total)
  • [31] Automatic semantic modeling of structured data sources with cross-modal retrieval
    Xu, Ruiqing
    Mayer, Wolfgang
    Chu, Hailong
    Zhang, Yitao
    Zhang, Hong-Yu
    Wang, Yulong
    Liu, Youfa
    Feng, Zaiwen
    [J]. PATTERN RECOGNITION LETTERS, 2024, 177 : 7 - 14
  • [32] An Automatic Depression Detection Method with Cross-Modal Fusion Network and Multi-head Attention Mechanism
    Li, Yutong
    Wang, Juan
    Liu, Zhenyu
    Zhou, Li
    Zhang, Haibo
    Tang, Cheng
    Hu, Xiping
    Hu, Bin
    [J]. PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2023, PT V, 2024, 14429 : 252 - 264
  • [33] A Cross-Modal Generative Adversarial Network for Scenarios Generation of Renewable Energy
    Kang, Mingyu
    Zhu, Ran
    Chen, Duxin
    Li, Chaojie
    Gu, Wei
    Qian, Xusheng
    Yu, Wenwu
    [J]. IEEE TRANSACTIONS ON POWER SYSTEMS, 2024, 39 (02) : 2630 - 2640
  • [34] Conditional Sentence Generation and Cross-Modal Reranking for Sign Language Translation
    Zhao, Jian
    Qi, Weizhen
    Zhou, Wengang
    Duan, Nan
    Zhou, Ming
    Li, Houqiang
    [J]. IEEE TRANSACTIONS ON MULTIMEDIA, 2022, 24 : 2662 - 2672
  • [35] Infant cross-modal learning
    Chow, Hiu Mei
    Tsui, Angeline Sin-Mei
    Ma, Yuen Ki
    Yat, Mei Ying
    Tseng, Chia-huei
    [J]. I-PERCEPTION, 2014, 5 (04): 463 - 463
  • [36] CROSS-MODAL JUDGMENTS OF LENGTH
    DAVIDON, RS
    MATHER, JH
    [J]. AMERICAN JOURNAL OF PSYCHOLOGY, 1966, 79 (03): 409 - &
  • [37] CROSS-MODAL PERCEPTION IN APES
    DAVENPORT, RK
    [J]. ANNALS OF THE NEW YORK ACADEMY OF SCIENCES, 1976, 280 (OCT28) : 143 - 149
  • [38] Adversarial Cross-Modal Retrieval
    Wang, Bokun
    Yang, Yang
    Xu, Xing
    Hanjalic, Alan
    Shen, Heng Tao
    [J]. PROCEEDINGS OF THE 2017 ACM MULTIMEDIA CONFERENCE (MM'17), 2017: 154 - 162
  • [39] CROSS-MODAL NEGATIVE PRIMING
    YEE, PL
    SNYDER, H
    [J]. BULLETIN OF THE PSYCHONOMIC SOCIETY, 1992, 30 (06) : 476 - 476
  • [40] A General Cross-Modal Correlation Learning Method for Remote Sensing
    Lü Y.
    Xiong W.
    Zhang X.
    [J]. Wuhan Daxue Xuebao (Xinxi Kexue Ban)/Geomatics and Information Science of Wuhan University, 2022, 47 (11): 1887 - 1895