Cross-modal interaction between visual and olfactory learning in Apis cerana

Cited by: 0
Authors
Li-Zhen Zhang
Shao-Wu Zhang
Zi-Long Wang
Wei-Yu Yan
Zhi-Jiang Zeng
Affiliations
[1] Jiangxi Agricultural University, Honeybee Research Institute
[2] Australian National University, Research School of Biology
Keywords
Cross-modal learning; Visual stimuli; Olfactory stimuli; Real-time RT-PCR
DOI: not available
Abstract
The power of the small honeybee brain in carrying out behavioral and cognitive tasks has repeatedly been shown to be highly impressive. The present study investigates, for the first time, the cross-modal interaction between visual and olfactory learning in Apis cerana. To explore the role and molecular mechanisms of cross-modal learning in A. cerana, honeybees were trained and tested in a modified Y-maze with seven visual and five olfactory stimuli, yielding a robust visual threshold for a black/white grating (period of 2.8°–3.8°) and a relatively robust olfactory threshold (concentration of 50–25 %). Meanwhile, the expression levels of five genes related to learning and memory (AcCREB, Acdop1, Acdop2, Acdop3, Actyr1) were analyzed under different training conditions by real-time RT-PCR. The experimental results indicate that A. cerana can exhibit cross-modal interactions between visual and olfactory learning by reducing the threshold level of the conditioning stimuli, and that these genes may play important roles in the learning process of honeybees.
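The abstract does not state how the real-time RT-PCR readings were converted into expression levels; a common approach for this kind of relative quantification is the 2^-ΔΔCt method. The sketch below illustrates that calculation only — the Ct values, the single reference gene, and the treated/control grouping are illustrative assumptions, not data from the paper.

```python
def relative_expression(ct_target_treated, ct_ref_treated,
                        ct_target_control, ct_ref_control):
    """Relative expression by the 2^-ddCt method.

    Ct values are PCR cycle-threshold readings; subtracting the
    reference-gene Ct normalizes for the amount of input cDNA,
    and the treated-vs-control difference gives the fold change.
    """
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control
    return 2.0 ** (-dd_ct)

# Hypothetical Ct values (not from the paper):
fold_change = relative_expression(
    ct_target_treated=24.0, ct_ref_treated=18.0,   # trained bees
    ct_target_control=26.0, ct_ref_control=18.0,   # naive controls
)
print(fold_change)  # 4.0: target gene up-regulated ~4-fold after training
```

A lower Ct means the transcript crossed the detection threshold earlier, i.e. was more abundant, which is why the exponent is negated.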
Pages: 899–909 (10 pages)
Related papers (50 in total)
  • [21] HCMSL: Hybrid Cross-modal Similarity Learning for Cross-modal Retrieval. Zhang, Chengyuan; Song, Jiayu; Zhu, Xiaofeng; Zhu, Lei; Zhang, Shichao. ACM Transactions on Multimedia Computing, Communications and Applications, 2021, 17(01).
  • [22] Natural cross-modal mappings between visual and auditory features. Evans, Karla K.; Treisman, Anne. Journal of Vision, 2010, 10(01): 1–12.
  • [23] Learning cross-modal interaction for RGB-T tracking. Xu, Chunyan; Cui, Zhen; Wang, Chaoqun; Zhou, Chuanwei; Yang, Jian. Science China Information Sciences, 2023, 66(01): 320–321.
  • [24] Cross-modal interactions between visual brightness and image of consonants. Hirata, Sachiko; Ukita, Jun. International Journal of Psychology, 2008, 43(3-4): 456.
  • [25] An incremental cross-modal transfer learning method for gesture interaction. Zhong, Junpei; Li, Jie; Lotfi, Ahmad; Liang, Peidong; Yang, Chenguang. Robotics and Autonomous Systems, 2022, 155.
  • [28] Hybrid cross-modal interaction learning for multimodal sentiment analysis. Fu, Yanping; Zhang, Zhiyuan; Yang, Ruidi; Yao, Cuiyou. Neurocomputing, 2024, 571.
  • [29] Infant cross-modal learning. Chow, Hiu Mei; Tsui, Angeline Sin-Mei; Ma, Yuen Ki; Yat, Mei Ying; Tseng, Chia-huei. i-Perception, 2014, 5(04): 463.
  • [30] Effects of Sequential Sensory Cues on Food Taste Perception: Cross-Modal Interplay Between Visual and Olfactory Stimuli. Biswas, Dipayan; Labrecque, Lauren I.; Lehmann, Donald R. Journal of Consumer Psychology, 2021, 31(04): 746–764.