Toward Using Multi-Modal Machine Learning for User Behavior Prediction in Simulated Smart Home for Extended Reality

Cited by: 0
Authors
Yao, Powen [1 ]
Hou, Yu [1 ]
He, Yuan [1 ]
Cheng, Da [1 ]
Hu, Huanpu [1 ]
Zyda, Michael [1 ]
Affiliation
[1] University of Southern California, Los Angeles, CA 90007, USA
Keywords
Human-centered computing; Human-computer interaction (HCI); Interaction paradigms; Virtual reality
DOI
10.1109/VRW55335.2022.00195
CLC Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
In this work, we propose a multi-modal approach to manipulating smart home devices in a smart home environment simulated in virtual reality (VR). We determine the user's target device and desired action from their utterance, from spatial information (gestures, positions, etc.), or from a combination of the two. Since the information contained in the user's utterance and the spatial information can be disjoint or complementary, we process the two sources in parallel using an array of machine learning models. We use ensemble modeling to aggregate the results of these models and improve the quality of the final prediction. We present our preliminary architecture, models, and findings.
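The aggregation step described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; it assumes a hypothetical weighted soft-voting scheme in which each modality's model emits a probability distribution over candidate device actions, and the ensemble combines them before picking the argmax:

```python
def ensemble_predict(utterance_probs, spatial_probs, weights=(0.5, 0.5)):
    """Weighted soft-voting over per-action probabilities.

    utterance_probs / spatial_probs: dicts mapping an action label
    (hypothetical names) to that modality model's confidence.
    Actions missing from one modality contribute 0 from that side,
    so disjoint information from the two sources still combines.
    """
    w_u, w_s = weights
    combined = {}
    for action in set(utterance_probs) | set(spatial_probs):
        combined[action] = (w_u * utterance_probs.get(action, 0.0)
                            + w_s * spatial_probs.get(action, 0.0))
    # Final prediction is the action with the highest combined score.
    return max(combined, key=combined.get)

# Example: the utterance model favors "lamp_on"; the spatial model
# (gesture/position) is ambiguous between the lamp and the TV.
utterance = {"lamp_on": 0.7, "tv_on": 0.2, "fan_on": 0.1}
spatial = {"lamp_on": 0.4, "tv_on": 0.4, "fan_on": 0.2}
print(ensemble_predict(utterance, spatial))  # -> lamp_on
```

Soft voting is only one aggregation choice; the paper does not specify whether its ensemble uses probability averaging, majority voting, or a learned combiner.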
Pages: 679-680 (2 pages)