Owens Luis - A Context-aware Multi-modal Smart Office Chair in an Ambient Environment

Citations: 0
Authors
Kiyokawa, Kiyoshi [1 ,2 ]
Hatanaka, Masahide [2 ]
Hosoda, Kazufumi [2 ]
Okada, Masashi [2 ]
Shigeta, Hironori [2 ]
Ishihara, Yasunori [2 ]
Ooshita, Fukuhito [2 ]
Kakugawa, Hirotsugu [2 ]
Kurihara, Satoshi [2 ,3 ]
Moriyama, Koichi [2 ,3 ]
Affiliations
[1] Osaka Univ, Cybermedia Ctr, Machikaneyama Cho 1-32, Toyonaka, Osaka 5600043, Japan
[2] Osaka Univ, Grad Sch Informat Sci & Technol, Suita, Osaka 5650871, Japan
[3] Osaka Univ, Inst Sci & Ind Res, Ibaraki, Osaka 5670047, Japan
Keywords
Context awareness; Multi-modal displays; Sleepiness recognition
DOI
Not available
Chinese Library Classification
TP3 (Computing technology; computer technology)
Discipline Code
0812
Abstract
This paper introduces a smart office chair, Owens Luis, whose name, when pronounced in Japanese, means "an encouraging chair." For most people, the office is where they spend the longest portion of their waking hours. To improve quality of life (QoL) in the office, Owens Luis monitors an office worker's mental and physiological states, such as sleepiness and concentration, and controls the working environment through multi-modal displays including a motion chair, a variable color-temperature LED light, and a hypersonic directional speaker.
Pages: 4
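The abstract describes a sense-then-actuate loop: the chair estimates the worker's state (sleepiness, concentration) from sensors and drives the motion chair, tunable LED light, and directional speaker in response. The following is a minimal sketch of such a loop; the paper does not publish code, and every class, method, and threshold here (WorkerState, chair.nudge, the 0.7 sleepiness cutoff, etc.) is a hypothetical placeholder, not the authors' implementation.

    # Illustrative sketch only; all device interfaces and thresholds are assumptions.
    import time
    from dataclasses import dataclass

    @dataclass
    class WorkerState:
        sleepiness: float     # 0.0 (alert) .. 1.0 (very sleepy)
        concentration: float  # 0.0 (distracted) .. 1.0 (focused)

    def estimate_state(frame: dict) -> WorkerState:
        # Stand-in for the recognition step; a real system would fuse
        # seat-pressure, posture, and facial cues into these two scores.
        return WorkerState(sleepiness=frame.get("eye_closure", 0.0),
                           concentration=frame.get("typing_rate", 0.0))

    def actuate(state: WorkerState, chair, light, speaker) -> None:
        # Map the estimated state onto the three ambient displays named in the
        # abstract: motion chair, variable color-temperature LED, directional speaker.
        if state.sleepiness > 0.7:
            chair.nudge()                         # gentle motion to rouse the worker
            light.set_color_temperature(6500)     # cooler, more alerting light (K)
            speaker.play("stretch_reminder")      # audible only to the seated worker
        elif state.concentration > 0.8:
            light.set_color_temperature(4000)     # neutral light; avoid interruptions
        else:
            light.set_color_temperature(3000)     # warmer light for relaxed work

    def control_loop(sensors, chair, light, speaker, period_s: float = 5.0) -> None:
        while True:
            actuate(estimate_state(sensors.read()), chair, light, speaker)
            time.sleep(period_s)

The sketch only illustrates the division of labor implied by the abstract (recognition of worker state feeding ambient actuators); the actual sensing modalities and control policy are those of the paper, not this code.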