Bag of Deep Features for Instructor Activity Recognition in Lecture Room

Cited by: 2
Authors
Nida, Nudrat [1 ]
Yousaf, Muhammad Haroon [1 ]
Irtaza, Aun [2 ]
Velastin, Sergio A. [3 ,4 ,5 ]
Affiliations
[1] Univ Engn & Technol, Dept Comp Engn, Taxila, Pakistan
[2] Univ Engn & Technol, Dept Comp Sci, Taxila, Pakistan
[3] Univ Carlos III Madrid, Dept Comp Sci, Appl Artificial Intelligence Res Grp, Madrid 28270, Spain
[4] Cortex Vis Syst Ltd, London SE1 9LQ, England
[5] Queen Mary Univ London, Sch Elect Engn & Comp Sci, London E1 4NS, England
Keywords
Human activity recognition; Instructor activity recognition; Motion templates; Academic quality assurance
DOI
10.1007/978-3-030-05716-9_39
CLC classification
TP [Automation Technology; Computer Technology]
Discipline classification code
0812
Abstract
This research explores contextual visual information in the lecture room to help an instructor assess the effectiveness of a delivered lecture. The objective is to enable a self-evaluation mechanism through which instructors can improve lecture productivity by understanding their own activities. A teacher's effectiveness has a remarkable impact on students' performance, helping them succeed academically and professionally; the process of lecture evaluation can therefore contribute significantly to improving academic quality and governance. In this paper, we propose a vision-based framework that recognizes instructor activities for self-evaluation of delivered lectures. The proposed approach computes motion templates of instructor activities and describes them through a Bag-of-Deep-Features (BoDF) representation: deep spatio-temporal features extracted from the motion templates are compiled into a visual vocabulary, which is quantized to optimize the learning model. A Support Vector Machine classifier is trained on these representations to predict instructor activities. We evaluated the proposed scheme on a self-captured lecture-room dataset, IAVID-1. Eight instructor activities (pointing towards a student, pointing towards the board or screen, idle, interacting, sitting, walking, using a mobile phone and using a laptop) are recognized with 85.41% accuracy. As a result, the proposed framework enables instructor activity recognition without human intervention.
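The bag-of-features pipeline sketched in the abstract (quantize per-video features against a visual vocabulary, then classify the resulting histograms) can be illustrated in a few lines. This is a minimal NumPy sketch, not the authors' implementation: the synthetic Gaussian vectors below stand in for the paper's deep spatio-temporal features, the vocabulary is built with a toy k-means, and the final SVM stage is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def build_vocabulary(features, k=8, iters=10):
    """Toy k-means: cluster feature vectors into k visual words."""
    centroids = features[rng.choice(len(features), k, replace=False)]
    for _ in range(iters):
        # assign each feature to its nearest centroid
        dists = np.linalg.norm(features[:, None] - centroids[None], axis=2)
        labels = dists.argmin(axis=1)
        # move each centroid to the mean of its assigned features
        for j in range(k):
            members = features[labels == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
    return centroids

def bodf_histogram(features, vocabulary):
    """BoDF descriptor: normalized histogram of visual-word assignments."""
    dists = np.linalg.norm(features[:, None] - vocabulary[None], axis=2)
    words = dists.argmin(axis=1)
    hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
    return hist / hist.sum()

# synthetic stand-ins for deep features from two activity classes
feats_a = rng.normal(0.0, 1.0, (200, 16))
feats_b = rng.normal(3.0, 1.0, (200, 16))

vocab = build_vocabulary(np.vstack([feats_a, feats_b]), k=8)
h_a = bodf_histogram(feats_a, vocab)  # fixed-length descriptor per video
h_b = bodf_histogram(feats_b, vocab)
```

In the paper's setting, each `h` would be the fixed-length descriptor of one motion template, fed to the SVM for activity prediction; histogram length equals vocabulary size, which is why vocabulary quantization controls model size.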
Pages: 481-492
Page count: 12
Related papers
(50 records)
  • [1] Instructor Activity Recognition through Deep Spatiotemporal Features and Feedforward Extreme Learning Machines
    Nida, Nudrat
    Yousaf, Muhammad Haroon
    Irtaza, Aun
    Velastin, Sergio A.
    MATHEMATICAL PROBLEMS IN ENGINEERING, 2019, 2019
  • [2] BoFF: A bag of fuzzy deep features for texture recognition
    Florindo, Joao B.
    Laureano, Estevao Esmi
    EXPERT SYSTEMS WITH APPLICATIONS, 2023, 219
  • [3] Bag of Features vs Deep Neural Networks for Face Recognition
    Tomodan, Eliza Rebeca
    Caleanu, Catalin Daniel
    2018 13TH INTERNATIONAL SYMPOSIUM ON ELECTRONICS AND TELECOMMUNICATIONS (ISETC), 2018, : 89 - 92
  • [4] Facial Expression Recognition of Instructor Using Deep Features and Extreme Learning Machine
    Bhatti, Yusra Khalid
    Jamil, Afshan
    Nida, Nudrat
    Yousaf, Muhammad Haroon
    Viriri, Serestina
    Velastin, Sergio A.
    COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE, 2021, 2021
  • [5] Scale coding bag of deep features for human attribute and action recognition
    Fahad Shahbaz Khan
    Joost van de Weijer
    Rao Muhammad Anwer
    Andrew D. Bagdanov
    Michael Felsberg
    Jorma Laaksonen
    Machine Vision and Applications, 2018, 29 : 55 - 71
  • [6] Scale coding bag of deep features for human attribute and action recognition
    Khan, Fahad Shahbaz
    van de Weijer, Joost
    Anwer, Rao Muhammad
    Bagdanov, Andrew D.
    Felsberg, Michael
    Laaksonen, Jorma
    MACHINE VISION AND APPLICATIONS, 2018, 29 (01) : 55 - 71
  • [7] Deep Neural Networks vs Bag of Features for Hand Gesture Recognition
    Mirsu, Radu
    Simion, Georgiana
    Caleanu, Catalin Daniel
    Ursulescu, Oana
    Calimanu, Ioana Pop
    2019 42ND INTERNATIONAL CONFERENCE ON TELECOMMUNICATIONS AND SIGNAL PROCESSING (TSP), 2019, : 369 - 372
  • [8] Personification of Bag-of-features Dataset for Real Time Activity Recognition
    Gadebe, Moses L.
    Kogeda, Okuthe P.
    2016 3RD INTERNATIONAL CONFERENCE ON SOFT COMPUTING & MACHINE INTELLIGENCE (ISCMI 2016), 2016, : 73 - 78
  • [9] Micro Actions and Deep Static Features for Activity Recognition
    Ramasinghe, Sameera
    Rajasegaran, Jathushan
    Jayasundara, Vinoj
    Ranasinghe, Kanchana
    Rodrigo, Ranga
    Pasqual, Ajith
    2017 INTERNATIONAL CONFERENCE ON DIGITAL IMAGE COMPUTING - TECHNIQUES AND APPLICATIONS (DICTA), 2017, : 90 - 97
  • [10] Learning Deep and Shallow Features for Human Activity Recognition
    Sani, Sadiq
    Massie, Stewart
    Wiratunga, Nirmalie
    Cooper, Kay
    KNOWLEDGE SCIENCE, ENGINEERING AND MANAGEMENT (KSEM 2017): 10TH INTERNATIONAL CONFERENCE, KSEM 2017, MELBOURNE, VIC, AUSTRALIA, AUGUST 19-20, 2017, PROCEEDINGS, 2017, 10412 : 469 - 482