Comparing Evaluation Protocols on the KTH Dataset

Cited by: 0
Authors
Gao, Zan [1 ]
Chen, Ming-yu [2 ]
Hauptmann, Alexander G. [2 ]
Cai, Anni [1 ]
Affiliations
[1] Beijing Univ Posts & Telecommun, Sch Informat & Commun Engn, Beijing 100876, Peoples R China
[2] Carnegie Mellon Univ, Sch Comp Sci, Pittsburgh, PA 15213 USA
Source
HUMAN BEHAVIOR UNDERSTANDING | 2010 / Vol. 6219
Funding
US National Science Foundation
Keywords
Action Recognition; training/test data sets; partitioning; experimental methods; RECOGNITION; DENSE
DOI
None available
CLC Number
TP18 [Artificial Intelligence Theory]
Subject Classification
081104; 0812; 0835; 1405
Abstract
Human action recognition has become a hot research topic, and many algorithms have been proposed. Most researchers evaluate their performance on the KTH dataset, but there is no unified standard for how to evaluate algorithms on this dataset. Different researchers have employed different test setups, so comparisons are not accurate, fair, or complete. To quantify how much difference arises from different experimental setups, we take our own spatiotemporal MoSIFT feature as an example and assess its performance on the KTH dataset using different test scenarios and different partitionings of the data. In all experiments, a support vector machine (SVM) with a chi-square kernel is adopted. First, we evaluate performance changes resulting from differing codebook vocabulary sizes and then settle on a suitable vocabulary size. Then, we train models on different training-set partitions and test them on the corresponding held-out test sets. Experiments show that the best performance of MoSIFT can reach 96.33% on the KTH dataset. When different n-fold cross-validation methods are used, the results can differ by up to 10.67%, and when different dataset segmentations are used (such as KTH1 and KTH2), the results can differ by up to 5.8% absolute. In addition, performance changes dramatically when different scenarios are used in the training and test sets: when training on KTH1 S1+S2+S3+S4 and testing on the KTH1 S1 and S3 scenarios, performance reaches 97.33% and 89.33%, respectively. This paper shows how different test configurations can skew results, even on a standard dataset. Our recommendation is to use simple leave-one-out as the most easily replicable, clear-cut partitioning.
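
To make the evaluation protocol named in the abstract concrete, the minimal sketch below shows a chi-square-kernel SVM evaluated with leave-one-subject-out partitioning, the usual reading of "leave-one-out" on KTH, whose clips are grouped by the 25 performing subjects. This is not the authors' code: the arrays `features` (per-clip bag-of-words histograms over the MoSIFT codebook), `labels` (action classes), and `subjects` (performer IDs) are hypothetical placeholders, and scikit-learn's chi2_kernel and LeaveOneGroupOut stand in for whatever implementation the paper actually used.

    # Minimal sketch (not the authors' code): chi-square-kernel SVM with
    # leave-one-subject-out evaluation, as the abstract recommends for KTH.
    # `features`, `labels`, and `subjects` are hypothetical placeholder
    # NumPy arrays: codebook histograms, action classes, performer IDs.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.metrics.pairwise import chi2_kernel
    from sklearn.model_selection import LeaveOneGroupOut

    def leave_one_subject_out_accuracy(features, labels, subjects, gamma=1.0):
        """Mean accuracy over folds that each hold out one subject."""
        accuracies = []
        splitter = LeaveOneGroupOut()
        for train_idx, test_idx in splitter.split(features, labels,
                                                  groups=subjects):
            X_tr, X_te = features[train_idx], features[test_idx]
            # Precompute exponential chi-square kernel matrices; inputs
            # must be non-negative, which codebook histograms are.
            K_tr = chi2_kernel(X_tr, X_tr, gamma=gamma)
            K_te = chi2_kernel(X_te, X_tr, gamma=gamma)
            clf = SVC(kernel="precomputed").fit(K_tr, labels[train_idx])
            accuracies.append(np.mean(clf.predict(K_te) == labels[test_idx]))
        return float(np.mean(accuracies))

Swapping LeaveOneGroupOut for an n-fold splitter, or for a fixed KTH1/KTH2-style segmentation, is exactly the kind of protocol change whose effect on the reported accuracy the paper measures.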
Pages: 88+
Page count: 3