An overview of multi-modal techniques for the characterization of sport programmes

Cited by: 0
Authors
Adami, N [1 ]
Leonardi, R [1 ]
Migliorati, P [1 ]
Affiliation
[1] Univ Brescia, DEA, Brescia, Italy
Keywords
sports video content characterization; semantic indexing; multi-modal analysis; audio-visual features
DOI
10.1117/12.510136
Chinese Library Classification (CLC)
O43 [Optics]
Subject Classification Codes
070207; 0803
Abstract
The problem of content characterization of sports videos is of great interest because sports video appeals to large audiences and its efficient distribution over various networks should contribute to the widespread use of multimedia services. In this paper we analyze several techniques proposed in the literature for the content characterization of sports videos. We focus this analysis on the type of signal (audio, video, text captions, ...) from which the low-level features are extracted. First we consider the techniques based on visual information, then the methods based on audio information, and finally the algorithms that exploit audio-visual cues in a multi-modal fashion. This analysis shows that each type of signal carries its own peculiar information, and that a multi-modal approach can fully exploit the multimedia information associated with a sports video. Moreover, we observe that the characterization is performed either by considering what happens in a specific time segment, thus observing the features in a "static" way, or by trying to capture their "dynamic" evolution over time. The effectiveness of each approach depends mainly on the kind of sport it addresses and on the type of highlights being targeted.
Pages: 1296-1306
Number of pages: 11
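To make the abstract's multi-modal idea concrete, the sketch below is a minimal toy example, not taken from the paper: it fuses a per-segment audio feature (e.g. crowd-noise energy) with a visual feature (e.g. motion activity), which corresponds to the "static" per-segment view, and then smooths the fused score over time to approximate the "dynamic" view. All feature values, weights, function names, and thresholds are invented placeholders operating on synthetic data.

```python
# Illustrative sketch only: a toy multi-modal highlight scorer in the spirit of the
# surveyed approaches. Features, weights, and thresholds are assumptions for
# illustration, not the authors' method.
import numpy as np

def static_scores(audio_energy, visual_motion, w_audio=0.5, w_visual=0.5):
    """'Static' characterization: score each time segment independently by
    fusing its normalized audio and visual low-level features."""
    a = (audio_energy - audio_energy.min()) / (np.ptp(audio_energy) + 1e-9)
    v = (visual_motion - visual_motion.min()) / (np.ptp(visual_motion) + 1e-9)
    return w_audio * a + w_visual * v

def dynamic_scores(scores, window=3):
    """'Dynamic' characterization: follow the temporal evolution of the fused
    feature, here with a simple moving average over neighbouring segments."""
    kernel = np.ones(window) / window
    return np.convolve(scores, kernel, mode="same")

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Placeholder low-level features, one value per 1-second segment:
    # e.g. short-time audio energy (crowd noise) and frame-difference motion.
    audio_energy = rng.random(60)
    visual_motion = rng.random(60)
    audio_energy[30:34] += 2.0   # simulated crowd-cheer burst
    visual_motion[29:35] += 1.5  # simulated fast camera motion

    fused = static_scores(audio_energy, visual_motion)
    smoothed = dynamic_scores(fused, window=5)
    highlights = np.flatnonzero(smoothed > smoothed.mean() + 2 * smoothed.std())
    print("candidate highlight segments (s):", highlights)
```

In an actual system the placeholder arrays would be replaced by real low-level descriptors extracted from the broadcast stream (e.g. short-time audio energy, dominant colour, or camera-motion estimates), and the fixed weights and threshold would typically be learned per sport and per highlight type.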