Exploring emergent soundscape profiles from crowdsourced audio data

Cited by: 1
Authors
Kaarivuo, Aura [1 ,2 ]
Oppenlander, Jonas [3 ]
Karkkainen, Tommi [1 ]
Mikkonen, Tommi [1 ]
Affiliations
[1] Univ Jyvaskyla, Fac Informat Technol, POB 35 Agora, FIN-40014 Jyvaskyla, Finland
[2] Metropol Univ Appl Sci, Sch Media Design & Conservat, POB 4072, FIN-00079 Metropolia, Finland
[3] Elisa, Ratavartijankatu 5, FIN-00520 Helsinki, Finland
Keywords
Soundscapes; Mobile crowdsensing; Machine learning; Emotional information; Perception; Urban planning; Urban soundscapes; Selection; Quality
DOI
10.1016/j.compenvurbsys.2024.102112
Chinese Library Classification (CLC)
TP39 [Applications of computer technology]
Discipline codes
081203; 0835
Abstract
A key component of designing sustainable, enriching, and inclusive cities is public participation. The soundscape is an integral part of the immersive environment of a city and should be treated as a resource that creates the acoustic image of an urban environment. For urban planning professionals, this requires an understanding of the constituents of citizens' emergent soundscape experience. The goal of this study is to present a systematic method for analyzing crowdsensed soundscape data with unsupervised machine learning. The study applies a crowdsensed soundscape experience data collection method with a low threshold for participation and analyzes the resulting data with unsupervised machine learning to give insights into soundscape perception and quality. For this purpose, qualitative and raw audio data were collected from 111 participants in Helsinki, Finland, and then clustered and further analyzed. We conclude that machine learning analysis combined with accessible mobile crowdsensing methods enables results that can be applied to track hidden experiential phenomena in the urban soundscape.
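The abstract describes the pipeline only at a high level: crowdsensed audio recordings are summarized as features and grouped with unsupervised clustering. The sketch below is purely illustrative and is not the authors' implementation; it assumes MFCC, spectral-centroid, and RMS features via librosa, k-means clustering via scikit-learn, and a hypothetical recordings/ folder of clips. The feature set, number of clusters, and clustering algorithm used in the paper may well differ.

```python
# Illustrative sketch only: the record does not specify the features or the
# clustering algorithm; MFCC + k-means are assumptions for demonstration.
import glob

import librosa
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler


def extract_features(path, sr=22050):
    """Load one crowdsensed recording and summarize it as a fixed-length vector."""
    y, sr = librosa.load(path, sr=sr, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)        # timbral shape
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr)  # spectral "brightness"
    rms = librosa.feature.rms(y=y)                            # loudness proxy
    frames = np.concatenate([mfcc, centroid, rms], axis=0)
    # Mean and standard deviation over time yield one vector per recording.
    return np.concatenate([frames.mean(axis=1), frames.std(axis=1)])


paths = sorted(glob.glob("recordings/*.wav"))  # hypothetical folder of clips
X = StandardScaler().fit_transform(np.array([extract_features(p) for p in paths]))

# Unsupervised grouping of recordings into candidate soundscape profiles.
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)
for path, label in zip(paths, labels):
    print(label, path)
```

The resulting cluster labels would then be examined alongside the participants' qualitative responses to interpret each group as a soundscape profile.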
Pages: 12