Combining Temporal and Multi-Modal Approaches to Better Measure Accessibility to Banking Services

Cited by: 2
Authors
Langford, Mitchel [1 ]
Price, Andrew [1 ]
Higgs, Gary [1 ]
Affiliations
[1] Univ South Wales, Fac Comp Engn & Sci, Wales Inst Social & Econ Res & Data WISERD, GIS Res Ctr, Pontypridd CF37 1DL, M Glam, Wales
Funding
UK Economic and Social Research Council (ESRC);
Keywords
reconfiguration of banking services; multi-modal accessibility; floating catchment area models; impacts of closures; spatial patterns of access; MEASURING SPATIAL ACCESSIBILITY; HEALTH-CARE SERVICES; SPACE-TIME; PUBLIC-TRANSIT; FINANCIAL EXCLUSION; WALKING DISTANCE; OPENING HOURS; SPATIOTEMPORAL ACCESSIBILITY; INDIVIDUAL ACCESSIBILITY; BRANCH CLOSURES;
DOI
10.3390/ijgi11060350
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
The UK, as elsewhere, has seen an accelerating trend of bank branch closures and reduced opening hours since the early 2000s. The reasons given by the banks are well rehearsed, but the impact assessments they provide to justify such programs and signpost alternatives have been widely criticized as being inadequate. This is particularly so for vulnerable customers dependent on financial services who may face difficulties in accessing remaining branches. There is a need whilst analyzing spatial patterns of access to also include temporal availability in relation to transport opportunities. Drawing on a case study of potential multi-modal accessibility to banks in Wales, we demonstrate how open-source tools can be used to examine patterns of access whilst considering the business operating hours of branches in relation to public transport schedules. The inclusion of public and private travel modes provides insights into access that are often overlooked by a consideration of service-side measures alone. Furthermore, findings from the types of tools developed in this study are illustrative of the additional information that could be included in holistic impact assessments, allowing the consequences of decisions being taken to close or reduce the operating hours of bank branches to be more clearly communicated to customers.
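To illustrate the floating catchment area approach named in the keywords, the following is a minimal sketch of a generic two-step floating catchment area (2SFCA) calculation that additionally discards branches closed at the analysis time. All data, names, and thresholds are hypothetical assumptions for illustration only; this is not the authors' implementation nor the open-source toolchain used in the study.

```python
# Minimal, illustrative 2SFCA sketch with an opening-hours filter.
# Hypothetical data and names; not the paper's actual method or data.

# Branches: supply capacity, travel time (minutes) from each demand zone,
# and whether the branch is open at the chosen analysis time.
branches = {
    "branch_A": {"supply": 3, "open": True,
                 "travel_min": {"zone_1": 10, "zone_2": 25, "zone_3": 40}},
    "branch_B": {"supply": 2, "open": False,
                 "travel_min": {"zone_1": 35, "zone_2": 15, "zone_3": 20}},
}

# Demand zones: population assumed to need banking services.
population = {"zone_1": 5000, "zone_2": 8000, "zone_3": 3000}

THRESHOLD_MIN = 30  # catchment size: maximum acceptable travel time


def two_sfca(branches, population, threshold):
    # Step 1: supply-to-demand ratio for every open branch, counting only
    # the population that can reach it within the travel-time threshold.
    ratios = {}
    for b, info in branches.items():
        if not info["open"]:
            continue  # closed branches contribute no supply at this time
        demand = sum(population[z]
                     for z, t in info["travel_min"].items() if t <= threshold)
        ratios[b] = info["supply"] / demand if demand > 0 else 0.0

    # Step 2: accessibility of each zone = sum of ratios of reachable open branches.
    return {z: sum(r for b, r in ratios.items()
                   if branches[b]["travel_min"][z] <= threshold)
            for z in population}


print(two_sfca(branches, population, THRESHOLD_MIN))
```

In a temporal, multi-modal setting of the kind the abstract describes, the `open` flag and the travel-time matrix would be recomputed per time slice and per mode (e.g. walking, driving, scheduled public transport), and the results compared across times of day.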
Pages: 22
Related Papers
50 items in total
  • [31] Combining Multi-Modal Statistics for Welfare Prediction Using Deep Learning
    Sharma, Pulkit
    Manandhar, Achut
    Thomson, Patrick
    Katuva, Jacob
    Hope, Robert
    Clifton, David A.
    SUSTAINABILITY, 2019, 11 (22)
  • [32] Development of Multi-Modal Surface Research Equipment by Combining TREXS with IRRAS
    Abe, Hitoshi
    Niwa, Yasuhiro
    Kimura, Masao
    13TH INTERNATIONAL CONFERENCE ON SYNCHROTRON RADIATION INSTRUMENTATION (SRI2018), 2019, 2054
  • [33] On Enhancing Usability of Hindi ATM Banking with Multi-Modal UI and Explainable UX
    Dept. of CSE, Graphic Era University, Dehradun, India
World Conf. Commun. Comput., WCONF
  • [34] What Makes Multi-modal Learning Better than Single (Provably)
    Huang, Yu
    Du, Chenzhuang
    Xue, Zihui
    Chen, Xuanyao
    Zhao, Hang
    Huang, Longbo
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021,
  • [35] How multi-modal approaches support engineering and computing education research
    Villanueva Alarcón I.
    Anwar S.
    Atiq Z.
    Australasian Journal of Engineering Education, 2023, 28 (02) : 124 - 139
  • [36] A cloud-based middleware for multi-modal interaction services and applications
    Avenoglu, Bilgin
    Koeman, Vincent J.
    Hindriks, Koen V.
    JOURNAL OF AMBIENT INTELLIGENCE AND SMART ENVIRONMENTS, 2022, 14 (06) : 455 - 481
  • [37] Choice and equity: A critical analysis of multi-modal public transport services
    Chan, Ho-Yin
    Xu, Yingying
    Chen, Anthony
    Zhou, Jiangping
    TRANSPORT POLICY, 2023, 140 : 114 - 127
  • [38] Modeling competitive multi-modal transit services: a nested logit approach
    Lo, HK
    Yip, CW
    Wan, QK
    TRANSPORTATION RESEARCH PART C-EMERGING TECHNOLOGIES, 2004, 12 (3-4) : 251 - 272
  • [39] TMac: Temporal Multi-Modal Graph Learning for Acoustic Event Classification
    Liu, Meng
    Liang, Ke
    Hu, Dayu
    Yu, Hao
    Liu, Yue
    Meng, Lingyuan
    Tu, Wenxuan
    Zhou, Sihang
    Liu, Xinwang
    PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2023, 2023, : 3365 - 3374
  • [40] Multi-Modal Temporal Convolutional Network for Anticipating Actions in Egocentric Videos
    Zatsarynna, Olga
    Abu Farha, Yazan
    Gall, Juergen
    2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS, CVPRW 2021, 2021, : 2249 - 2258