Evaluating the equity implications of ridehailing through a multi-modal accessibility framework

Cited by: 11
Authors
Abdelwahab, Bilal [1 ]
Palm, Matthew [2 ]
Shalaby, Amer [1 ]
Farber, Steven [3 ]
Affiliations
[1] Univ Toronto, Civil & Mineral Engn, 35 St George St, Toronto, ON M5S 1A4, Canada
[2] Worcester State Univ, Dept Urban Studies, 486 Chandler St, Worcester, MA 01602 USA
[3] Univ Toronto Scarborough, Human Geog, 1265 Mil Trail, Toronto, ON M1C 1A4, Canada
Keywords
Accessibility; Ridehailing; Equity analysis; First/last mile; Transportation equity; TRANSIT SERVICE; TRAVEL; UBER
DOI
10.1016/j.jtrangeo.2021.103147
Chinese Library Classification
F [Economics];
Discipline Code
02;
Abstract
In rapidly growing metropolitan regions, it is crucial that transportation-related policies and infrastructure are designed to ensure that everyone can participate equitably in economic, social, and civil opportunities. Ridehailing services are touted to improve mobility options, but there is scant research that incorporates this mode within an accessibility framework. This paper employs a generalized cost measure in a multi-modal accessibility framework, namely Access Profile Analysis, to assess the role of ridehailing in providing job access to historically under-resourced parts of Toronto, Canada, referred to by the city as Neighborhood Improvement Areas (NIAs). Ridehailing is analyzed both as a mode of commute and as a feeder to the transit network (a first-mile solution). The results indicate that there are two main determinants of the extent to which ridehailing provides additional accessibility over transit: the transit level of service at the origin zone and the zone's proximity to employment opportunities. The ridehailing mode is shown to increase accessibility especially to closer destinations (jobs), with the highest improvement seen in the city's inner suburbs. On the other hand, integrating ridehailing with public transit does little to improve access to jobs. Compared to the rest of the city, NIAs experience a higher accessibility improvement from ridehailing alone, but not from its integration with transit. Nonetheless, job accessibility remains lower in NIAs than in other areas, even after the introduction of ridehailing.
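The abstract's core idea can be illustrated with a minimal sketch: a cumulative-opportunities access profile counts the jobs reachable from an origin zone within each generalized-cost threshold, and the ridehailing gain is the difference between the combined-network profile and the transit-only profile. All zone costs, job counts, and thresholds below are hypothetical placeholders, not the paper's data or its exact formulation of Access Profile Analysis.

```python
# Illustrative access-profile computation under a generalized travel cost
# (e.g., fare plus value-of-time-weighted travel time, in dollars).
# Values are invented for illustration only.

def access_profile(costs, jobs, thresholds):
    """For each cost threshold t, sum the jobs in destination zones
    reachable from the origin at generalized cost <= t."""
    return [sum(j for c, j in zip(costs, jobs) if c <= t) for t in thresholds]

# Hypothetical costs from one origin to four destination zones, by mode.
transit_cost = [6.0, 9.0, 14.0, 20.0]
ridehail_cost = [10.0, 7.0, 15.0, 18.0]
# A traveler with both modes available takes the cheaper one per destination.
combined_cost = [min(t, r) for t, r in zip(transit_cost, ridehail_cost)]

jobs = [5000, 12000, 8000, 20000]   # jobs in each destination zone
thresholds = [8, 12, 16, 24]        # generalized-cost cut-offs

transit_profile = access_profile(transit_cost, jobs, thresholds)
combined_profile = access_profile(combined_cost, jobs, thresholds)
# Accessibility gained by adding ridehailing, at each cost threshold.
gain = [c - t for c, t in zip(combined_profile, transit_profile)]
```

In this toy example the gain is concentrated at the lowest threshold, echoing the paper's finding that ridehailing's accessibility benefit accrues mainly for closer destinations.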
Pages: 12