Project description:
This project is funded by the Volkswagen Foundation and focuses on the question of how the streetscape, and potential interventions in it, influence compliance with physical distancing. The project team will develop an open-source agent-based model of pedestrian physical distancing behavior, calibrate it with user experiments in an immersive virtual environment, and run it for various city streetscapes. The model may serve as a tool for planning and ex-ante evaluation of streetscape policy interventions aimed at minimizing the spread of COVID-19 and other infectious diseases. The top half of the figure shows examples of streetscape interventions designed to prevent the spread of COVID-19 in Utrecht, The Netherlands: a) a pedestrian one-way street, b) a sign asking pedestrians to keep right, c) a pedestrian roundabout, including examples of non-compliance (image © Judith Verstegen). The bottom half depicts an overview of the study set-up, consisting of an Agent-Based Model (ABM) of pedestrian behavior and an Immersive Video Environment (IVE).
Published articles:
- Stenkamp, J., Karic, B., Scharf, P., Verstegen, J.A. and Kray, C., 2023. Using an Immersive Video Environment to Assess Pedestrians’ Compliance With COVID Distance Keeping Interventions. Interacting with Computers, iwad021, https://doi.org/10.1093/iwc/iwad021
- Schröder, S., Stenkamp, J., Brüggemann, M., Karic, B., Verstegen, J.A. and Kray, C., 2023. Towards dynamically generating immersive video scenes for studying human-environment interactions. AGILE: GIScience Series, 4, p.40, https://doi.org/10.5194/agile-giss-4-40-2023
The first article presents the results of the user study, conducted at the Institute for Geoinformatics in Münster in February 2022. The study was carried out using an Immersive Video Environment (IVE) (see https://github.com/sitcomlab/IVE). The following image shows an example frame of the footage displayed as a video on the IVE's 180-degree screens during the study.
The agent-based model mentioned in the project description is still under development; the code and an accompanying paper will be published soon.
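Since the model code has not yet been released, the following is only an illustrative sketch of what a minimal pedestrian-distancing ABM could look like: agents walk toward individual goals while steering away from anyone closer than a distancing radius. All names, the 1.5 m radius, the step size, and the steering rule are assumptions for illustration, not the project's actual model.

```python
import numpy as np

DISTANCING_RADIUS = 1.5  # assumed target physical distance in metres
STEP = 0.5               # assumed walking distance per time step in metres


def simulate(positions, goals, n_steps=50, radius=DISTANCING_RADIUS):
    """Move each pedestrian toward its goal, steering away from
    any other pedestrian closer than `radius` (illustrative rule)."""
    pos = np.asarray(positions, dtype=float)
    goals = np.asarray(goals, dtype=float)
    for _ in range(n_steps):
        # attraction: unit vector toward each agent's own goal
        to_goal = goals - pos
        norms = np.linalg.norm(to_goal, axis=1, keepdims=True)
        direction = np.divide(to_goal, norms, out=np.zeros_like(to_goal),
                              where=norms > 1e-9)
        # repulsion: unit vectors away from neighbours within the radius
        diff = pos[:, None, :] - pos[None, :, :]
        dist = np.linalg.norm(diff, axis=2)
        np.fill_diagonal(dist, np.inf)
        too_close = dist < radius
        repulse = np.where(too_close[:, :, None],
                           diff / np.maximum(dist, 1e-9)[:, :, None], 0.0)
        direction = direction + repulse.sum(axis=1)
        # take a fixed-length step along the combined direction
        norms = np.linalg.norm(direction, axis=1, keepdims=True)
        step = np.divide(direction, norms, out=np.zeros_like(direction),
                         where=norms > 1e-9) * STEP
        pos = pos + step
    return pos
```

In such a model, compliance with interventions (one-way streets, keep-right signs) could be represented by adding further steering terms, with their weights calibrated against the IVE experiments.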
The second article describes the so-called Dynamic Scene Generator, which is also still under development. It will serve as a tool to dynamically generate scenes that can be shown within the IVE.