This repository is for the submitted paper:
Alia, Ahmed, et al. "A Novel Voronoi-based Convolutional Neural Network Framework
for Pushing Person Detection in Crowd Videos". 2023
- Goal
- Motivation
- Architecture of VCNN4PuDe
- Codes of VCNN4PuDe
- Samples
- Framework Installing
- Framework Running
- Codes for CNN Architectures and Training
- Trained CNN models
- Test Sets
- Codes for Trained CNN Models Evaluation
Codes of VCNN4PuDe
Input video with its trajectory data
You can access them via this link.
Note: They were taken from the Pedestrian Dynamics Data Archive hosted by Forschungszentrum Jülich (FZJ).
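The trajectory files in that archive are commonly plain text with one row per pedestrian per frame. The sketch below shows one way to parse such a file into per-pedestrian tracks; the column order (id, frame, x, y, z) and the `parse_trajectories` helper are assumptions for illustration, not the framework's actual loader.

```python
# Hypothetical sketch: parse a plain-text trajectory file with columns
# "id frame x y z" (a common Pedestrian Dynamics Data Archive layout);
# the real files used by VCNN4PuDe may differ.
def parse_trajectories(lines):
    """Return {pedestrian_id: [(frame, x, y), ...]} from text lines."""
    trajectories = {}
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):  # skip blanks and comment headers
            continue
        parts = line.split()
        pid, frame = int(parts[0]), int(parts[1])
        x, y = float(parts[2]), float(parts[3])
        trajectories.setdefault(pid, []).append((frame, x, y))
    return trajectories

sample = """# id frame x y z
1 0 0.50 1.25 1.70
1 1 0.55 1.30 1.70
2 0 2.00 0.75 1.65
"""
tracks = parse_trajectories(sample.splitlines())
print(len(tracks))     # number of distinct pedestrians
print(len(tracks[1]))  # number of frames recorded for pedestrian 1
```

Grouping rows by pedestrian id like this makes it easy to later compute per-person quantities such as velocities or Voronoi cells frame by frame.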
Annotated video produced by the VCNN4PuDe framework
- Create a directory named VCNN4PuDe on your Google Drive.
- Open the VCNN4PuDe directory.
- Add a new notebook and run the following commands:
a. Mount Google Drive
from google.colab import drive
drive.mount('/content/drive')
b. Change into the VCNN4PuDe directory
%cd /content/drive/My Drive/VCNN4PuDe/
c. Clone VCNN4PuDe Framework
!git clone https://github.com/abualia4/VCNN4PuDe.git
d. Install keras-preprocessing module
!pip install keras-preprocessing
Open the run notebook and follow the instructions in it; the annotated Video.mp4 will be stored in the annotated folder.
Note: If additional libraries are required to run the framework, install them with the following command
!pip install <module/library name>
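Rather than installing libraries one by one as import errors appear, you can check for missing modules programmatically and install them in one pass. This is a generic sketch; the module names passed in are examples, not the framework's confirmed dependency list.

```python
# Sketch: detect modules that cannot be imported and install them via pip.
# The modules listed in the example call are placeholders, not the actual
# VCNN4PuDe dependency list.
import importlib.util
import subprocess
import sys

def ensure_installed(modules):
    """Pip-install every module in `modules` that is not importable.

    Returns the list of modules that had to be installed.
    """
    missing = [m for m in modules if importlib.util.find_spec(m) is None]
    for name in missing:
        subprocess.check_call([sys.executable, "-m", "pip", "install", name])
    return missing

# Standard-library modules are always present, so nothing is installed here.
print(ensure_installed(["os", "json"]))  # prints []
```

In a Colab notebook this replaces repeated `!pip install ...` cells with a single call at the top of the notebook.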
- Thanks to the authors of voronoi_finite_polygons_2d function.
- Thanks to the author of Create_random_polygon class.
Coming soon.