This repo contains the code and data used for our work on gaze-based annotation of histopathology images. The table of contents is given below:
- Requirements
- Setting up Python Environment
- Dataset
  - Gaze
  - Hand
  - Masks
  - Raw Gaze Data
- Models
- Training
- Evaluation
- Reference
## Requirements

- TensorFlow 1.15
- Python 3.7
- Anaconda
- QuPath (for viewing Whole Slide Image (WSI) files)
- MATLAB (for masks-to-bounding-box conversion)
- Google Colab or Jupyter Notebook
## Setting up Python Environment

- Create a conda environment and install TensorFlow:

```
conda create -n tensorflow1.15 python=3.7
conda activate tensorflow1.15
conda install cudatoolkit=10.0
conda install cudnn=7.6.5
pip install tensorflow-gpu==1.15
```

- Install the dependencies:

```
pip install numpy==1.19.5 lxml pillow matplotlib jupyter contextlib2 cython tf_slim pycocotools
```

(For Windows, replace `pycocotools` with `pycocotools-windows`.)

- Install the TensorFlow Object Detection API. A quick sanity check of the resulting environment is sketched below.
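The exact install layout can vary between setups, so the snippet below is only a minimal sanity check, assuming the Object Detection API was installed into the active `tensorflow1.15` environment; it verifies the TensorFlow version, GPU visibility, and that the `object_detection` package is importable.

```python
# sanity_check_env.py -- minimal check of the TF 1.15 + Object Detection API setup
import tensorflow as tf

print("TensorFlow version:", tf.__version__)         # expected: 1.15.x
print("GPU available:", tf.test.is_gpu_available())  # False means CUDA/cuDNN are not visible

try:
    from object_detection.utils import label_map_util  # installed with the Object Detection API
    print("Object Detection API import: OK")
except ImportError as err:
    print("Object Detection API not found on PYTHONPATH:", err)
```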
## Dataset

Follow the steps in `\Generate_Annotations` to create your own dataset from gaze and hand annotations, or simply download our dataset from the links below.
- Gaze: Images used for training and testing of gaze-based object detectors can be downloaded from here. The labels corresponding to each file in the training and test datasets can be found in `Gaze_Data/labels/train` and `Gaze_Data/labels/test`, respectively.
- Hand: Images used for training and testing of object detectors on hand-labelled data can be downloaded from here. The labels corresponding to each file in the training and test datasets can be found in `Hand_Data/labels/train` and `Hand_Data/labels/test`, respectively. NOTE: Hand-generated labels were used for performance evaluation of both the gaze-based and the hand-labelled object detectors; therefore, the contents of the `Gaze_Data/labels/test` and `Hand_Data/labels/test` folders are identical.
- Masks: The binary masks used for generating labels for the hand and gaze-based object detectors can be downloaded from here.
- Raw Gaze Data: The raw gaze data is available here. This data needs to be converted into binary masks before it can be used to train an object detector.
  - Each eye-gaze data collection session (lasting about 5-10 minutes on average) is contained in a folder named according to its date and time.
  - The "all_sessions_merged" folder contains all levels from every gaze data collection session merged together.
  - The "all_annotations_on_max_level" folder contains a single CSV file in which the gaze data points from all levels are scaled to the highest resolution of the .svs image (a sketch of this scaling is given after this list).
  - The ".svs" files are the corresponding WSI files.
## Models

Our pre-trained Faster R-CNN models are available here.

- Raw Faster R-CNN Inception V2 weights
- The YOLOv3 and YOLOv5 models were trained using PyTorch and can be downloaded from here.
## Training

- Download the raw Faster R-CNN Inception V2 weights from the above link into the `/training` directory.
- From `utils`, use the appropriate `generate_tfrecord` script to generate `train.record` and `test.record` from your train and test sets (a quick sanity check of the generated records is sketched after this list).
- Paste `train.record` and `test.record` into the `/training` directory.
- Set the appropriate paths in `training_pipeline.config`.
- Initiate a training job by opening a new terminal, `cd` into `models/research/object_detection`, and run the following command:

```
python model_main.py --logtostderr --model_dir=PATH_TO_BE_CONFIGURED\models\research\object_detection\training\faster_rcnn_inception_v2_coco_2018_01_28 --pipeline_config_path=PATH_TO_BE_CONFIGURED\models\research\object_detection\training\training_pipeline.config
```

- After training has completed, export the inference graph using `export_inference_graph.py` in the `models\research\object_detection` directory.
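Before launching a long training job, it can help to confirm that the record files were written correctly. The snippet below is a minimal sketch (the record paths under `/training` are an assumption based on the steps above) that counts the examples in each file using the TF 1.x record iterator.

```python
# check_records.py -- count examples in the generated TFRecord files (TF 1.x API)
import tensorflow as tf

for split in ("train", "test"):
    path = "training/{}.record".format(split)  # assumed location, see the steps above
    count = sum(1 for _ in tf.python_io.tf_record_iterator(path))
    print("{}: {} examples".format(path, count))
```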
## Evaluation

The Faster R-CNN model was implemented in TensorFlow. `Evaluation.ipynb` can be used to test the Faster R-CNN model on the test data (a minimal sketch of the underlying frozen-graph inference is given after the steps below).

- Download and extract our gaze-based and hand-annotated trained models. If you prefer to train your own model, you can do so by following the instructions provided in the Training section above.
- Paste `Evaluation.ipynb` and `/images` into `models/research/object_detection/`.
- Open a terminal and `cd` into `models/research/object_detection/`.
- Run the `Evaluation.ipynb` notebook via Jupyter Notebook.
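For reference, the sketch below shows the standard TF 1.x pattern of loading an exported frozen inference graph and running it on a single image. The file names (`frozen_inference_graph.pb`, `test.jpg`) and tensor names are the usual TF Object Detection API defaults and are assumptions here, not a description of `Evaluation.ipynb` itself.

```python
# frozen_graph_inference.py -- minimal TF 1.x sketch of running a frozen detection graph
import numpy as np
import tensorflow as tf
from PIL import Image

GRAPH_PATH = "training/frozen_inference_graph.pb"  # assumed export location

graph = tf.Graph()
with graph.as_default():
    graph_def = tf.GraphDef()
    with tf.gfile.GFile(GRAPH_PATH, "rb") as f:
        graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name="")

# Load one test image and add a batch dimension: shape [1, H, W, 3]
image = np.expand_dims(np.array(Image.open("test.jpg")), axis=0)

with tf.Session(graph=graph) as sess:
    boxes, scores, classes = sess.run(
        ["detection_boxes:0", "detection_scores:0", "detection_classes:0"],
        feed_dict={"image_tensor:0": image},
    )
    print("Top score:", scores[0][0], "box:", boxes[0][0])
```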
The YOLO models were implemented in PyTorch and can be found in this repo.

Results for all detectors can be found here. The corresponding metrics are generated by this repo using the detection results in the `/Text_Format` directory.
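Detection metrics of this kind ultimately reduce to matching predicted and ground-truth boxes by intersection-over-union (IoU). The helper below is a generic sketch, not the metric code of the linked repo, assuming boxes in `[x_min, y_min, x_max, y_max]` format.

```python
# iou.py -- generic intersection-over-union between two boxes [x_min, y_min, x_max, y_max]
def iou(box_a, box_b):
    # Intersection rectangle
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)

    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Example: two partially overlapping boxes
print(iou([0, 0, 10, 10], [5, 5, 15, 15]))  # 25 / 175 ~= 0.143
```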
## Reference

This repo was used to generate the results for the following paper on gaze-based labelling of pathology data:

Komal Mariam, Osama Mohammed Afzal, Wajahat Hussain, Muhammad Umar Javed, Amber Kiyani, Nasir Rajpoot, Syed Ali Khurram and Hassan Aqeel Khan, "On Smart Gaze based Annotation of Histopathology Images for Training of Deep Convolutional Neural Networks", submitted to IEEE Journal of Biomedical and Health Informatics.

BibTeX Reference: Available after acceptance.