
Discriminative, Restorative, and Adversarial Learning: Stepwise Incremental Pretraining.

This repository provides a Keras implementation of Discriminative, Restorative, and Adversarial Learning: Stepwise Incremental Pretraining.

We extend and reimplement five prominent self-supervised learning methods and integrate them into a united framework that incorporates three learning components: Discriminative, Restorative, and Adversarial Learning. We then explore stepwise training strategies that stabilize the united framework's training process and improve its overall performance on the target tasks.

The five self-supervised learning methods are Jigsaw, Rubik's Cube & Rubik's Cube++, Rotation, Deep Clustering, and TransVW.
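
To make the stepwise idea concrete, here is a minimal sketch of how the three components could be assembled incrementally in Keras. It is illustrative only: the layer sizes, proxy-task labels, and names (`build_encoder`, `build_decoder`, `NUM_PROXY_CLASSES`, and so on) are our own assumptions and do not reflect the repository's actual code.

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

NUM_PROXY_CLASSES = 100  # e.g. number of Jigsaw permutations (assumption)

def build_encoder(input_shape=(64, 64, 1)):
    """Shared encoder reused across all three learning components."""
    inputs = keras.Input(shape=input_shape)
    x = layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(inputs)
    x = layers.Conv2D(64, 3, strides=2, padding="same", activation="relu")(x)
    return keras.Model(inputs, x, name="encoder")

def build_decoder(feature_shape):
    """Restorative decoder that reconstructs the original image from encoder features."""
    inputs = keras.Input(shape=feature_shape)
    x = layers.Conv2DTranspose(64, 3, strides=2, padding="same", activation="relu")(inputs)
    x = layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu")(x)
    outputs = layers.Conv2D(1, 3, padding="same", activation="sigmoid")(x)
    return keras.Model(inputs, outputs, name="decoder")

encoder = build_encoder()

# Step 1 -- (D): pretrain the encoder alone on the discriminative proxy task
# (e.g. predicting the Jigsaw permutation or the rotation class).
d_head = keras.Sequential([
    layers.GlobalAveragePooling2D(),
    layers.Dense(NUM_PROXY_CLASSES, activation="softmax"),
])
d_model = keras.Model(encoder.input, d_head(encoder.output), name="discriminative")
d_model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# d_model.fit(proxy_images, proxy_labels, ...)

# Step 2 -- ((D)+R): keep the pretrained encoder and add the restorative decoder,
# training the pair to restore the original image from its transformed version.
decoder = build_decoder(encoder.output_shape[1:])
dr_model = keras.Model(encoder.input, decoder(encoder.output), name="restorative")
dr_model.compile(optimizer="adam", loss="mse")
# dr_model.fit(transformed_images, original_images, ...)

# Step 3 -- (((D)+R)+A): attach an adversarial discriminator that judges whether an
# image is an original or a restoration, and train it GAN-style against the decoder.
adv_discriminator = keras.Sequential([
    keras.Input(shape=(64, 64, 1)),
    layers.Conv2D(32, 3, strides=2, padding="same", activation="relu"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(1, activation="sigmoid"),
], name="adversarial_discriminator")
adv_discriminator.compile(optimizer="adam", loss="binary_crossentropy")
```

The key point of the stepwise strategy is that each step reuses the components trained in the previous step rather than optimizing all three objectives from scratch at once.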

Publication

Discriminative, Restorative, and Adversarial Learning: Stepwise Incremental Pretraining
Zuwei Guo¹, Nahid Ul Islam¹, Michael B. Gotway², Jianming Liang¹
¹ Arizona State University, ² Mayo Clinic
Published in: Domain Adaptation and Representation Transfer (DART), 2022.

Paper | Code | [Poster] | [Slides] | Presentation ([YouTube])

Major results from our work

  1. We found the optimal pretraining strategy for the united framework: stepwise pretraining is always better than training all components together.

  2. We found how to effectively utilize the pretrained components for target tasks (a minimal transfer sketch follows this list):

     a) For classification tasks, pretrained encoders perform much better than randomly initialized encoders.

     b) For segmentation tasks, the pretrained encoder also improves performance for most methods, although we observed some negative transfer due to task mismatches.

     c) For segmentation tasks, the pretrained decoder should always be transferred.

     d) Adversarial training (((D)+R)+A) strengthens the learned representation.

     e) Adversarial training (((D)+R)+A) reduces annotation cost.
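
As a concrete illustration of points a) through c), the following is a minimal transfer sketch, assuming the pretrained encoder and decoder are available as standalone Keras models saved under the placeholder names `pretrained_encoder.h5` and `pretrained_decoder.h5`; the head sizes and losses are illustrative, not the repository's actual target-task code.

```python
from tensorflow import keras
from tensorflow.keras import layers

# Hypothetical checkpoint paths; in practice, use the files from the table below.
encoder = keras.models.load_model("pretrained_encoder.h5")
decoder = keras.models.load_model("pretrained_decoder.h5")

# Classification target task: transfer only the pretrained encoder (result a).
features = layers.GlobalAveragePooling2D()(encoder.output)
cls_output = layers.Dense(3, activation="softmax")(features)  # 3 target classes, as an example
cls_model = keras.Model(encoder.input, cls_output)
cls_model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])

# Segmentation target task: transfer both the pretrained encoder and decoder (results b and c).
seg_model = keras.Model(encoder.input, decoder(encoder.output))
seg_model.compile(optimizer="adam", loss="binary_crossentropy")
```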

Pre-trained models

You can download the pretrained models used/developed in our paper as follows:

| Method | Pretraining Strategy | Model |
| --- | --- | --- |
| Jigsaw | (D+R) | download |
| Jigsaw | ((D)+R) | download |
| Jigsaw | (D+R+A) | download |
| Jigsaw | (((D)+R)+A) | download |
| Rotation | (D+R) | download |
| Rotation | ((D)+R) | download |
| Rotation | (D+R+A) | download |
| Rotation | (((D)+R)+A) | download |
| Rubik's Cube | (D+R) | download |
| Rubik's Cube | ((D)+R) | download |
| Rubik's Cube | (D+R+A) | download |
| Rubik's Cube | (((D)+R)+A) | download |
| Deep Clustering | (D+R) | download |
| Deep Clustering | ((D)+R) | download |
| Deep Clustering | (D+R+A) | download |
| Deep Clustering | (((D)+R)+A) | download |
| TransVW | (D+R) | download |
| TransVW | ((D)+R) | download |
| TransVW | (D+R+A) | download |
| TransVW | (((D)+R)+A) | download |
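
How a downloaded checkpoint plugs into your network depends on how it was saved. As a hedged sketch, assuming a TensorFlow 2.x Keras API and an HDF5 weights file whose layer names match your target-task network (the file name and layer names below are placeholders):

```python
from tensorflow import keras
from tensorflow.keras import layers

# Placeholder name for a downloaded checkpoint, e.g. Jigsaw pretrained with (((D)+R)+A).
weights_path = "jigsaw_DRA_pretrained.h5"

# A stand-in for your target-task network; replace with the model you actually fine-tune.
target_model = keras.Sequential([
    keras.Input(shape=(64, 64, 1)),
    layers.Conv2D(32, 3, strides=2, padding="same", activation="relu", name="enc_conv1"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(3, activation="softmax", name="task_head"),
])

# by_name=True matches layers by name, so pretrained layers (e.g. "enc_conv1") pick up
# the downloaded weights, while skip_mismatch=True leaves layers unique to the target
# task (such as the new head) at their fresh initialization.
target_model.load_weights(weights_path, by_name=True, skip_mismatch=True)
```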

Acknowledgement

This research has been supported in part by ASU and Mayo Clinic through a Seed Grant and an Innovation Grant, and in part by the NIH under Award Number R01HL128785. The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIH. This work has utilized the GPUs provided in part by ASU Research Computing and in part by the Extreme Science and Engineering Discovery Environment (XSEDE), funded by the National Science Foundation (NSF) under grant numbers ACI-1548562, ACI-1928147, and ACI-2005632. The content of this paper is covered by patents pending.

License

Released under the ASU GitHub Project License.
