If you have any questions, feel free to open an issue or send an email to [email protected]. We are always happy to receive feedback!
The code for VDI is developed based on CIDA. CIDA also provides many baseline implementations (e.g., DANN, MDD), which we used for performance comparison in our paper. Please refer to its code for details.
To eliminate the influence of imbalanced labels, we ensure that each domain shares a similar label distribution by picking a subset of CompCars. To accelerate training, we extract a 4096-dim feature vector from each input image with ResNet18 and then apply VDI to these feature vectors. The feature vectors are included in the "data" folder.
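For reference, below is a minimal sketch of this style of per-image feature extraction, assuming torch/torchvision are installed. Note that a stock ResNet18's penultimate layer yields 512-dim features, so the exact layer/setup behind our 4096-dim vectors may differ; the image path is a placeholder.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Pretrained ResNet18 with the classification head removed,
# so the forward pass returns penultimate-layer features.
resnet = models.resnet18(pretrained=True)
resnet.fc = torch.nn.Identity()
resnet.eval()

# Standard ImageNet preprocessing.
preprocess = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

with torch.no_grad():
    img = preprocess(Image.open("car.jpg").convert("RGB")).unsqueeze(0)
    feature = resnet(img)  # shape: (1, 512) for a stock ResNet18
```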
python main.py -c config_CompCars (or)
python main.py --config config_CompCars
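For reference, a hypothetical sketch of how the -c/--config flag might be handled (the actual main.py may differ; the module layout here is an assumption):

```python
# Hypothetical sketch of config loading; the actual main.py may differ.
import argparse
import importlib

parser = argparse.ArgumentParser()
parser.add_argument("-c", "--config", default="config_CompCars",
                    help="name of a config module in the 'config' folder")
args = parser.parse_args()

# e.g. "config_CompCars" -> imports config/config_CompCars.py
config = importlib.import_module(f"config.{args.config}")
```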
- Download the weights from here and unzip them under the "pretrained_weight" folder.
- Run the following command:
python inference.py -c config_CompCars_inference (or)
python inference.py --config config_CompCars_inference
Both training and inference will save results in the "result_save" folder; you can use these results for visualization.
- Train or run inference with VDI on the CompCars dataset.
- Check your result in the "result_save" folder, then change the first 2 lines of "visualize_compcars_indices.py" (see the sketch after these steps for how these values locate your results):
dates = "2023-03-11" # fill in the date of your experiment
time = ["21","59","11"] # fill in the time of your experiment. format: hour, minute, second
- Run the following command:
python visualize_compcars_indices.py
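Hypothetically, the two settings above select the experiment folder under "result_save"; the reconstruction below is an assumption based on the "result_save/dates-time/visualization" layout mentioned next, and the actual naming in the script may differ.

```python
import os

# Hypothetical reconstruction of the result folder from the two settings
# in visualize_compcars_indices.py; the actual naming scheme may differ.
dates = "2023-03-11"
time = ["21", "59", "11"]
result_dir = os.path.join("result_save", dates + "-" + "-".join(time))
print(result_dir)  # result_save/2023-03-11-21-59-11
```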
Your plot will be saved in the folder containing the results of your CompCars experiment ("result_save/dates-time/visualization") and will look similar to the following:
We use visdom for visualization. We assume the code runs on a remote GPU machine.
Find the config you need in the "config" folder and set "opt.use_visdom" to "True". Then start the visdom server on the GPU machine:
python -m visdom.server -p 2000
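For reference, a minimal sketch of logging a training curve to this server from Python, assuming the visdom client package; the window and metric names below are illustrative, not the ones used in our code.

```python
import numpy as np
import visdom

# Connect to the visdom server started above (port 2000).
viz = visdom.Visdom(port=2000)

# Illustrative loss curve: create a window, then append points each step.
viz.line(X=np.array([0]), Y=np.array([1.0]),
         win="loss", opts=dict(title="training loss"))
for step, loss in [(1, 0.8), (2, 0.65)]:
    viz.line(X=np.array([step]), Y=np.array([loss]),
             win="loss", update="append")
```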
Now connect your local machine to the GPU server and forward port 2000 to your local machine. You can then go to http://localhost:2000 (on your local machine) to watch the visualization during training.
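For example, if you connect over SSH, the standard way to forward the port looks like this (the user and host names are placeholders):
ssh -L 2000:localhost:2000 your_username@your_gpu_server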