https://github.com/khalooei/ALOCC-CVPR2018 is along the same lines as your paper. In that paper, the authors report state-of-the-art results on the Caltech-256 dataset, but I failed to reproduce them. I tried various settings, architectures, and hyperparameters, yet accuracy never goes beyond 56% in authentic/fake classification. The authors have not yet pushed their code; they seem to be busy and have not divulged any specific details either. I wanted to ask whether you have tried your algorithm on Caltech-256. My concern is: without any pretrained network, how can a GAN trained from scratch reach 90% accuracy? Did you ever try that?
The results in that paper correspond to a protocol in which the data is not split into separate training, validation, and testing sets: the same inliers used to train the network are also used for testing. In all of the experiments I conducted, I did not use the same inliers for both training and testing (this is stated in the experimental section).
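A minimal sketch of that disjoint-split protocol, assuming a toy distance-based novelty score standing in for the trained network (all function and variable names here are illustrative, not from either paper):

```python
# Sketch of the evaluation protocol described above: the inliers used for
# training never appear in the test set. The "model" is a placeholder
# distance-to-mean score; in practice it would be a learned network
# (e.g. an autoencoder's reconstruction error).
import numpy as np

rng = np.random.default_rng(0)

def split_inliers(inliers, train_fraction=0.8):
    """Partition inlier samples so train and test sets share no examples."""
    idx = rng.permutation(len(inliers))
    cut = int(train_fraction * len(inliers))
    return inliers[idx[:cut]], inliers[idx[cut:]]

# Toy data: flattened 8-dimensional "images".
inliers = rng.normal(0.0, 1.0, size=(100, 8))
outliers = rng.normal(4.0, 1.0, size=(20, 8))

train_in, test_in = split_inliers(inliers)

# Placeholder novelty score fitted on the *training* inliers only.
center = train_in.mean(axis=0)
score = lambda x: np.linalg.norm(x - center, axis=1)
threshold = score(train_in).mean() + 2 * score(train_in).std()

# Evaluation uses only held-out inliers plus outliers, never training inliers.
test_x = np.vstack([test_in, outliers])
test_y = np.concatenate([np.zeros(len(test_in)), np.ones(len(outliers))])
accuracy = ((score(test_x) > threshold) == test_y).mean()
```

Under the other protocol, `test_x` would instead include `train_in` itself, which inflates the reported numbers because the model has already seen those exact samples.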
I tried to train an autoencoder on Caltech-256, but it was obvious it was not going to work well: there are too few examples per category. I did try COIL-100, which has even fewer examples per category, and there it more or less worked out because all images within a category are very similar. In Caltech-256 the images are very diverse and not tightly cropped, which makes it very hard to train the autoencoder well.