Train and Test #8

Open
asher-bit opened this issue Sep 26, 2023 · 10 comments
@asher-bit

Hello, I tried to train and test your work following the README.md.
But I have a question: why do training and testing use the same dataset config (chair.txt)?
And why is there no dataset_path in the compression step?

@asher-bit
Author

[Three screenshots attached: the training, encoding, and decoding steps.]

@daniel03c1
Owner

First of all, sorry for the late response.

The reason for using the same dataset for training and testing is that NeRF requires a separate network for each scene (dataset).
If you are asking about the train/test split, the training code automatically splits the images into separate sets.

A dataset path is not required for compression because our compression step does not involve any tuning on the data.
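
Below is a minimal, hypothetical sketch of what such an automatic split over a single scene's images can look like. The hold-out rule, function name, and image count are illustrative only and are not taken from this repository's loader.

```python
# Hypothetical sketch: split one scene's posed images into disjoint
# train/test subsets, which is why the same config (configs/chair.txt)
# drives both training and evaluation.
import numpy as np

def split_indices(num_images, test_every=8):
    """Hold out every `test_every`-th image for evaluation (illustrative rule)."""
    all_ids = np.arange(num_images)
    test_ids = all_ids[::test_every]
    train_ids = np.setdiff1d(all_ids, test_ids)
    return train_ids, test_ids

train_ids, test_ids = split_indices(100)
print(len(train_ids), len(test_ids))  # 87 train / 13 test images
```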

@asher-bit
Author

Hello, I have another question about the paper.
After using the TensoRF decomposition to obtain the three sets of planes and vectors, why do you apply the iDWT instead of the DWT?
Also, where do the wavelet coefficients shown in Figure 2 of the paper come from?
Looking forward to your response.

asher-bit reopened this Oct 17, 2023
@daniel03c1
Owner

The reason for using the iDWT is that we treat the planes themselves as wavelet coefficients.
Using the iDWT, we can project these wavelet coefficients into the spatial domain for rendering.
In other words, the planes learned during training are exactly the wavelet coefficients.
I hope this addresses your concerns.
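
As a rough illustration of this point, the sketch below (using PyWavelets, which may not be the library this repository uses) shows learned coefficients being projected into the spatial domain with a single-level iDWT before they would be sampled for rendering. The plane size and wavelet family are assumptions.

```python
# Minimal sketch: the learned plane is stored as wavelet coefficients, and an
# inverse DWT projects it back to the spatial domain for rendering.
import numpy as np
import pywt

H = W = 64
# "Learned" single-level coefficients: approximation + (horizontal, vertical, diagonal) details.
cA = np.random.randn(H // 2, W // 2).astype(np.float32)
details = tuple(np.random.randn(H // 2, W // 2).astype(np.float32) for _ in range(3))

# Project the coefficients into the spatial domain; this spatial plane is what
# would be sampled during volume rendering.
spatial_plane = pywt.idwt2((cA, details), wavelet='haar')
print(spatial_plane.shape)  # (64, 64)
```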

@asher-bit
Author

Can I understand it this way: by adding the inverse wavelet transform, the coefficients obtained when decomposing the 3D grid into planes become sparser (i.e., closer to the coefficients one would get from a wavelet transform)?

@daniel03c1
Owner

If wavelet coefficients are to be used, they must be inverse-transformed before rendering; that is why we apply the iDWT. So the planes are not merely close to wavelet coefficients, they are the wavelet coefficients.
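
The --use_mask / --mask_weight flags mentioned later in this thread suggest that sparsity comes from masking coefficients rather than from the transform itself. The sketch below is a hypothetical illustration of that idea, not the repository's actual implementation.

```python
# Hypothetical sketch: a learnable soft mask zeroes out unimportant
# wavelet-domain coefficients, and a small penalty on the mask encourages sparsity.
import torch

coeffs = torch.randn(1, 1, 64, 64, requires_grad=True)      # wavelet-domain plane
mask_logits = torch.zeros_like(coeffs, requires_grad=True)  # learnable mask parameters

soft_mask = torch.sigmoid(mask_logits)   # values in (0, 1)
masked_coeffs = coeffs * soft_mask       # masked-out entries cost (almost) nothing to store

# Sparsity regularizer, analogous in spirit to --mask_weight=1e-10.
mask_weight = 1e-10
loss_sparsity = mask_weight * soft_mask.abs().sum()
loss_sparsity.backward()
print(masked_coeffs.shape, float(loss_sparsity))
```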

@asher-bit
Author

OK, I think I understand what you mean.
I tried training without the DWT using "python3 train.py --config=configs/chair.txt --use_mask --mask_weight=1e-10 --grid_bit=8" and then ran the same compression and decompression steps, but I only get a PSNR of around 14. Is my command wrong, or is there something else going on?

@daniel03c1
Owner

Could you please check if you used the latest version by any chance?

@asher-bit
Author

[Screenshot of the repository branch being used.]
I am using this branch for testing.

@asher-bit
Author

asher-bit commented Oct 25, 2023

I solved the problem by commenting out the code that applies the inverse transform in the compression step. Thank you for your reply and for your great work!
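
For anyone hitting the same PSNR drop: if the model was trained without the wavelet transform, the planes already live in the spatial domain, so the decompression path should not apply the inverse transform to them. The sketch below is purely illustrative (the function and flag names are made up) and is not the repository's code.

```python
# Illustrative sketch: only apply the inverse DWT when the checkpoint actually
# stores wavelet-domain planes; applying iDWT to spatial-domain planes corrupts
# them, which matches the ~14 dB PSNR reported above.
import numpy as np
import pywt

def restore_plane(stored, trained_with_dwt):
    """Return a spatial-domain plane from its stored form."""
    if trained_with_dwt:
        cA, details = stored                      # stored as wavelet coefficients
        return pywt.idwt2((cA, details), 'haar')  # coefficients -> spatial plane
    return stored                                 # already spatial; leave untouched

# A model trained without the DWT stores the plane directly.
plane = np.random.randn(64, 64).astype(np.float32)
print(np.allclose(restore_plane(plane, trained_with_dwt=False), plane))  # True
```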
