
Inquiry Regarding Runtime Discrepancies in Splatam #117

Open
LeeBY68 opened this issue Jun 8, 2024 · 4 comments
Labels
question Further information is requested

Comments


LeeBY68 commented Jun 8, 2024

First of all, thank you for open-sourcing your excellent work, Splatam. I have a question regarding the runtime mentioned in your paper, specifically Table 6.
I noticed that the reported runtimes in your paper (Nvidia RTX 3080 Ti) are:
• 25ms (Tracking/Iteration)
• 24ms (Mapping/Iteration)
• 1.00s (Tracking/Frame)
• 1.44s (Mapping/Frame)

However, the runtimes I am experiencing on my server (Nvidia RTX 4090) are:
• 44.99ms (Tracking/Iteration)
• 51.01ms (Mapping/Iteration)
• 1.80s (Tracking/Frame)
• 3.06s (Mapping/Frame)

These results are based on settings I changed to save time:

  1. in config (configs/replica/splatam.py):
    use_wandb=False,

  2. in the main function:
    I commented out the code containing report_progress
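For reference, the per-iteration and per-frame averages above can be measured with a small timing harness along these lines (a hypothetical sketch with a stand-in workload, not SplaTAM's actual tracking loop):

```python
import time

def timed(fn, *args, **kwargs):
    """Run fn once and return (result, elapsed seconds)."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    return result, time.perf_counter() - start

# Hypothetical stand-in for one optimization step; SplaTAM's real
# tracking/mapping iteration would go here instead.
def fake_iteration():
    return sum(i * i for i in range(10_000))

num_iters = 40  # e.g. tracking iterations per frame
iter_times = [timed(fake_iteration)[1] for _ in range(num_iters)]

avg_iter_ms = 1000 * sum(iter_times) / len(iter_times)
frame_s = sum(iter_times)
print(f"Average Tracking/Iteration Time: {avg_iter_ms:.3f} ms")
print(f"Tracking/Frame Time: {frame_s:.3f} s")
```

Averaging iteration times this way (wall-clock around the whole step, including data transfer) is what makes I/O or transfer overhead show up in the per-iteration numbers.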

But even with the more powerful GPU (4090 > 3080 Ti) and these changed settings, the time cost is still higher than in the paper.
Could you please advise if there are any changes needed or if you have any suggestions to help me replicate the runtimes reported in your paper?

Thank you very much!

Contributor

Nik-V9 commented Jun 8, 2024

Hi, thanks for trying out our code! The numbers seem inconsistent with our testing on a local machine containing a 4090 GPU.

One reason that comes to mind is disk read/write speed or CPU-to-GPU transfer overhead. It likely has something to do with the I/O speed on your server.
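As a quick check of whether dataset reads are the bottleneck, here is a minimal sketch (`measure_read_throughput` is a hypothetical helper, not part of SplaTAM) that times raw sequential reads of a file from Python:

```python
import time

def measure_read_throughput(path, block_size=1 << 20):
    """Sequentially read `path` in `block_size` chunks and return MB/s."""
    total = 0
    start = time.perf_counter()
    with open(path, "rb") as f:
        while chunk := f.read(block_size):
            total += len(chunk)
    elapsed = time.perf_counter() - start
    return (total / 1e6) / elapsed if elapsed > 0 else float("inf")
```

Running this on a few Replica RGB-D frames gives a rough sense of whether file reads, as opposed to GPU compute, dominate the per-frame time.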

I would also recommend taking a look at this comment and screenshot from our 3080 Ti machine: #32 (comment)

@Nik-V9 Nik-V9 added the question Further information is requested label Jun 8, 2024
Author

LeeBY68 commented Jun 9, 2024

Hi @Nik-V9,

1. The following is my experiment output using an RTX 4090, with the original code and one configuration change in configs/replica/splatam.py:
use_wandb=False
(screenshot: screen_splatam_original)
The running time results after completing the entire replica room0 sequence are as follows:

Average Tracking/Iteration Time: 44.273851322137816 ms
Average Tracking/Frame Time: 1.7709595675468446 s
Average Mapping/Iteration Time: 50.07383607228597 ms
Average Mapping/Frame Time: 3.004946891307831 s

My per-iteration output appears consistent with your screenshot; however, the overall time is about double your reported time.

2. I also tested my server speed:
sudo hdparm -Tt /dev/sda

/dev/sda:
 Timing cached reads:   40616 MB in  1.99 seconds = 20395.08 MB/sec
 Timing buffered disk reads: 1346 MB in  3.00 seconds = 448.55 MB/sec

dd if=/dev/zero of=testfile bs=1G count=1 oflag=dsync

1+0 records in
1+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 1.96667 s, 546 MB/s

3. Do you have any suggestions on addressing this time usage issue? It seems my server is functioning properly, but the time usage is double the results reported in the paper.
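A back-of-the-envelope check (assuming uncompressed 8-bit RGB and 16-bit depth at 1200x680) suggests the measured disk speed is far from being the bottleneck:

```python
# Rough per-frame data volume for one Replica RGB-D frame.
width, height = 1200, 680
rgb_bytes = width * height * 3    # 8-bit RGB
depth_bytes = width * height * 2  # 16-bit depth
frame_mb = (rgb_bytes + depth_bytes) / 1e6

# Buffered disk read speed measured with hdparm above.
disk_mb_per_s = 448.55
load_time_ms = 1000 * frame_mb / disk_mb_per_s
print(f"~{frame_mb:.2f} MB/frame -> ~{load_time_ms:.1f} ms to read from disk")
```

At roughly 9 ms per frame, raw disk reads alone would not account for a gap of over a second per frame, which would be consistent with the server functioning properly.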

Author

LeeBY68 commented Jun 9, 2024

Another question I have is about the image size of the original Replica dataset sequences. The Replica images are 1200x680 pixels, but the paper (in the "Runtime Comparison" section) states that each iteration of your approach renders a full 1200x980-pixel image (approximately 1.2 million pixels) to apply the loss for both tracking and mapping.
Is there any processing done on the Replica images?

Contributor

Nik-V9 commented Jun 19, 2024

Thanks for testing the server I/O. I'll check the overall time on my end using an RTX 4090. Not sure why there is a 2x discrepancy.

Thanks for flagging the typo in the paper. I'll update the paper when I get a chance.
