First-order subpixel smoothing for density-based TO #2741
Conversation
Codecov Report

```diff
@@            Coverage Diff             @@
##           master    #2741      +/-   ##
==========================================
- Coverage   74.06%   73.50%   -0.56%
==========================================
  Files          18       18
  Lines        5395     5549     +154
==========================================
+ Hits         3996     4079      +83
- Misses       1399     1470      +71
```
I suppose a test for this feature would be based on @stevengj's proposal in #1854 (comment), but this time demonstrating first-order convergence.
I don't think we really need to test the first-order convergence, since that's not really the feature here. Rather, we should probably check the accuracy of the gradient for a few finite betas, and when β→∞. The real novelty with this PR is the ability to differentiate with any beta. So that's probably the test we should implement.
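A minimal sketch of what such a test could look like (illustrative only: `objective` below is a simple tanh stand-in and the betas and tolerances are assumptions; the real test would wrap the new smoothed projection):

```python
import numpy as np
import autograd.numpy as npa
from autograd import grad


def objective(rho, beta, eta=0.5):
    # Stand-in scalar objective; the real test would call the new smoothed
    # projection here instead of a bare tanh projection.
    return npa.sum(npa.tanh(beta * (rho - eta)) ** 2)


def check_gradient(beta, n=16, h=1e-6, seed=0):
    rng = np.random.default_rng(seed)
    rho = rng.random((n, n))
    g_ad = grad(objective)(rho, beta)  # autograd gradient w.r.t. rho

    # Spot-check a few entries against central finite differences.
    for i, j in [(0, 0), (n // 2, n // 2), (n - 1, n - 1)]:
        rp, rm = rho.copy(), rho.copy()
        rp[i, j] += h
        rm[i, j] -= h
        g_fd = (objective(rp, beta) - objective(rm, beta)) / (2 * h)
        assert abs(g_ad[i, j] - g_fd) <= 1e-4 * max(1.0, abs(g_ad[i, j]))


for beta in (8.0, 64.0, 1e6):  # a very large beta stands in for beta -> inf
    check_gradient(beta)
```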
What's the source of this error? Why does the gradient seem to be blowing up as β increases? Isn't the point of smoothing to get a finite nonzero gradient?
Right, exactly. I would have expected it to be monotonic too, but maybe there's no reason for it to be (though it should at least converge smoothly to something finite).
So during the backward pass, autograd's numpy throws some overflow warnings for a few different functions. But again, the tooling and logging capabilities for autograd are really nonexistent, so debugging will be pretty manual. If I had to guess, there are a few areas where things could blow up.
(In the docstring, I have an example which can be used to recreate the gradient norm plot above -- just sweep beta and use autograd to compute the gradient.)
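One way to localize these overflows (a debugging sketch, not part of this PR; it assumes numpy's error-state handling applies because autograd's primitives call into numpy's ufuncs) is to promote floating-point warnings to exceptions so the traceback lands on the offending VJP:

```python
import numpy as np
import autograd.numpy as npa
from autograd import grad


def distance_like(rho):
    # Toy stand-in: the forward pass is finite, but the VJP of sqrt divides
    # by zero wherever rho == 0 (analogous to a vanishing spatial gradient).
    return npa.sum(npa.sqrt(npa.abs(rho)))


rho = np.array([0.0, 0.25, 1.0])
with np.errstate(divide="raise", over="raise", invalid="raise"):
    try:
        grad(distance_like)(rho)
    except FloatingPointError as err:
        # The traceback (or pdb's post-mortem) identifies the primitive whose
        # backward pass blew up.
        print("backward pass blew up:", err)
```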
Alrighty, I think I resolved all the above issues. I just had to implement a few "double where" tricks to sanitize the backprop (I also had a bug in the effective-material part of the algorithm). The smoothed materials themselves look much better, and if we do a convergence check as β→∞, we see that the norm of the gradient converges smoothly to a finite, non-zero value.

I've also checked the pathological case where we have a uniform design field (such that the spatial gradient is zero). Normally, this would create a divide-by-zero error when computing the distance to the interface and further complicate the backward pass. But I've sanitized things enough that everything looks good.

This should be a drop-in replacement for the existing projection function. Any ideas for a test?
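For reference, the "double where" pattern generally looks like the following (a generic sketch of the trick, not this PR's actual code): a single `np.where` still evaluates the unsafe branch everywhere, so its VJP produces `inf`/`nan` entries that poison the whole gradient, and the fix is to feed the unsafe branch a sanitized input first.

```python
import autograd.numpy as npa
from autograd import grad


def naive_inverse(g_norm):
    # The backward pass is nan wherever g_norm == 0, even though the forward
    # value there is overwritten by the where() (numpy also emits warnings).
    return npa.sum(npa.where(g_norm > 0, 1.0 / g_norm, 0.0))


def safe_inverse(g_norm):
    nonzero = g_norm > 0
    g_safe = npa.where(nonzero, g_norm, 1.0)  # first where: sanitize the input
    return npa.sum(npa.where(nonzero, 1.0 / g_safe, 0.0))  # second where: select


x = npa.array([0.0, 0.5, 2.0])
print(grad(naive_inverse)(x))  # [nan, -4., -0.25]
print(grad(safe_inverse)(x))   # [0., -4., -0.25] -- finite everywhere
```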
Here we implement first-order subpixel smoothing for density-based TO. This approach allows us to treat the density formulation as a level set, such that the user can now continuously increase β→∞.
This approach is 100% in Python and leverages autograd to do all of the backpropagation. It's very straightforward, but currently only works for 2D degrees of freedom. Adding capability for 1D and 3D is trivial -- we just need to ensure the filters work in those dimensions, and we need to add the right fill-factor kernel (analytically derived by assuming the smoothing kernel is a sphere). This is a "simple version" of what's implemented in #1951. Here's an example:
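A self-contained sketch of the idea (illustrative only -- the function names, the periodic-boundary spatial gradient, and the spherical fill-factor formula below are assumptions made for the sake of the example, not this PR's implementation):

```python
import autograd.numpy as npa
from autograd import grad


def tanh_projection(rho, beta, eta):
    # Standard density projection used in density-based TO.
    return (npa.tanh(beta * eta) + npa.tanh(beta * (rho - eta))) / (
        npa.tanh(beta * eta) + npa.tanh(beta * (1.0 - eta))
    )


def smoothed_projection(rho_filtered, beta, eta, resolution):
    """First-order subpixel smoothing of a projected 2D density (sketch)."""
    dx = 1.0 / resolution

    # Spatial gradient of the filtered field, treating it as a level set
    # (central differences with periodic wrap, for brevity).
    gx = (npa.roll(rho_filtered, -1, axis=1) - npa.roll(rho_filtered, 1, axis=1)) / (2 * dx)
    gy = (npa.roll(rho_filtered, -1, axis=0) - npa.roll(rho_filtered, 1, axis=0)) / (2 * dx)

    # "Double where" sanitization: never take the sqrt of (or divide by) zero
    # where the design field is locally uniform, so the backward pass stays finite.
    grad_sq = gx**2 + gy**2
    nonzero = grad_sq > 0
    grad_norm = npa.sqrt(npa.where(nonzero, grad_sq, 1.0))
    # Signed distance from each pixel center to the eta isocontour.
    d = npa.where(nonzero, (eta - rho_filtered) / grad_norm, 0.0)

    # Fill factor of a spherical smoothing kernel of radius dx/2 cut by a
    # planar interface at signed distance d (assumed analytic kernel).
    r = dx / 2.0
    u = npa.clip(d, -r, r) / r
    fill = 0.5 - 0.75 * u + 0.25 * u**3

    if npa.isinf(beta):
        projected = npa.where(rho_filtered > eta, 1.0, 0.0)  # Heaviside limit
    else:
        projected = tanh_projection(rho_filtered, beta, eta)

    # Pixels straddling the interface get the subpixel-averaged value; in this
    # sketch that is simply the fill factor (its beta -> infinity limit).
    near_interface = nonzero & (npa.abs(d) < r)
    return npa.where(near_interface, fill, projected)


# The gradient stays finite and nonzero even at beta = inf:
rho = npa.linspace(0.0, 1.0, 64 * 64).reshape(64, 64)
objective = lambda x: npa.sum(
    smoothed_projection(x, beta=npa.inf, eta=0.5, resolution=64) ** 2
)
print(npa.linalg.norm(grad(objective)(rho)))
```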
While the approach works well, it is still sensitive to numerical roundoff errors. Autograd doesn't really have any tooling in place to track where things are breaking down (particularly in the backward pass), so we'll have to get creative. A plot of the norm of the gradient shows it breaking down due to numerical error with increasing β.
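The kind of sweep that produces such a plot might look like this (illustrative; `projection_fn` is a simple stand-in for whatever projection is being probed):

```python
import numpy as np
import autograd.numpy as npa
from autograd import grad


def projection_fn(rho, beta, eta=0.5):
    # Stand-in; swap in the smoothed projection under test. For this naive
    # tanh stand-in, numpy may emit overflow warnings in the backward pass at
    # the largest betas -- the kind of breakdown described above.
    return npa.tanh(beta * (rho - eta))


rho = np.random.default_rng(0).random((64, 64))
betas = np.logspace(0, 4, 9)
norms = [
    np.linalg.norm(grad(lambda r: npa.sum(projection_fn(r, b) ** 2))(rho))
    for b in betas
]
# Plot norms vs. betas on a log axis; with a well-sanitized backward pass the
# curve should level off at a finite, nonzero value instead of blowing up.
```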
Also, still needs a test.