
First-order subpixel smoothing for density-based TO #2741

Merged: 3 commits merged into master from subpixel_levelset_simple on Jan 18, 2024

Conversation

@smartalecH (Collaborator) commented Dec 15, 2023

Here we implement first-order subpixel smoothing for density-based TO. This approach allows us to treat the density formulation as a level set, so the user can now continuously increase β all the way to ∞.

This approach is 100% Python and leverages autograd to do all of the backpropagation. It's very straightforward, but it currently only works for 2D degrees of freedom. Adding 1D and 3D support is trivial: we just need to ensure the filters work in those dimensions and add the right fill-factor kernel (analytically derived by assuming the smoothing kernel is a sphere). This is a "simple version" of what's implemented in #1951.
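Roughly, the idea (in the β→∞ limit) is: compute the distance from each pixel to the η level set of the filtered field, then replace the hard threshold with the fill fraction of a smoothing kernel cut by that interface. Here is a minimal, illustrative sketch, not the PR's actual code: the function name, the half-pixel smoothing radius, the finite-difference gradient, and the circular-kernel fill-fraction formula are all assumptions.

```python
import autograd.numpy as npa


def smoothed_projection_sketch(rho, eta, resolution):
    """Illustrative beta -> infinity limit of first-order subpixel smoothing.

    rho:        2D filtered design field with values in [0, 1]
    eta:        threshold defining the level set
    resolution: design-grid resolution, used to set the smoothing radius
    """
    R = 1.0 / (2.0 * resolution)  # assumed smoothing radius: half a pixel

    # Spatial gradient of the filtered field (central differences, assuming
    # periodic boundaries for simplicity).
    h = 1.0 / resolution
    dx = (npa.roll(rho, -1, axis=0) - npa.roll(rho, 1, axis=0)) / (2 * h)
    dy = (npa.roll(rho, -1, axis=1) - npa.roll(rho, 1, axis=1)) / (2 * h)
    grad_norm = npa.sqrt(dx * dx + dy * dy) + 1e-12  # guard the 0/0 case

    # Signed distance from each pixel center to the eta level set,
    # normalized by the smoothing radius and clipped to [-1, 1].
    x = npa.clip((rho - eta) / (grad_norm * R), -1.0, 1.0)

    # Fill fraction of a circular kernel of radius R cut by the interface
    # (the "arccos and sqrt business" mentioned later in this thread).
    # NOTE: the derivatives of arccos and sqrt blow up at |x| = 1, which is
    # one source of the backward-pass trouble discussed below.
    fill = 1.0 - (npa.arccos(x) - x * npa.sqrt(1.0 - x * x)) / npa.pi

    # In the beta -> infinity limit the smoothed projection is just the fill
    # fraction: 0 or 1 away from interfaces, a subpixel average across them.
    return fill
```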

Here's an example:

[image: example result]

While the approach works well, it is still sensitive to numerical roundoff errors. Autograd doesn't really have any tooling in place to track down where things are breaking (particularly in the backward pass), so we'll have to get creative. Here's a plot of the norm of the gradient, which breaks down due to numerical error as β increases:

[image: gradient norm vs. β]

Also, still needs a test.

@codecov-commenter commented Dec 15, 2023

Codecov Report

Attention: 21 lines in your changes are missing coverage. Please review.

Comparison is base (295fa63) 74.06% compared to head (7fb6c6b) 73.50%.
Report is 8 commits behind head on master.


Additional details and impacted files
@@            Coverage Diff             @@
##           master    #2741      +/-   ##
==========================================
- Coverage   74.06%   73.50%   -0.56%     
==========================================
  Files          18       18              
  Lines        5395     5549     +154     
==========================================
+ Hits         3996     4079      +83     
- Misses       1399     1470      +71     
Files                       Coverage Δ
python/adjoint/filters.py   69.76% <4.54%> (-7.44%) ⬇️

... and 1 file with indirect coverage changes

@oskooi (Collaborator) commented Dec 16, 2023

> Also, still needs a test.

I suppose a test for this feature would be based on @stevengj's proposal in #1854 (comment) but this time demonstrating first-order convergence.

@smartalecH (Collaborator, Author) commented:

> but this time demonstrating first-order convergence.

I don't think we really need to test first-order convergence, since that's not really the feature here. Rather, we should check the accuracy of the gradient for a few finite betas and for beta = np.inf. Maybe also a few tests that check for correctness, type sanity, etc.

The real novelty of this PR is the ability to differentiate at any beta. So that's probably the test we should implement.
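For what it's worth, a gradient-accuracy check along those lines might look like the sketch below. Everything here is hypothetical: `smoothed_projection` stands in for whatever the new function is actually named in filters.py, and the objective is a toy scalar function rather than a real meep simulation.

```python
import numpy as np
import autograd.numpy as npa
from autograd import grad

from meep.adjoint import smoothed_projection  # hypothetical name/signature

eta, resolution = 0.5, 20


def objective(rho, beta):
    # Toy scalar objective built on top of the smoothed projection.
    return npa.sum(smoothed_projection(rho, beta, eta, resolution) ** 2)


rng = np.random.RandomState(0)
rho = rng.rand(32, 32)
drho = 1e-6 * rng.rand(32, 32)

for beta in [8.0, 64.0, 512.0, np.inf]:
    g = grad(objective)(rho, beta)
    assert np.all(np.isfinite(g)), f"non-finite gradient at beta={beta}"

    if np.isfinite(beta):
        # Directional derivative vs. a centered finite difference.
        fd = (objective(rho + drho, beta) - objective(rho - drho, beta)) / 2
        np.testing.assert_allclose(np.sum(g * drho), fd, rtol=1e-3)
```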

@stevengj (Collaborator) commented Dec 17, 2023

> So we'll have to get creative. Here's a plot that shows the norm of the gradient which breaks down due to numerical error with increasing β:

What's the source of this error?

Why does the gradient seem to be blowing up as β increases? Isn't the point of smoothing to get a finite nonzero gradient?

@smartalecH (Collaborator, Author) commented Dec 17, 2023

> Isn't the point of smoothing to get a finite nonzero gradient?

Right, exactly. I would have expected it to be monotonic too, but maybe there's no reason for it to be monotonic (it should at least converge smoothly to something finite).

> What's the source of this error?

So during the backward pass, autograd's numpy throws overflow warnings from a few different functions. But again, autograd's tooling and logging capabilities are essentially nonexistent, so debugging will be pretty manual.
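One low-tech way to localize those overflows (standard NumPy machinery, not anything from this PR): promote floating-point warnings to exceptions, so the run dies with a traceback pointing at the offending primitive. A toy example:

```python
import numpy as np
import autograd.numpy as npa
from autograd import grad


def f(x):
    # Toy function whose evaluation overflows for large inputs.
    return npa.sum(npa.exp(x) ** 2)


# autograd.numpy dispatches to ordinary NumPy ufuncs, so errstate applies to
# both the forward and backward passes; the overflow now raises an exception
# with a traceback instead of a silent warning.
with np.errstate(over="raise", invalid="raise"):
    grad(f)(np.full(4, 400.0))  # raises FloatingPointError
```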

But if I had to guess, here are some areas where things could blow up:

  • Computing d, the distance to an interface, which requires dividing by the norm of the spatial gradient.
  • The meep tanh_projection function takes, as the argument to the tanh, the product of beta (a big number) with other small numbers; something could be going wrong downstream inside the tanh (particularly in the backward pass).
  • The fill-factor formula has some ugly arccos and sqrt business. We use a numpy where to filter out domain violations, but still... (see the sketch after this list).
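About that last bullet: the classic autograd/NumPy failure mode is that np.where evaluates (and autograd differentiates) both branches, so a sqrt or arccos that is merely masked out by the where still injects NaN/Inf into the backward pass. The standard fix is the "double where" pattern. A generic sketch, not the PR's code:

```python
import numpy as np
import autograd.numpy as npa
from autograd import grad


def naive_fill(x):
    # Looks safe, but the VJP of sqrt is still evaluated where 1 - x**2 <= 0,
    # so the gradient is NaN at |x| >= 1 even though the forward value is fine.
    return npa.sum(npa.where(npa.abs(x) < 1.0, npa.sqrt(1.0 - x**2), 0.0))


def safe_fill(x):
    # "Double where": also sanitize the argument *inside* the branch so the
    # discarded branch is differentiated at a harmless value.
    inside = npa.abs(x) < 1.0
    x_safe = npa.where(inside, x, 0.0)
    return npa.sum(npa.where(inside, npa.sqrt(1.0 - x_safe**2), 0.0))


x = np.array([0.5, 1.0, 2.0])
print(grad(naive_fill)(x))  # [-0.577,  nan,  nan]
print(grad(safe_fill)(x))   # [-0.577,  0.,   0. ]
```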

(The docstring includes an example that can be used to recreate the gradient-norm plot above: just sweep beta and use autograd's grad function.)

@smartalecH (Collaborator, Author) commented:

Alrighty, I think I've resolved all of the above issues. I just had to implement a few "double where" tricks to sanitize the backprop (there was also a bug in the effective-material part of the algorithm). The smoothed materials themselves look much better:

[image: smoothed materials]

And if we do a convergence check as β→∞, we see that the norm of the gradient converges smoothly to a finite, non-zero value:

[image: gradient-norm convergence vs. β]

I've also checked the pathological case of a uniform design field (where the spatial gradient is zero). Normally this would produce a divide-by-zero error when computing the distance to the interface and further complicate the backward pass, but I've sanitized things enough that everything looks good.

This should be a drop-in replacement for the tanh_projection() function, except that it requires an additional resolution argument (used to compute the spatial gradient).
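Usage would then look roughly like the sketch below. The name `smoothed_projection` and its signature are assumptions (check python/adjoint/filters.py for what was actually merged); the grid sizes and physical dimensions are just illustrative.

```python
import numpy as np
import meep.adjoint as mpa

resolution = 50                 # design-grid resolution (pixels per um)
design_width = 2.0              # um, illustrative
design_height = 2.0             # um, illustrative
filter_radius = 0.2             # um, illustrative
beta, eta = np.inf, 0.5         # beta = np.inf is now allowed

rho = np.random.rand(101, 101)  # raw design variables (grid size illustrative)

rho_filtered = mpa.conic_filter(
    rho, filter_radius, design_width, design_height, resolution
)

# Before: projected = mpa.tanh_projection(rho_filtered, beta, eta)
# After (assumed name/signature; the extra resolution argument is what the
# smoothed projection needs to compute the spatial gradient):
projected = mpa.smoothed_projection(rho_filtered, beta, eta, resolution)
```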

Any ideas for a test?

@oskooi merged commit 04fa305 into master on Jan 18, 2024 (10 checks passed).
@oskooi deleted the subpixel_levelset_simple branch on January 18, 2024 at 20:58.