place fourier adjoint sources collectively for each frequency #1821
Conversation
How does this PR relate to #1547? Does that PR need to be merged first? Are you locally testing this PR on top of that PR?
The PR in #1547 could improve the accuracy of this PR in some circumstances.
No. This PR can be merged without the PR in #1547, but the PR in #1547 needs this PR to be merged first.
This PR has been tested separately from that PR. I also combined the two PRs, but the combined PR does not currently work, as described in #1547.
It looks like it's passing all tests online? Perhaps you need to rebase your local build to ensure they pass locally too.
Locally, I ran only the built-in tests.
python/adjoint/objective.py
Outdated
thickness_indicator = self.sim.fields.indicate_thick_dims(self.volume.swigobj)
min_max_corners = self.sim.fields.get_corners(self._monitor.swigobj, self.component)  # ivec values of the corners
raw_size = (min_max_corners[3:6]-min_max_corners[0:3])/2+np.ones(3)
true_size = np.array([raw_size[i]*(thickness_indicator[i]==1)+(thickness_indicator[i]==0) for i in range(3)],dtype=int)  # monitor size
It would probably be a good idea to refactor this piece into an external function call since we'll be using it in some of the other objective quantities too (eventually). Something similar to `ObjectiveQuantity._adj_src_scale()`. Maybe `ObjectiveQuantity._get_dft_src_shape()`?
Perhaps a name like `_dft_monitor_size` sounds more pertinent. The function would be

def _dft_monitor_size(self, min_max_corners):
    thickness_indicator = self.sim.fields.indicate_thick_dims(self.volume.swigobj)
    raw_size = (min_max_corners[3:6]-min_max_corners[0:3])/2+np.ones(3)
    true_size = np.array([raw_size[i]*(thickness_indicator[i]==1)+(thickness_indicator[i]==0) for i in range(3)],dtype=int)
    return true_size

Line 252, which computes `min_max_corners`, is not encapsulated here, because `min_max_corners` is needed by `fourier_sourcedata` later.
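For context, the size logic in this helper can be checked in isolation. Below is a minimal sketch; `indicate_thick_dims` and `get_corners` are Meep internals, so they are stubbed here with plain arrays, and the specific numbers are hypothetical:

```python
import numpy as np

# Stub inputs (hypothetical): a 0/1 "thick dimension" flag per axis and
# the concatenated [min_ivec, max_ivec] corners of a DFT monitor.
thickness_indicator = np.array([1, 1, 0])         # z is a flat (thin) dim
min_max_corners = np.array([0, 0, 4, 20, 10, 4])  # [min x,y,z, max x,y,z]

# ivec coordinates advance in half-pixel steps, hence the division by 2.
raw_size = (min_max_corners[3:6] - min_max_corners[0:3]) // 2 + np.ones(3, dtype=int)

# A dimension's extent counts only if it is thick; a thin dimension
# contributes a size of 1 pixel.
true_size = np.where(thickness_indicator == 1, raw_size, 1)
assert (true_size == [11, 6, 1]).all()
```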
Codecov Report. Attention: Patch coverage is

@@           Coverage Diff           @@
##           master    #1821   +/-   ##
==========================================
- Coverage   73.14%   73.09%   -0.05%
==========================================
  Files          17       17
  Lines        4923     4918       -5
==========================================
- Hits         3601     3595       -6
- Misses       1322     1323       +1
The failing test is the same failing test described here. For reference, here are the values comparing the adjoint gradients with the finite differences:
The finite-difference gradients seem to be off (again).
It seems that this issue has been resolved. Should I rebuild this PR on the current master branch and upload again?
I retriggered the tests, but they are still failing. It's possible that the fix implemented in #1780 wasn't sufficient to resolve the issue with the new DFT adjoint sources. In that case, try building with single precision locally and play with the finite-difference tolerance to see if that really is the issue (as described above). Make sure you rebase onto master when building locally.
I rebased this PR onto the current master branch locally with For comparison, I also compiled this PR without In both cases, Python 3.7.3 was used. The finite-difference gradients still seem to be off.
The finite-difference gradients are a bit flaky, particularly when single precision is enabled and the new subpixel smoothing routine is turned on. One issue we discussed was that the new default for Maybe try filtering the "random" design field before running the finite-difference (and corresponding adjoint) simulations. In other words, change the line

meep/python/tests/test_adjoint_solver.py, line 495 in f9bd757

to something like

S12_unperturbed = forward_simulation_damping(mpa.conic_filter(p, <<put parameters here...>>), frequencies)

and the same for the second part of the finite-difference calculation:

meep/python/tests/test_adjoint_solver.py, line 502 in f9bd757
Some other things we can try:
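For readers unfamiliar with the filtering suggestion above: `mpa.conic_filter` is Meep's real 2D routine, but its effect can be illustrated with a simplified 1D stand-in (the kernel construction below is only a sketch, not Meep's implementation):

```python
import numpy as np

# Simplified 1D stand-in for a conic (cone-shaped) density filter.
def conic_filter_1d(p, radius_px):
    r = np.arange(-radius_px, radius_px + 1)
    kernel = np.maximum(0.0, 1.0 - np.abs(r) / radius_px)  # linear cone
    kernel /= kernel.sum()                                  # preserve the mean
    return np.convolve(p, kernel, mode="same")

p = np.random.default_rng(1).random(64)   # "random" design field
p_smooth = conic_filter_1d(p, 4)

# Filtering suppresses the sub-pixel wiggles that make the
# finite-difference check flaky: the total variation drops.
assert np.abs(np.diff(p_smooth)).sum() < np.abs(np.diff(p)).sum()
```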
The failing check at test_damping was not caused by this PR, because

If we filter the design field in test_damping as follows, the check will be successful on the previous master branch as well as in this PR based on it.

def test_damping(self):
    print("*** TESTING CONDUCTIVITIES ***")
    for frequencies in [[1/1.58, fcen, 1/1.53]]:
        ## filter the design field
        filter_radius = 0.12
        p_filtered = mpa.conic_filter(
            np.reshape(p,(Nx,Ny)),filter_radius,design_region_size.x,design_region_size.y,design_region_resolution).flatten()
        p_plus_dp_filtered = mpa.conic_filter(
            np.reshape(p+dp,(Nx,Ny)),filter_radius,design_region_size.x,design_region_size.y,design_region_resolution).flatten()

        ## compute gradient using adjoint solver
        adjsol_obj, adjsol_grad = adjoint_solver_damping(p_filtered, frequencies)

        ## compute unperturbed S12
        S12_unperturbed = forward_simulation_damping(p_filtered, frequencies)

        ## compare objective results
        print("S12 -- adjoint solver: {}, traditional simulation: {}".format(adjsol_obj,S12_unperturbed))
        self.assertClose(adjsol_obj,S12_unperturbed,epsilon=1e-6)

        ## compute perturbed S12
        S12_perturbed = forward_simulation_damping(p_plus_dp_filtered, frequencies)

        ## compare gradients
        if adjsol_grad.ndim < 2:
            adjsol_grad = np.expand_dims(adjsol_grad,axis=1)
        adj_scale = (dp[None,:]@adjsol_grad).flatten()
        fd_grad = S12_perturbed-S12_unperturbed
        print("Directional derivative -- adjoint solver: {}, FD: {}".format(adj_scale,fd_grad))
        tol = 0.06 if mp.is_single_precision() else 0.03
        self.assertClose(adj_scale,fd_grad,epsilon=tol)

However, doing so leads to a failing check for double precision. Moreover, the current master branch also fails the checks in test_adjoint_solver.py, which may have a different cause.
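The comparison at the end of that test is the standard directional-derivative check. A toy version, with a quadratic stand-in objective rather than Meep's S12, makes the relationship explicit:

```python
import numpy as np

# Toy objective standing in for S12 (hypothetical, not Meep's physics).
def f(p):
    return np.sum(p ** 2)

# Analytic gradient, standing in for the adjoint-computed gradient.
def grad_f(p):
    return 2 * p

rng = np.random.default_rng(0)
p = rng.random(8)
dp = 1e-6 * rng.random(8)   # small perturbation of the design field

adj_scale = dp @ grad_f(p)       # adjoint directional derivative
fd_grad = f(p + dp) - f(p)       # finite-difference estimate

# For a smooth objective and small dp these agree to first order.
assert np.isclose(adj_scale, fd_grad, rtol=1e-4)
```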
Hmm, I'm not sure I follow. The current master branch appears to be passing all tests (including the recent addition you mention). Are you saying that when you build
Can you be more specific? Again, |
Yes, I built the master branch on a local machine with
I need to correct this statement. The reason should be the same, because the message given by the failing check at
This failing check at |
force-pushed from 94533d9 to c41747a
Even after the force push it still seems to be failing. Maybe take a closer look at exactly what's failing, as it's most likely due to something new in this PR (and not an out-of-date conflicting commit). (Note: you can download the Python logs from the CI to see exactly where things are failing.)
In the log I saw the failure happened at
Yes, the details are here. Note that the
Also, the fields are blowing up during the complex
The version of Python seems to matter a lot. I used Python 3.7.3 locally, and there were no failing tests of the
It seems that the diverging field in the tests of complex fields is also caused by the
I do not think this PR impacts
You're right, I misread the log. The eigenmode tests seem to be unaffected by this (as expected).
Are you sure it's the Python version and not the single precision that's causing the issue? It seems odd that the Python version would affect anything in this PR... It's especially odd that the adjoint run for the complex case blows up...
I am not sure. I guess that the Python version matters, only because those failures of
Diverging results also appeared previously, as mentioned in #1676. In an example there, the problem also appears without using the adjoint solver. I do not know whether these problems have the same cause.
Hmm, I'm guessing your local branch doesn't line up with this PR+
I just tested this PR locally again based on the current master branch. The check now fails at the same place as the most recent online check. By the way, I also checked the current master branch locally with single precision again, and saw that it still fails
In addition to the diverging fields and the inconsistency between the gradients, the adjoint gradients may differ between runs. These problems happen only when this PR is compiled with The precision issue has already been handled in
Can you provide an MWE that illustrates the problem? Something simpler than the entire test suite. I can then pull your branch and look into it locally.
An example is as follows.
With double precision, the results are
With single precision, the adjoint gradients vary among different runs and do not match the finite-difference gradients.
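One plausible contributor, offered as an illustration rather than a diagnosis of this PR: float32 cannot represent perturbations much smaller than its machine epsilon, so finite-difference steps that are safe in double precision can vanish or turn into noise in single precision.

```python
import numpy as np

# Machine epsilon of float32 is ~1.19e-7, so a 1e-8 perturbation of a
# field value near 1.0 is lost entirely in single precision, while
# double precision (eps ~2.2e-16) keeps it.
assert np.float32(1.0) + np.float32(1e-8) == np.float32(1.0)
assert np.float64(1.0) + np.float64(1e-8) != np.float64(1.0)
```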
This will happen if not all of the processes are calling
frequencies. Because of linearity, we can sum across the
second frequency dimension to calculate a frequency
scale factor for each point (rather than a scale vector).
'''
Keep this comment, it's helpful!
@smartalecH, could you explain what is going on here? Why can you sum over frequency if the objective function is vector-valued?
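A toy numerical check of the linearity claim in the docstring (array names and shapes here are hypothetical, not Meep's actual data structures): summing weighted per-frequency source amplitudes before placement gives the same total as placing one source per frequency and summing their contributions afterwards.

```python
import numpy as np

rng = np.random.default_rng(0)
npts, nfreq = 5, 3
# hypothetical per-point, per-frequency adjoint scale factors
scale = rng.random((npts, nfreq)) + 1j * rng.random((npts, nfreq))
# hypothetical per-frequency weights coming from the objective function
weights = rng.random(nfreq)

# one source per frequency, contributions summed afterwards...
per_frequency = (scale * weights[None, :]).sum(axis=1)
# ...equals a single source with a pre-summed scale factor per point
collapsed = scale @ weights

assert np.allclose(per_frequency, collapsed)
```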
volume *where = &v; // use full volume of fields
bufsz = 0;
min_corner = gv.round_vec(where->get_max_corner()) + one_ivec(gv.dim);
max_corner = gv.round_vec(where->get_min_corner()) - one_ivec(gv.dim);
You can just use `v.get_max_corner()` etcetera and eliminate `where` completely?
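Independent of that simplification, note the initialization pattern in the hunk: the running min corner starts at the volume's max corner and vice versa. That is the usual "inverted bounds" idiom, sketched here with plain Python scalars (the ivec/volume types are Meep's):

```python
import math

# Inverted initial bounds: start the running minimum at +inf and the
# running maximum at -inf, so the first value examined replaces both.
min_c, max_c = math.inf, -math.inf
for p in (5, -2, 9, 3):
    min_c = min(min_c, p)
    max_c = max(max_c, p)

assert (min_c, max_c) == (-2, 9)
```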
… and tweak calculate_fd_gradient
… and some other minor changes
force-pushed from b8b2c86 to 5a39da6
Looks like this PR, particularly the contents of
The main features of this PR are
`True` or `False`. The default option is `False`.
This PR passes all the built-in tests except test_adjoint_jax.py, test_binary_partition_utils.py, and test_chunk_balancer.py, as does the current master branch.