diff --git a/doc/docs/Python_Tutorials/Adjoint_Solver.md b/doc/docs/Python_Tutorials/Adjoint_Solver.md
index f18e6d323..ccb7bc9cc 100644
--- a/doc/docs/Python_Tutorials/Adjoint_Solver.md
+++ b/doc/docs/Python_Tutorials/Adjoint_Solver.md
@@ -1201,7 +1201,6 @@ The stack consists of two materials of alternating refractive index $n_A$ = 1.3
 A reference design to compare our results against is a [quarter-wavelength stack](https://en.wikipedia.org/wiki/Distributed_Bragg_reflector). The mean wavelength of $\lambda_1$ and $\lambda_2$ is $\lambda$ = 1.0 μm, for which the quarter-wavelength layer thicknesses are $\lambda / (4 n_A)$ = 0.19 μm and $\lambda / (4 n_B)$ = 0.25 μm. These values are used to specify upper and lower bounds for the layer thicknesses. This is important given the non-convex nature of this particular design problem.
-
 The worst-case optimization is implemented using the [epigraph formulation](https://nlopt.readthedocs.io/en/latest/NLopt_Introduction/#equivalent-formulations-of-optimization-problems) and the Conservative Convex Separable Approximation (CCSA) algorithm. The optimization is run ten times with random initial designs, from which the local optimum with the smallest objective value is chosen. For the run producing the best design, a plot of the objective function vs. iteration number for the two wavelengths is shown below, along with the epigraph variable. These results demonstrate that the stack is being optimized for $\lambda$ = 0.95 μm and *not* $\lambda$ = 1.05 μm; we observed this trend for various local optima. The optimizer converges to this local optimum in 13 iterations. The nine layer thicknesses (μm) of this design are: 0.1822, 0.2369, 0.1822, 0.2369, 0.1822, 0.2369, 0.1910, 0.2504, 0.1931. Since these are 1D simulations, the runtime for each design iteration is relatively fast (about ten seconds using one core of an Intel Xeon i7-7700K CPU @ 4.20GHz).

 In some cases (not shown), the optimizer may take some steps that make things *worse* during the early iterations. This is a common phenomenon in many algorithms: initially, the optimizer takes steps that are too large and then has to backtrack. After a few iterations, the algorithm has more information about a "good" step size.
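For context on the hunk above: the epigraph formulation linked in the NLopt documentation converts the worst-case (minimax) problem over the two wavelengths into a smooth constrained problem by introducing a dummy variable $t$. The symbols below are not from the tutorial text: $\mathbf{x}$ denotes the vector of layer thicknesses and $f_{\lambda_k}(\mathbf{x})$ the per-wavelength figure of merit, assumed here to be minimized (for a maximization problem the inequalities are flipped):

$$
\min_{\mathbf{x},\, t} \; t \quad \text{subject to} \quad f_{\lambda_k}(\mathbf{x}) - t \le 0, \qquad k = 1, 2.
$$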
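A minimal, self-contained sketch of this plumbing with the NLopt Python bindings is shown below. It is not the tutorial's script: `reflectance_and_gradient` is a hypothetical analytic placeholder standing in for the Meep adjoint-solver evaluation, the min-max sign convention is assumed, and the bounds mirror the quarter-wave values quoted in the hunk.

```py
# Sketch only: epigraph (minimax) formulation with NLopt's CCSA algorithm.
# `reflectance_and_gradient` is a hypothetical placeholder for the Meep
# adjoint-solver objective and gradient; everything else uses the standard
# nlopt Python API so the script runs standalone.
import nlopt
import numpy as np

num_layers = 9           # layer thicknesses are the design variables
lam = 1.0                # mean wavelength (μm)
n_A, n_B = 1.3, 1.0      # assumed refractive indices of the alternating layers

# quarter-wave thicknesses, used here as bounds on the design variables
t_min = lam / (4 * n_A)  # ≈ 0.19 μm
t_max = lam / (4 * n_B)  # = 0.25 μm

def reflectance_and_gradient(x, wavelength):
    """Hypothetical stand-in for f_λ(x) and ∇f_λ(x) from the adjoint solver."""
    f = float(np.sum((x - wavelength / 4) ** 2))
    g = 2 * (x - wavelength / 4)
    return f, g

# design vector is [t_1, ..., t_9, s], with s the epigraph (dummy) variable
def epigraph_objective(xt, grad):
    if grad.size > 0:
        grad[:] = 0.0
        grad[-1] = 1.0   # minimize s only
    return xt[-1]

def epigraph_constraint(wavelength):
    # one constraint f_λ(x) - s <= 0 per wavelength
    def constraint(xt, grad):
        f, g = reflectance_and_gradient(xt[:-1], wavelength)
        if grad.size > 0:
            grad[:-1] = g
            grad[-1] = -1.0
        return f - xt[-1]
    return constraint

opt = nlopt.opt(nlopt.LD_CCSAQ, num_layers + 1)
opt.set_lower_bounds(np.append(np.full(num_layers, t_min), 0.0))
opt.set_upper_bounds(np.append(np.full(num_layers, t_max), 10.0))
opt.set_min_objective(epigraph_objective)
for wl in (0.95, 1.05):
    opt.add_inequality_constraint(epigraph_constraint(wl), 1e-8)
opt.set_maxeval(50)

x0 = np.append(np.random.uniform(t_min, t_max, num_layers), 1.0)
xt_opt = opt.optimize(x0)
print("worst-case objective:", xt_opt[-1])
print("layer thicknesses (μm):", xt_opt[:-1])
```

In the actual tutorial, the per-wavelength objective and gradient would come from the Meep adjoint solver at each design iteration rather than from an analytic placeholder; the epigraph and constraint handling would be unchanged.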