diff --git a/docs/fig/upwind_benchmark.png b/docs/src/img/upwind_benchmark.png
similarity index 100%
rename from docs/fig/upwind_benchmark.png
rename to docs/src/img/upwind_benchmark.png
diff --git a/docs/src/index.md b/docs/src/index.md
index 529cdd3..46be472 100644
--- a/docs/src/index.md
+++ b/docs/src/index.md
@@ -9,6 +9,16 @@ TrixiEnzyme is not a registered Julia package, and it can be installed by runnin
 ] add https://github.com/junyixu/TrixiEnzyme.jl.git
 ```
 
+## Notes about Enzyme
+
+There are currently some issues with `Enzyme.make_zero!` for this use case; see https://github.com/EnzymeAD/Enzyme.jl/issues/1661.
+One needs to be careful with a vanilla closure created outside of Enzyme.
+If one writes to caches and expects to differentiate through them, then the closure should be duplicated in order to handle the derivatives of those values.
+If you want to track derivatives through arrays that are enclosed, you have to duplicate the array to provide shadow memory for its differentiation.
+So if you only have the original memory, you cannot do the differentiation, since you have no place to store the extra values. In a simplified sense, a `Dual{Float64}` is 128 bits while a `Float64` is 64 bits, so if you are writing to a buffer of 5 `Float64` numbers, you need 5 * 2 * 64 bits of space to keep the dual numbers, which you do not have.
+So the best thing for a user to do is to separate out the values that need to be tracked, make them arguments to the function, and then simply use `Duplicated` on those.
+This is how [TrixiEnzyme.jacobian_enzyme_forward](https://junyixu.github.io/TrixiEnzyme.jl/dev/api.html#TrixiEnzyme.jacobian_enzyme_forward) works.
+
 ## Configuring Batch Size
 
 To utilize `Enzyme.BatchDuplicated`, one can create a tuple containing duals (or shadows).
@@ -45,5 +55,5 @@ julia> @time jacobian_enzyme_forward(TrixiEnzyme.upwind!, x);
 ```
 
 Benchmark for a 401x401 Jacobian of `TrixiEnzyme.upwind!` (Lower is better):
-![upwind benchmark](../fig/upwind_benchmark.png)
+![upwind benchmark](./img/upwind_benchmark.png)
 `Enyme(@batch)` means applying `Polyester.@batch` to `middlebatches`.
diff --git a/src/jacobian.jl b/src/jacobian.jl
index bdc431d..e3e557c 100644
--- a/src/jacobian.jl
+++ b/src/jacobian.jl
@@ -1,5 +1,10 @@
 Enzyme.API.runtimeActivity!(true)
 
+"""
+    jacobian_enzyme_forward_closure(semi::SemidiscretizationHyperbolic)
+
+Same as `jacobian_enzyme_forward` but implemented with a closure.
+"""
 function jacobian_enzyme_forward_closure(semi)
     t0 = zero(real(semi))
     u_ode = compute_coefficients(t0, semi)
@@ -32,7 +37,15 @@ function jacobian_enzyme_forward_closure(semi)
 
     return dys
 end
 
+"""
+    jacobian_enzyme_reverse_closure(semi::SemidiscretizationHyperbolic)
+
+Same as `jacobian_enzyme_reverse` but implemented with a closure.
+
+!!! warning
+    Enzyme.jl does not play well with Polyester.jl, and there are no plans to fix this soon.
+"""
 function jacobian_enzyme_reverse_closure(semi)
     t0 = zero(real(semi))
     u_ode = compute_coefficients(t0, semi)
diff --git a/test/SemiTest.jl b/test/SemiTest.jl
index 958039b..f1069d7 100644
--- a/test/SemiTest.jl
+++ b/test/SemiTest.jl
@@ -1,6 +1,7 @@
 module SemiTest
 using Test
 using TrixiEnzyme
+using TrixiEnzyme: LinearScalarAdvectionEquation1D, DGSEM, TreeMesh, SVector
 
 # %%
 # equation with a advection_velocity of `1`.
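
For reference, the pattern described in the "Notes about Enzyme" section above (pass the buffers you write to as arguments and wrap them in `Duplicated`, instead of capturing them in a closure) looks roughly like the following minimal sketch. It is not part of the diff; the kernel `square!` and its buffers are hypothetical placeholders, and only the `Enzyme.autodiff`/`Duplicated`/`Const` calls reflect the actual Enzyme.jl API.

```julia
using Enzyme

# A kernel that writes its result into a preallocated cache.
# The cache is an argument rather than a captured variable, so Enzyme
# can be handed shadow memory for it.
function square!(cache, x)
    cache .= x .^ 2
    return nothing
end

x      = [1.0, 2.0, 3.0]
dx     = [1.0, 0.0, 0.0]   # seed: differentiate with respect to x[1]
cache  = zeros(3)
dcache = zeros(3)          # shadow memory that receives the derivatives

# Forward mode: both the output buffer and the input get a shadow.
autodiff(Forward, Const(square!),
         Duplicated(cache, dcache),
         Duplicated(x, dx))

dcache  # ≈ [2.0, 0.0, 0.0], the first column of the Jacobian of x .^ 2
```

Repeating this with the remaining unit vectors as seeds (or passing a tuple of seeds via `BatchDuplicated`, as the batch-size section of `docs/src/index.md` describes) assembles the Jacobian column by column, which is roughly the strategy behind `TrixiEnzyme.jacobian_enzyme_forward`.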