ks implementation using ApproxFun and OrdinaryDiffEq #6
Conversation
We'll take a look at it. Thanks for the starter code. There's probably a bit more to do with propagating the linear `Diagonal` handling.
Yes, as @ChrisRackauckas pointed out, adding special handling of `Diagonal` linear operators to the exponential algorithms should help reduce the computation time. Edit: the discussion above only applies to the caching version of the exponential algorithms (which your test code uses). The Krylov versions of the algorithms can already utilize the sparse `Diagonal` operators and can make computation much faster, especially for large problems.
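For reference, a minimal sketch of how the two variants are selected on the user side. Everything below is a placeholder, not this PR's `gen_prob` setup: the diagonal operator, nonlinear part, and sizes are made up, `DiffEqArrayOperator` may live in DiffEqOperators or SciMLOperators depending on the package version, and the `krylov`/`m` keywords are how the exponential RK methods should switch to Krylov mode.

```julia
using OrdinaryDiffEq, DiffEqOperators, LinearAlgebra

# Hypothetical semilinear setup u' = A*u + f(u):
# the linear part is a Diagonal wrapped as a lazy operator so the
# integrator can exploit its structure.
A  = DiffEqArrayOperator(Diagonal(-collect(1.0:64.0)))   # placeholder diagonal operator
f!(du, u, p, t) = (du .= 0.0)                            # placeholder nonlinear part

u0   = randn(64)
prob = SplitODEProblem(A, f!, u0, (0.0, 1.0))

sol_cache  = solve(prob, ETDRK4(); dt = 0.25)                      # caching version
sol_krylov = solve(prob, ETDRK4(krylov = true, m = 20); dt = 0.25) # Krylov version
```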
By the way, implicit integrators currently do not support …
For this we want to test …
The newest version of ExponentialUtilities.jl (https://github.com/JuliaDiffEq/ExponentialUtilities.jl/releases/tag/v1.1.0) should fix the speed problem for the exponential integrator:

```julia
julia> prob, _ = gen_prob(22, 64; dense=false); # original version

julia> prob_dense, _ = gen_prob(22, 64; dense=true); # uses dense `Ax` instead of `Diagonal`

julia> @time sol = solve(prob, ETDRK4(); dt=0.25);
  0.020576 seconds (8.87 k allocations: 935.906 KiB)

julia> @time sol = solve(prob_dense, ETDRK4(); dt=0.25);
  0.293950 seconds (14.82 k allocations: 157.145 MiB, 7.18% gc time)
```
What timings are we comparing against? I see ~10 s in https://github.com/johnfgibson/julia-pde-benchmark/blob/master/1-Kuramoto-Sivashinksy-benchmark.ipynb, so those can't be the right numbers.

When we start really comparing a different integrator, though, we need to use work-precision diagrams to take the difference in error into account as well. ETDRK4 can probably use an order-of-magnitude larger dt than CNAB2 and still get the same error, for example, so comparisons at the same error will have pretty large differences in dt that should be accounted for. See https://github.com/JuliaDiffEq/DiffEqBenchmarks.jl for how that's done.

But anyway, I think the interesting test is CNAB2. What we want to make sure we can get and show is that a package-based solution, using the same algorithm, can match the hand-optimized version. We need a fix to our semilinear handling in our IMEX algorithms (currently our IMEX algorithms are only compatible with nonlinear+nonlinear), but @YingboMa and @MSeeker1340 are working on getting this solved. Since almost all of the time is spent in the linear algebra, I don't think this will be a difficult goal to reach, but it will be a milestone for our package ecosystem, since the hand-optimized version includes a number of features and optimizations, and being able to show that we hit all of these is something that will be unique. We're really close!
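For concreteness, a work-precision comparison along the lines of DiffEqBenchmarks would look roughly like the sketch below. This is only a sketch under assumptions, not a benchmark from this thread: `prob` is the split KS problem and `test_sol` a precomputed low-tolerance reference solution (both assumed to exist), and the tolerance and `dt` ranges are placeholders.

```julia
using OrdinaryDiffEq, DiffEqDevTools, Plots

# Tolerance sweep; for fixed-step methods the corresponding dt sweep is used instead.
abstols = 1.0 ./ 10.0 .^ (4:7)
reltols = 1.0 ./ 10.0 .^ (1:4)

setups = [Dict(:alg => ETDRK4(), :dts => 1.0 ./ 2.0 .^ (0:3)),
          Dict(:alg => CNAB2(),  :dts => 1.0 ./ 2.0 .^ (3:6))]

# Error is measured against the assumed reference solution `test_sol`.
wp = WorkPrecisionSet(prob, abstols, reltols, setups;
                      appxsol = test_sol, error_estimate = :l2, numruns = 5)
plot(wp)
```

Comparing at equal error this way accounts for ETDRK4 taking much larger steps than CNAB2 at the same accuracy.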
I changed the problem size:

```julia
julia> prob, _ = gen_prob(10^4/16*pi, 10^4; dense=false);

julia> @time sol = solve(prob, ETDRK4(); dt=1/16);
  6.583057 seconds (1.40 M allocations: 866.530 MiB, 1.65% gc time)
```

This is about one order of magnitude slower than the benchmark's Julia version (I assume it uses CNAB2?), but yes, as you pointed out, with ETDRK4 the time step can be larger, and using Krylov instead of dense caching can also save time.
Are you sure about that? At this size I assume that dense caching would be faster? The operators are constructed once and then it's an explicit loop with matrix multiplies. |
Ah, you're right. My point is correct in regular cases because computing …
I fixed the example. Part of the fix was an error in my code, which I resolved by defining the derivative matrix myself.
Hello there,

I saw that you were interested in an implementation of the Kuramoto-Sivashinsky model using `ApproxFun` and `OrdinaryDiffEq`. I played around with this for a while but sadly haven't gotten it quite right. Since I spent a bunch of hours on it, I wanted to share it anyway in case someone else wants to fix it.

Computation is currently slow, and solutions decay to 0. The first can likely be fixed by someone who knows more about these solvers than I do, and the latter seems to be an issue with the wavenumbers `k` (resulting in too much diffusion?). Note that `Fourier()` is a real-valued `sin` & `cos` decomposition.
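For the "too much diffusion" symptom, the usual suspect is the wavenumber scaling: in Fourier space the KS linear part is diagonal with entries k² − k⁴, where k = 2πn/Lx, so a missing 2π/Lx factor (or a mismatch with the mode ordering of the real `Fourier()` basis, where each wavenumber appears for both a sin and a cos mode) changes the balance between the destabilizing k² term and the hyperdiffusive k⁴ term. A minimal sketch in the complex-FFT convention follows; the domain length and resolution are assumed values, not the PR's code.

```julia
using LinearAlgebra, FFTW

Lx, Nx = 22.0, 64                 # assumed domain length and resolution
n = fftfreq(Nx, Nx)               # integer mode numbers 0, 1, …, -1 (FFT ordering)
k = 2π / Lx .* n                  # physical wavenumbers

# KS: uₜ = -u uₓ - uₓₓ - uₓₓₓₓ  ⇒  the linear symbol is k² - k⁴ in Fourier space
L = Diagonal(k.^2 .- k.^4)

# For a real sin & cos basis like Fourier(), each wavenumber magnitude occurs
# twice (one sin mode, one cos mode), so the diagonal must repeat each
# k² - k⁴ entry accordingly rather than use the FFT ordering above.
```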