The Euler methods are some of the simplest methods to solve ordinary differential equations numerically. They also serve as an introduction to a larger class of solvers known as the Runge-Kutta methods, which will be discussed in the near future!
As a physicist, I tend to understand things through methods that I have learned before. In this case, it makes sense for me to see the Euler methods as extensions of the Taylor series expansion. These expansions basically approximate functions based on their derivatives, like so:

$$
f(x) \simeq f(a) + \frac{df(a)}{dx}(x-a) + \frac{1}{2}\frac{d^2f(a)}{dx^2}(x-a)^2 + \cdots
$$

Here, the approximation of $f(x)$ near a point $a$ improves as more derivative terms are kept. So, what does this mean? Well, as mentioned, we can think of this similarly to the kinematic equation:
$$
x = x_0 + vt + \frac{1}{2}at^2
$$
where $x$ is the current position, $x_0$ is the initial position, $v$ is the velocity, $a$ is the acceleration, and $t$ is the time. Each term maps onto a term of the Taylor expansion: velocity is the first derivative of position, and acceleration is the second.
Now, how does this relate to the Euler methods?
Well, with these methods, we assume that we are looking for a position in some space, usually denoted as $y(t)$; the catch is that all we know about this function is its derivative, $\frac{dy(t)}{dt} = f(y(t), t)$.
So, we can iteratively solve for position by first solving for velocity. By following the kinematic equation (or Taylor series expansion), we find that

$$
y(t + \Delta t) \simeq y(t) + f(y(t), t)\,\Delta t
$$

for any sufficiently small timestep $\Delta t$.
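In code, this update rule is a one-liner. Here is a minimal Python sketch (the function name is illustrative, not taken from any particular implementation):

```python
def forward_euler_step(f, y, t, dt):
    """Advance y(t) by one timestep: y(t + dt) ~ y(t) + f(y, t)*dt."""
    return y + f(y, t) * dt

# Example: dy/dt = -3y with y = 1 at t = 0 and dt = 0.1
y_next = forward_euler_step(lambda y, t: -3 * y, 1.0, 0.0, 0.1)
print(y_next)  # 0.7, versus the exact solution e^(-0.3) ~ 0.741
```

The whole method is just this step applied over and over, feeding each output back in as the next input.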
Now, solving this set of equations in this way is known as the forward Euler method. In fact, there is another method known as the backward Euler method, which we will get to soon enough. For now, it is important to note that the error of these methods depends on the timestep chosen.
For example, here we see dramatically different results for different timesteps when solving the ODE $\frac{dy(t)}{dt} = -3y(t)$, with $y(0) = 1$:
As we mentioned, the forward Euler method approximates the solution to an Ordinary Differential Equation (ODE) by using only the first derivative. This is (rather expectedly) a poor approximation. In fact, the approximation is so poor that the error associated with running this algorithm can add up and result in incredibly incorrect results. As you might imagine, the only solutions are decreasing the timestep and hoping for the best, or using a similar method with a different stability region, like the backward Euler method.
Let's assume we are solving a simple ODE:

$$
\frac{dy(t)}{dt} = -3y(t), \quad y(0) = 1,
$$

which has the analytical solution $y(t) = e^{-3t}$.
Like above, the blue line is the analytical solution, the green is with a timestep of 0.5, and the red is with a timestep of 1. Here, it's interesting that we see two different instability patterns: the green curve is initially unstable but converges onto the correct solution, while the red one is wrong from the get-go and only gets more wrong as time goes on.
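Both behaviours are easy to reproduce numerically. Assuming the ODE being integrated is $\frac{dy}{dt} = -3y$ with $y(0) = 1$, each forward Euler step multiplies $y$ by $(1 - 3\Delta t)$, so a short Python sketch shows the two regimes directly:

```python
def forward_euler_decay(dt, steps):
    """Forward Euler for dy/dt = -3y starting from y(0) = 1.
    Each step multiplies y by (1 - 3*dt)."""
    y = 1.0
    for _ in range(steps):
        y = y + (-3 * y) * dt
    return y

print(forward_euler_decay(0.5, 10))  # ~0.00098: oscillates in sign, but shrinks toward 0
print(forward_euler_decay(1.0, 10))  # 1024.0: each step doubles the magnitude
```

With $\Delta t = 0.5$ the per-step factor is $-0.5$, so the numerical solution flips sign each step but still decays; with $\Delta t = 1$ the factor is $-2$, so the magnitude grows without bound.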
In truth, the stability region of the forward Euler method for the case where $\frac{dy(t)}{dt} = -3y(t)$ is $0 < \Delta t < \frac{2}{3}$. This follows from the update rule: each step multiplies $y$ by $(1 - 3\Delta t)$, so the numerical solution decays only when $|1 - 3\Delta t| < 1$.
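For contrast, the backward Euler method evaluates the derivative at the *new* point, $y(t + \Delta t) \simeq y(t) + f(y(t + \Delta t), t + \Delta t)\,\Delta t$, which in general requires solving an implicit equation every step. For the linear ODE $\frac{dy}{dt} = -3y$ it can be solved by hand, and a small Python sketch (illustrative, not any particular library's implementation) shows that the result decays for any positive timestep:

```python
def backward_euler_decay(dt, steps):
    """Backward Euler for dy/dt = -3y starting from y(0) = 1.
    The implicit update y_new = y - 3*y_new*dt rearranges to
    y_new = y / (1 + 3*dt), which decays for any dt > 0."""
    y = 1.0
    for _ in range(steps):
        y = y / (1 + 3 * dt)
    return y

print(backward_euler_decay(1.0, 10))  # decays toward 0 even at the "unstable" dt = 1
```

This unconditional stability is the main selling point of implicit methods, paid for by the cost of solving the implicit equation when $f$ is not so conveniently linear.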
Now, here is where we might want to relate the method to another algorithm that is sometimes used for a similar use-case: Verlet integration. Verlet integration has a distinct advantage over the forward Euler method in both error and stability with more coarse-grained timesteps; however, the Euler methods are powerful in that they may be used for cases other than simple kinematics. That said, due to its instability and its error at larger timesteps, the forward Euler method is rarely used in practice. Variations of it certainly are, though (for example, Crank-Nicolson and Runge-Kutta), so the time spent reading this chapter is not a total waste!
Like in the case of Verlet integration, the easiest way to see if this method works is to test it against a simple test case. Here, the most obvious test case would be dropping a ball from 5 meters, which is my favorite example, but it proved to be slightly less enlightening than I had thought.
So, this time, let's remove ourselves from any physics and instead solve the following ODE:

$$
\frac{dy(t)}{dt} = -3y(t), \quad y(0) = 1,
$$

checking the result against the analytical solution, $y(t) = e^{-3t}$.
{% method %}
{% sample lang="jl" %}
import, lang:"julia"
{% sample lang="c" %}
import, lang:"c_cpp"
{% sample lang="cpp" %}
import, lang:"c_cpp"
{% sample lang="rs" %}
import, lang:"rust"
{% sample lang="elm" %}
import:44-54, lang:"elm"
import:193-210, lang:"elm"

Full code for the visualization follows:

import, lang:"elm"
{% sample lang="py" %}
import, lang:"python"
{% sample lang="hs" %}
import, lang:"haskell"
{% sample lang="m" %}
import, lang:"matlab"
{% sample lang="swift" %}
import, lang:"swift"
{% endmethod %}
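For readers who want something runnable right away, here is a condensed, self-contained Python sketch of the same test (the names `solve_euler` and `check_result` are illustrative; it assumes the ODE $\frac{dy}{dt} = -3y$ with $y(0) = 1$, checked against $e^{-3t}$):

```python
import math

def solve_euler(f, y0, dt, n_steps):
    """Integrate dy/dt = f(y, t) with the forward Euler method,
    returning the list of all intermediate values."""
    ys = [y0]
    for i in range(n_steps):
        ys.append(ys[-1] + f(ys[-1], i * dt) * dt)
    return ys

def check_result(ys, dt, threshold=0.01):
    """Compare every point against the analytical solution e^(-3t)."""
    return all(abs(y - math.exp(-3 * i * dt)) < threshold
               for i, y in enumerate(ys))

ys = solve_euler(lambda y, t: -3 * y, 1.0, 0.001, 1000)
print(check_result(ys, 0.001))  # True: the timestep is small enough
```

Rerunning the same check with a timestep of 0.5 or 1 fails, which is exactly the instability discussed above.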
<script> MathJax.Hub.Queue(["Typeset",MathJax.Hub]); </script>