Optimum Time step for Verlet's method to solve Damped Simple Harmonic Motion ODE


I am using different numerical methods to solve the Damped Simple Harmonic Oscillator ODE. I am comparing the performance of each integration method by calculating the Mean Absolute Error of the predicted energy with the analytical solution:
$$ \mathrm{MAE} = \frac{1}{n} \sum_{i=1}^{n}\left |y_{\mathrm{analytical},i}-y_{\mathrm{numerical},i}\right| $$

Then, for different time steps, I calculate the resulting MAE and plot the results on a log-log plot, as shown below.

[Plot: log(MAE) vs. log(time step)]

The relation between MAE and time step matches my expectations (the Verlet method scales quadratically and the Euler-Cromer method scales linearly), but I notice that the Verlet method has a turning point at about 10^(-4) s. This seems too large; I was expecting the turning point to arise at time steps closer to 10^(-8) s, since I am using numpy's float64, which has about 15 to 17 significant decimal digits of precision.

I went on to plot the maximum and minimum errors obtained for each time step (excluding iteration 0, since those are the initial conditions, which are identical for the numerical and analytical methods), and these are the results: [Plot: log(min error) vs. log(time step)] and [Plot: log(max error) vs. log(time step)]

Again, when plotting the maximum error I obtain an optimum time step similar to that of the previous plot; but when plotting the minimum error (which always occurred in the first few iterations after the initial conditions), I find that the errors flatten out at 10^(-4) s and approach about 10^(-15) J in the energy.

Because of this flattening of the minimum errors, it makes sense that going below 10^(-4) s does not increase the precision of the Verlet method, but I can't explain why the maximum errors grow after this point.

An explanation that comes to mind is the round-off error caused by float64, which should appear when values reach about 10^(-16). I have manually checked the position, velocity and acceleration produced by the Verlet method, but their smallest values are of order 10^(-9), far from 10^(-16).
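To see why residuals of order 10^(-15) J can appear even when the quantities themselves are of order 1, here is a minimal check (the values below are illustrative only, not from my simulation):

```python
import numpy as np

# Two energies of order 1 J that agree to ~12 significant digits:
E_analytical = 1.000000000001
E_numerical = 1.000000000002
residual = abs(E_analytical - E_numerical)
print(residual)  # ~1e-12

# The smallest relative step representable near 1.0:
eps = np.finfo(np.float64).eps
print(eps)  # ~2.22e-16

# Any residual below |E| * eps (~1e-16 here) is round-off noise in the
# subtraction itself, even though E is nowhere near 1e-16.
```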

(1) Is it possible that I am introducing a round-off error when calculating the residual between the analytical solution and the Verlet method?

(2) Are there other, more appropriate ways of calculating the error? (I thought MAE was a good fit because the Verlet method oscillates about the true system values.)

(3) Are there tweaks that could reveal possible flaws in my analysis? I have looked at my code extensively and cannot find any bugs; furthermore, the Verlet method I coded does have an error that scales quadratically with the time step, which makes me think the code itself is fine. (A possible next step would be to use float128 throughout all calculations and see whether the plots above change.)
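As a minimal sketch of that last idea (the names x0, v0 and dt are illustrative, not from my solver):

```python
import numpy as np

# On x86 Linux/macOS, numpy's float128 (np.longdouble) is typically
# 80-bit extended precision (~18-19 significant digits), not true IEEE
# quad; on some platforms it is no wider than float64.
print(np.finfo(np.longdouble).eps)  # ~1.08e-19 on x86

# Promote all state before integrating so no float64 creeps back in:
x0 = np.longdouble(1.0)   # illustrative initial position
v0 = np.longdouble(0.0)   # illustrative initial velocity
dt = np.longdouble(1e-4)  # illustrative time step
print(np.result_type(x0 + v0 * dt))  # longdouble is preserved
```

One caveat: a single float64 constant mixed into an expression can silently demote intermediate results, so the promotion really does have to cover every quantity in the update loop.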

Thanks in advance for any help with the above questions.

Here is how I am calculating the MAE:

def calculate_mean_absolute_error(numerical_method_energy,
                                  analytical_method_energy):
    # Element-wise absolute residual between the two energy series
    residual = np.abs(analytical_method_energy - numerical_method_energy)

    # np.mean replaces the original sum(residual) / len(residual)
    return np.mean(residual)

1 Answer

Answered by Lutz Lehmann:

This is exactly as expected when computing and comparing the errors of a first-order and a second-order method.

The computed error has two contributions: the theoretical step error of the method and the floating-point error of the actual computation, mainly from adding the single-step updates to the numerical solution. The theoretical error per unit step is ~h^p, while the floating-point error per unit step is proportional to the number of steps, 1/h, times the error of one function evaluation, usually a small multiple of the machine constant mu.
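That accumulation error from adding many single-step updates can be reduced with compensated (Kahan) summation; a minimal standalone sketch, not tied to the Verlet code above:

```python
def kahan_sum(values):
    """Compensated summation: recover the low-order bits lost in each
    addition and feed them back into the next one."""
    total = 0.0
    c = 0.0  # running compensation for lost low-order bits
    for x in values:
        y = x - c
        t = total + y
        c = (t - total) - y  # (t - total) is the part of y that survived rounding
        total = t
    return total

# Summing one million copies of 0.1: the compensated sum tracks the
# exact sum of the rounded inputs far more closely than a naive loop.
vals = [0.1] * 10 ** 6
print(abs(sum(vals) - 1e5))        # naive accumulation drift
print(abs(kahan_sum(vals) - 1e5))  # much smaller residual
```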

The hook or kink of the V-shape is where these errors balance, thus at h ~ mu^(1/(p+1)). With the standard double-precision type, this gives, for a second-order method, a step size around 10^(-5) with an error around 10^(-10); for a first-order method, a step size around 10^(-8) with an error around 10^(-8).
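This heuristic is easy to tabulate (a sketch in my notation, taking mu as numpy's float64 machine epsilon):

```python
import numpy as np

mu = np.finfo(np.float64).eps  # ~2.22e-16 for double precision

for p in (1, 2):  # method order: 1 ~ Euler-Cromer, 2 ~ Verlet
    h_opt = mu ** (1.0 / (p + 1))  # step size where step error ~ round-off
    err_opt = h_opt ** p           # error level at that balance point
    print(f"order {p}: h_opt ~ {h_opt:.1e}, error ~ {err_opt:.1e}")
```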

Not included in the above are the scale or magnitude of the function values and derivatives of the ODE right-hand side. If these are substantially different from 1, the optimal step size h will be shifted. In general the Lipschitz constant L of the ODE is sufficient to account for this shift: the product Lh should take the optimal values indicated above.
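For a damped oscillator written as x'' = -omega^2 x - 2 gamma x' (my assumed form and parameter values, not taken from the question), the Lipschitz constant of the first-order system can be estimated from its Jacobian and the order-2 optimum rescaled accordingly:

```python
import numpy as np

omega, gamma = 2 * np.pi, 0.1  # assumed example parameters

# First-order system (x, v) -> (v, -omega^2 x - 2 gamma v);
# its Jacobian is constant, so its norm bounds the Lipschitz constant.
J = np.array([[0.0, 1.0], [-omega ** 2, -2 * gamma]])
L = np.linalg.norm(J, 2)  # spectral norm, ~omega^2 here

mu = np.finfo(np.float64).eps
h_opt = mu ** (1.0 / 3.0) / L  # shift the order-2 optimum so L*h stays optimal
print(L, h_opt)
```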

More details in https://math.stackexchange.com/questions/3609842/numerical-analysis-heuns-method