Optimizing 3 variables in a differential equation based on data points from the solution of the differential equation


I have following data points (I call them actual data points):

y_data = np.array([0, 32.1463583, 33.1915926, 37.9100309, 39.2501778, 40.8225707, 48])
t_data = np.array([0, 26.75, 72.25, 163.4166667, 209.25, 525, 1250])

and I have a differential equation which includes y and t:

dy/dt=(1-y)/((a+b*t)*exp(-E/3060.8))
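For a fixed set of parameters this equation can be integrated numerically; a minimal sketch with scipy's `odeint`, using my initial guesses (not fitted values):

```python
import numpy as np
from scipy.integrate import odeint

a, b, E = 1.0, 20.0, 10000.0  # initial guesses, not fitted values

def model(y, t):
    # dy/dt = (1 - y) / ((a + b*t) * exp(-E/3060.8))
    return (1 - y) / ((a + b * t) * np.exp(-E / 3060.8))

t = np.linspace(0, 1250, 50)
y = odeint(model, 0.0, t)[:, 0]  # solution sampled at the 50 time points
```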

My goal is to optimize a, b, and E such that the solution of the above differential equation best fits my actual data (y_data). To do this, I had the following steps in mind:

1- set an initial guess: a=1, b=20, E=10000
2- create a for loop (for 1000 iterations)
3- solve the differential equation using odeint with the current parameters
4- find the difference between the calculated y values from the ODE solution at the times in t_data and the actual y values
5- print the error
6- update a, b, and E and repeat from step 3

and here is my code :

import numpy as np
from scipy.integrate import odeint

# Given data
y_data = np.array([0, 32.1463583, 33.1915926, 37.9100309, 39.2501778, 40.8225707, 48])
t_data = np.array([0, 26.75, 72.25, 163.4166667, 209.25, 525, 1250])

# Initial guess for parameters
a = 1
b = 20
E = 10000

# Number of iterations
iterations = 1000

# Tolerance for convergence
tolerance = 1e-6

# Step size for numerical gradient approximation
h = 1e-6

# Error as a function of the parameters: mean absolute deviation between
# the ODE solution sampled at t_data and the measured y_data
def calculate_error(a, b, E):
    def model(y, t):
        return (1 - y) / ((a + b * t) * np.exp(-E / 3060.8))

    y_solution = odeint(model, y_data[0], t_data)
    error = np.mean(np.abs(y_solution[:, 0] - y_data))
    return error

# Perform optimization
for i in range(iterations):
    # Calculate gradients using a forward-difference approximation
    base_error = calculate_error(a, b, E)
    print(base_error)

    grad_a = (calculate_error(a + h, b, E) - base_error) / h
    grad_b = (calculate_error(a, b + h, E) - base_error) / h
    grad_E = (calculate_error(a, b, E + h) - base_error) / h

    # Update parameters
    a -= 0.1 * grad_a
    b -= 0.1 * grad_b
    E -= 0.1 * grad_E

    # Check for convergence
    if abs(grad_a) < tolerance and abs(grad_b) < tolerance and abs(grad_E) < tolerance:
        break

# Output the optimized parameters
print("Optimized Parameters (a, b, E):", a, b, E)

The problem is that this code doesn't work and keeps giving me a constant error of 32.223947830786685. This suggests to me that the updates to a, b, and E aren't working properly. For reference, I used a forward difference of the form

(calculate_error(a + h, b, E) - calculate_error(a, b, E)) / h

to manually approximate the gradients with respect to a, b, and E.
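A side note on that approximation: a forward difference with a fixed h = 1e-6 can be swamped by solver noise, especially for E, which is of order 1e4. A central difference with a step scaled to each parameter's magnitude is usually more robust; a generic sketch (the helper name `central_diff_grad` is just for illustration):

```python
import numpy as np

def central_diff_grad(f, params, rel_h=1e-6):
    """Central-difference gradient with a step scaled to each parameter."""
    params = np.asarray(params, dtype=float)
    grad = np.zeros_like(params)
    for i in range(len(params)):
        h = rel_h * max(abs(params[i]), 1.0)  # scale step to parameter size
        up, down = params.copy(), params.copy()
        up[i] += h
        down[i] -= h
        grad[i] = (f(up) - f(down)) / (2 * h)
    return grad
```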

Any suggestion on how to fix my issue? Or any alternative approach to find the best a, b, and E parameters?
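One alternative I'm considering instead of hand-rolled gradient descent: let `scipy.optimize.least_squares` drive the same odeint solve. A sketch (the names `solve_ode` and `residuals` are mine, not from any library):

```python
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import least_squares

y_data = np.array([0, 32.1463583, 33.1915926, 37.9100309, 39.2501778, 40.8225707, 48])
t_data = np.array([0, 26.75, 72.25, 163.4166667, 209.25, 525, 1250])

def solve_ode(params):
    # Integrate dy/dt = (1 - y) / ((a + b*t) * exp(-E/3060.8)) at the data times
    a, b, E = params
    def model(y, t):
        return (1 - y) / ((a + b * t) * np.exp(-E / 3060.8))
    return odeint(model, y_data[0], t_data)[:, 0]

def residuals(params):
    # least_squares minimizes the sum of squared residuals
    return solve_ode(params) - y_data

result = least_squares(residuals, x0=[1.0, 20.0, 10000.0])
a_opt, b_opt, E_opt = result.x
print("a, b, E:", a_opt, b_opt, E_opt, "cost:", result.cost)
```

One caveat, whichever optimizer is used: since dy/dt vanishes as y approaches 1, the solution of this ODE can never exceed 1 for any choice of a, b, E, while the y_data values reach 48; that alone could make the error nearly insensitive to the parameters.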
