Using Deep Neural Networks to solve differential equations

Kristine B. Hein


Nov 26, 2019


A short introduction to differential equations

What the neural network must find

One dimensional Poisson equation

$$ -g''(x) = f(x), \qquad x \in (0,1) $$ for a given function \( f(x) \), along with some chosen boundary conditions.

In this case, the boundary conditions are \( g(0) = g(1) = 0 \).

The trial solution

A possible trial solution: $$ g_{\mathrm{trial}}(x) = x \cdot (1-x) \cdot N(x,P) $$

where \( N(x,P) \) is the output of the network at \( x \), with the weights and biases of every layer collected in \( P \).
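Note that the factor \( x(1-x) \) guarantees \( g_{\mathrm{trial}}(0) = g_{\mathrm{trial}}(1) = 0 \), so the boundary conditions are satisfied for any output \( N(x,P) \) of the network.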

What we want

We want to find \( g_{\mathrm{trial}} \) such that $$ -g_{\mathrm{trial}}''(x) = f(x) $$ holds as well as possible!
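In practice, "as well as possible" means minimizing the squared residual of the equation over the grid points \( x_i \). The TensorFlow code below uses the cost function $$ C(P) = \sum_i \big( -g_{\mathrm{trial}}''(x_i) - f(x_i) \big)^2 $$ and adjusts the weights and biases in \( P \) to make it as small as possible.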

Using TensorFlow to solve our Poisson equation

Run TensorFlow 1.x code in TensorFlow 2

To run old TensorFlow 1.x code under TensorFlow 2:

import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

Construction phase - set everything up

import numpy as np
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

tf.set_random_seed(4155)

# Grid of Nx equally spaced points on [0, 1]
Nx = 10
x = np.linspace(0, 1, Nx)

# Reshape to a column vector and convert to a TensorFlow tensor
x_tf = tf.convert_to_tensor(x.reshape(-1,1), dtype=tf.float64)

num_iter = 10000

# Two hidden layers with 20 and 10 neurons, respectively
num_hidden_neurons = [20, 10]
num_hidden_layers = np.size(num_hidden_neurons)

Construction phase - construct the network

with tf.name_scope('dnn'):

    # Input layer
    previous_layer = x_tf

    # Hidden layers
    for l in range(num_hidden_layers):
        current_layer = tf.layers.dense(previous_layer,
                                        num_hidden_neurons[l],
                                        name='hidden%d' % (l + 1),
                                        activation=tf.nn.sigmoid)
        previous_layer = current_layer

    # Output layer: a single neuron, no activation
    dnn_output = tf.layers.dense(previous_layer, 1, name='output')
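For reference, each call to tf.layers.dense computes an affine transformation followed by the activation function. A minimal NumPy sketch of one hidden layer (the names W and b are illustrative; TensorFlow creates and manages these variables internally):

import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def dense_sigmoid(x, W, b):
    # x: (batch, n_in), W: (n_in, n_out), b: (n_out,)
    return sigmoid(x @ W + b)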

Construction phase - define the cost function

with tf.name_scope('cost'):
    # Trial solution and its first and second derivatives w.r.t. x.
    # Note that tf.gradients returns a list of tensors, hence the [0] below.
    g_t = x_tf*(1 - x_tf)*dnn_output
    d_g_t = tf.gradients(g_t, x_tf)
    d2_g_t = tf.gradients(d_g_t, x_tf)

    # Right-hand side f(x) = (3x + x^2) exp(x)
    right_side = (3*x_tf + x_tf**2)*tf.exp(x_tf)

    # Squared residual of -g''(x) = f(x), summed over the grid points
    err = tf.square(-d2_g_t[0] - right_side)
    cost = tf.reduce_sum(err, name='cost')

Construction phase - specify the optimization method

learning_rate = 0.001
with tf.name_scope('train'):
    optimizer = tf.train.GradientDescentOptimizer(learning_rate)
    training_op = optimizer.minimize(cost)
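Plain gradient descent is simple but can converge slowly. A possible variation (a sketch using the same TensorFlow 1.x API) is to swap in the Adam optimizer:

optimizer = tf.train.AdamOptimizer(learning_rate)
training_op = optimizer.minimize(cost)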

Execution phase - train the network and evaluate the final model

g_dnn_tf = None

init = tf.global_variables_initializer()

with tf.Session() as sess:
    init.run()

    for i in range(num_iter):
        sess.run(training_op)

    g_dnn_tf = g_t.eval()
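To check that the training actually converges, one can evaluate the cost inside the loop. A possible variation (a sketch that prints the cost every 1000 iterations):

for i in range(num_iter):
    _, cost_value = sess.run([training_op, cost])
    if i % 1000 == 0:
        print('Iteration %5d, cost %g' % (i, cost_value))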

Solving the equation using finite differences

An approximation of the second derivative: $$ g''(x_i) \approx \frac{g(x_i + \Delta x) - 2g(x_i) + g(x_i - \Delta x)}{\Delta x^2} $$ for \( i = 1, \dots, N_x - 2 \), with \( g(x_0) = g(x_{N_x - 1}) = 0 \). This follows from adding the Taylor expansions of \( g(x_i + \Delta x) \) and \( g(x_i - \Delta x) \) around \( x_i \); the first-derivative terms cancel and the leading error is of order \( \Delta x^2 \).

Inserting this into the Poisson equation yields the following linear system: $$ \begin{aligned} \begin{pmatrix} 2 & -1 & 0 & \dots & 0 \\ -1 & 2 & -1 & \dots & 0 \\ \vdots & & \ddots & & \vdots \\ 0 & \dots & -1 & 2 & -1 \\ 0 & \dots & 0 & -1 & 2 \end{pmatrix} \begin{pmatrix} g(x_1) \\ g(x_2) \\ \vdots \\ g(x_{N_x - 3}) \\ g(x_{N_x - 2}) \end{pmatrix} &= \Delta x^2 \begin{pmatrix} f(x_1) \\ f(x_2) \\ \vdots \\ f(x_{N_x - 3}) \\ f(x_{N_x - 2}) \end{pmatrix} \\ A\vec{g} &= \vec{f} \end{aligned} $$

Code

dx = 1/(Nx - 1)

# Right-hand side f(x) = (3x + x^2) exp(x), as above
def f(x):
    return (3*x + x**2)*np.exp(x)

# Set up the tridiagonal matrix A
A = np.zeros((Nx-2, Nx-2))

A[0,0] = 2
A[0,1] = -1

for i in range(1, Nx-3):
    A[i,i-1] = -1
    A[i,i] = 2
    A[i,i+1] = -1

A[Nx - 3, Nx - 4] = -1
A[Nx - 3, Nx - 3] = 2

# Set up the vector f at the interior grid points
f_vec = dx**2 * f(x[1:-1])

# Solve the equation
g_res = np.linalg.solve(A,f_vec)
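Note that g_res contains only the interior values \( g(x_1), \dots, g(x_{N_x - 2}) \); the boundary values \( g(x_0) = g(x_{N_x - 1}) = 0 \) must be appended when comparing with the other solutions.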

Results: Neural network versus finite differences

The exact solution is known analytically in this case: $$ g(x) = x(1 - x)\exp(x) $$

The maximum absolute difference between the analytical solution and each of the numerical solutions gives a measure of how accurate each method is.
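A sketch of how this comparison can be computed, assuming x, g_dnn_tf and g_res from the code above:

g_analytic = x*(1 - x)*np.exp(x)

# Append the boundary values g(0) = g(1) = 0 to the finite-difference solution
g_fd = np.concatenate([[0], g_res, [0]])

print('Max abs error, DNN:', np.max(np.abs(g_analytic - g_dnn_tf.ravel())))
print('Max abs error, FD: ', np.max(np.abs(g_analytic - g_fd)))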
