
Vol. 11 No. 09 (2022), Article ID: 55738, 11 pages
DOI: 10.12677/AAM.2022.119670

Deep Learning Method for Solving Stochastic Ordinary Differential Equations

Baolong Sun, Mingzhi Yang, Qiang Li, Yangtian Yan, Yunpeng Wei

School of Mathematics and Statistics, Henan University of Science and Technology, Luoyang, Henan

Received: Aug. 9th, 2022; accepted: Sep. 2nd, 2022; published: Sep. 14th, 2022

ABSTRACT

Stochastic differential equations arise when non-deterministic stochastic terms are added to deterministic differential equations, and they play a pivotal role in describing objective phenomena. It is therefore important to study the form and properties of their solutions. In general, however, analytical solutions of stochastic differential equations cannot be found, so obtaining numerical solutions becomes particularly important. In this paper, based on the idea of Euler’s method, a sample set for deep-learning training is constructed, and a loss function is built from the slope predicted in each training round together with the initial condition; the resulting deep learning method is applied to solve ordinary differential equations and achieves better accuracy than Euler’s method. On this basis, the average slope over each small-interval sample set is used in place of the slope at the initial point of the small interval in Euler’s method, and, combined with the idea of the Milstein method, an iterative scheme for the deep learning method is constructed and successfully applied to a specific stochastic differential equation (the Black-Scholes equation). Numerical results show that the constructed deep learning method is more accurate than the conventional Euler and Milstein methods.

Keywords: Stochastic Differential Equations, Deep Learning Method, Milstein Method, Euler Method

1. Introduction

2. Preliminaries

2.1. Brownian Motion

1) The sample path ${B}_{t}$ is continuous for almost every $\omega \in \Omega$, and ${B}_{0}=0$;

2) For all real numbers $s,t$ with $0\le s\le t$, the increment ${B}_{t}-{B}_{s}$ is independent of the past states ${B}_{u}$, $0\le u\le s$, with ${B}_{0}=0$;

3) For $0\le s\le t$, the increment ${B}_{t}-{B}_{s}$ follows the normal distribution $N\left(0,t-s\right)$, i.e., it has mean 0 and variance $t-s$.
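The three properties above suggest a direct way to simulate a Brownian path on a uniform grid: draw independent increments from $N(0,h)$ and accumulate them, starting from ${B}_{0}=0$. A minimal sketch in Python (grid size and seed are illustrative assumptions, not taken from the paper):

```python
import math
import random

def brownian_path(T=1.0, N=100, seed=0):
    """Simulate standard Brownian motion on [0, T] with N steps.

    Uses the defining properties: B_0 = 0 and independent
    increments B_{t+h} - B_t ~ N(0, h).
    """
    rng = random.Random(seed)
    h = T / N
    B = [0.0]  # property 1): B_0 = 0
    for _ in range(N):
        # property 3): each increment is N(0, h)
        B.append(B[-1] + rng.gauss(0.0, math.sqrt(h)))
    return B

path = brownian_path()
```

Property 2), independence of increments, is guaranteed here because each call to `gauss` is an independent draw.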

2.2. Itô Stochastic Differential Equations and the Definition of a Solution

$\text{d}x\left(t\right)=f\left(t,x\left(t\right)\right)\text{d}t,t\in \left[{t}_{0},T\right]$ (2.1)

$\text{d}x\left(t\right)=f\left(t,x\left(t\right)\right)\text{d}t+g\left(t,x\left(t\right)\right)\text{d}W\left(t\right),t\in J$ (2.2)

$\text{d}x\left(t\right)=f\left(x\left(t\right)\right)\text{d}t+g\left(x\left(t\right)\right)\text{d}W\left(t\right)$ (2.3)

1) $x\left(t\right)$ is a continuous adapted process;

2) $f\left(t,x\left(t\right)\right)\in {D}^{1}\left(J,{R}^{d}\right)$, $g\left(t,x\left(t\right)\right)\in {D}^{2}\left(J,{R}^{d×m}\right)$, $J=\left[{t}_{0},T\right]$;

3) $x\left(t\right)$ satisfies the stochastic differential equation (2.2).

2.3. Deep Learning Method

2.3.1. Neuron Model

$f\left(x\right)=\frac{{\text{e}}^{x}-{\text{e}}^{-x}}{{\text{e}}^{x}+{\text{e}}^{-x}}$ (2.4)
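Equation (2.4) is the hyperbolic tangent activation. A single neuron with this activation computes a weighted sum of its inputs plus a bias and passes it through the activation; a minimal sketch (the weights and inputs are illustrative):

```python
import math

def tanh_act(x):
    # Activation from Eq. (2.4): (e^x - e^-x) / (e^x + e^-x)
    return (math.exp(x) - math.exp(-x)) / (math.exp(x) + math.exp(-x))

def neuron(inputs, weights, bias):
    """Single neuron: weighted sum of inputs plus bias, then tanh."""
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return tanh_act(s)
```

The output is always in $(-1, 1)$, which matches `math.tanh` up to floating-point error.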

Figure 1. Single neuron model

2.3.2. Forward Propagation

Figure 2. Single hidden layer model

2.3.3. Backpropagation for Parameter Tuning

2.4. Solving Explicit Ordinary Differential Equations by Deep Learning

torch.autograd.grad can differentiate the array ${\left\{\left({t}_{i},{x}_{i}^{k}\right)\right\}}_{i=1}^{M}$ produced in the $k$-th training round, and the resulting derivatives are compared with the target derivative values ${\left\{f\left({t}_{i}\right)\right\}}_{i=1}^{M}$.

$\mathrm{loss}^{k}=\frac{1}{M}{\sum }_{i=1}^{M}{\left[\frac{\text{d}{x}_{i}^{k}}{\text{d}{t}_{i}}-f\left({t}_{i}\right)\right]}^{2}+{\left({x}_{1}^{k}-{x}_{1}\right)}^{2}$ (2.5)

$n=\sqrt{0.43{n}_{1}{n}_{0}+0.12{n}_{0}^{2}+2.54{n}_{1}+0.77{n}_{0}+0.35}+0.51$ (2.6)
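Formula (2.6) gives an empirical hidden-layer width $n$ from the input size ${n}_{0}$ and output size ${n}_{1}$. As a quick sketch (rounding up to an integer width is an assumption on our part, since the formula returns a real number):

```python
import math

def hidden_neurons(n0, n1):
    """Empirical hidden-layer size from Eq. (2.6).

    n0 = number of input neurons, n1 = number of output neurons.
    The real-valued estimate is rounded up to an integer width.
    """
    n = math.sqrt(0.43 * n1 * n0 + 0.12 * n0**2
                  + 2.54 * n1 + 0.77 * n0 + 0.35) + 0.51
    return math.ceil(n)
```

For a scalar ODE with one input ($t$) and one output ($x$), this suggests a hidden layer of about 3 neurons.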

$\mathrm{loss}^{k}=\frac{1}{M}{\sum }_{i=1}^{M}{\left[\frac{\text{d}{x}_{i}^{k}}{\text{d}{t}_{i}}-f\left({t}_{i},{x}_{i}^{k}\right)\right]}^{2}+{\left({x}_{1}^{k}-{x}_{1}\right)}^{2}$ (2.7)
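The loss (2.7) penalizes the mismatch between the derivative of the network output and the right-hand side $f$, plus a penalty enforcing the initial condition. The paper computes the derivative with torch.autograd.grad; the same quantity can be sketched without torch using central finite differences (an illustrative substitution, not the paper's implementation):

```python
def ode_loss(t, x, f, x1):
    """Sketch of the loss in Eq. (2.7), with central finite differences
    standing in for torch.autograd.grad.

    t  : equally spaced sample points t_i
    x  : network outputs x_i^k at those points
    f  : right-hand side f(t, x)
    x1 : initial condition x(t_1)
    """
    M = len(t)
    h = t[1] - t[0]
    total = 0.0
    for i in range(1, M - 1):
        dxdt = (x[i + 1] - x[i - 1]) / (2 * h)  # central difference
        total += (dxdt - f(t[i], x[i])) ** 2
    return total / M + (x[0] - x1) ** 2
```

As a sanity check, for $x(t)=t^2$ with $f(t,x)=2t$ and $x_1=0$, the derivative term vanishes (the central difference of a quadratic is exact), so the loss is essentially zero.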

Figure 3. The structure of neural network

Figure 4. Numerical solution for different number of neurons

Figure 5. Numerical solution for different hidden layers

Figure 6. Error comparison of numerical solutions of ordinary differential equations

3. Numerical Solutions of Stochastic Ordinary Differential Equations

Suppose the value at ${t}_{k}$ is ${x}_{k}$, and write $\Delta {W}_{k}={W}_{k}-{W}_{k-1}$ with $\Delta {W}_{k}\sim N\left(0,h\right)$. Three iterative schemes are listed below.

${x}_{k+1}={x}_{k}+f\left({x}_{k}\right)h+g\left({x}_{k}\right)\Delta {W}_{k},k=1,\cdots ,N-1$ (3.1)

${x}_{k+1}={x}_{k}+f\left({x}_{k}\right)h+g\left({x}_{k}\right)\Delta {W}_{k}+\left[g\left({\stackrel{˜}{x}}_{k}\right)-g\left({x}_{k}\right)\right]\left[{\left(\Delta {W}_{k}\right)}^{2}-h\right]/\left(2\sqrt{h}\right),k=1,\cdots ,N-1$ (3.2)

${x}_{k+1}={x}_{k}+{\mathrm{grad}}^{*}h+g\left({x}_{k}\right)\Delta {W}_{k}+\left[g\left({\stackrel{˜}{x}}_{k}\right)-g\left({x}_{k}\right)\right]\left[{\left(\Delta {W}_{k}\right)}^{2}-h\right]/\left(2\sqrt{h}\right),k=1,\cdots ,N-1$ (3.3)
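Schemes (3.1) and (3.2) can be sketched for a scalar SDE as follows. Here ${\stackrel{˜}{x}}_{k}$ is taken as the standard derivative-free Milstein predictor ${x}_{k}+f\left({x}_{k}\right)h+g\left({x}_{k}\right)\sqrt{h}$; this is an assumption on our part, since the paper's definition is not reproduced in this excerpt:

```python
import math
import random

def euler_maruyama(f, g, x0, T, N, seed=0):
    """Euler-Maruyama scheme, Eq. (3.1):
    x_{k+1} = x_k + f(x_k) h + g(x_k) dW_k."""
    rng = random.Random(seed)
    h = T / N
    x = x0
    for _ in range(N):
        dW = rng.gauss(0.0, math.sqrt(h))
        x = x + f(x) * h + g(x) * dW
    return x

def milstein(f, g, x0, T, N, seed=0):
    """Derivative-free Milstein scheme, Eq. (3.2), with the assumed
    predictor x~_k = x_k + f(x_k) h + g(x_k) sqrt(h)."""
    rng = random.Random(seed)
    h = T / N
    x = x0
    for _ in range(N):
        dW = rng.gauss(0.0, math.sqrt(h))
        x_pred = x + f(x) * h + g(x) * math.sqrt(h)
        x = (x + f(x) * h + g(x) * dW
             + (g(x_pred) - g(x)) * (dW**2 - h) / (2 * math.sqrt(h)))
    return x
```

With $g\equiv 0$ the noise and the Milstein correction vanish, and both schemes reduce to the deterministic Euler method; e.g. for $\text{d}x=x\,\text{d}t$, $x_0=1$, both return $(1+h)^N$.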

Figure 7. Deep learning flowchart for stochastic ordinary differential equations

4. Numerical Experiments

$\text{d}x\left(t\right)=ax\left(t\right)\text{d}t+bx\left(t\right)\text{d}W\left(t\right),t\in \left[{t}_{0},T\right]$ (4.1)

$x\left(t\right)={x}_{0}\ast \mathrm{exp}\left(\left(a-0.5{b}^{2}\right)t+bW\left(t\right)\right)$ (4.2)

1) Generate a standard Brownian motion ${\left\{{w}_{i}\right\}}_{i=1}^{N}$ and the time sequence ${\left\{{t}_{i}\right\}}_{i=1}^{N}$;

2) Compute $x\left({t}_{i}\right)={x}_{0}\ast \mathrm{exp}\left(1.375{t}_{i}+0.8W\left({t}_{i}\right)\right)$;

3) Iterate over $i=1,\cdots ,N$ to obtain the analytical-solution sequence ${\left\{{x}_{i}\right\}}_{i=1}^{N}$.
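The three steps above can be sketched as follows for the Black-Scholes equation (4.1), taking $a=1.695$ and $b=0.8$ so that $a-0.5{b}^{2}=1.375$ matches step 2 (the grid size and seed are illustrative assumptions):

```python
import math
import random

def analytic_solution(x0=1.0, T=1.0, N=100, a=1.695, b=0.8, seed=0):
    """Analytical solution (4.2) of the Black-Scholes SDE (4.1)
    evaluated along one simulated Brownian path.

    Note a - 0.5*b**2 = 1.375, matching step 2.
    """
    rng = random.Random(seed)
    h = T / N
    W = 0.0
    xs = []
    for i in range(1, N + 1):
        W += rng.gauss(0.0, math.sqrt(h))  # step 1: Brownian path
        t = i * h
        # steps 2-3: exact solution at each grid point
        xs.append(x0 * math.exp((a - 0.5 * b**2) * t + b * W))
    return xs
```

Since the exact solution is an exponential of the driving path, every sample is positive; with $b=0$ the path reduces to deterministic exponential growth $x_0 e^{at}$.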

Figure 8. Simulation results of three methods and analytical solutions

Table 1. When N = 40, the error data of the three methods

Table 2. When N = 80, the error data of the three methods

Table 3. When N = 100, the error data of the three methods

Table 4. When N = 300, the error data of the three methods

5. Conclusion

Deep Learning Method for Solving Stochastic Ordinary Differential Equations [J]. Advances in Applied Mathematics, 2022, 11(9): 6342-6352. https://doi.org/10.12677/AAM.2022.119670

1. Kloeden, P.E. and Platen, E. (1992) Stochastic Differential Equations. In: Kloeden, P.E. and Platen, E., Eds., Numerical Solution of Stochastic Differential Equations, Springer, Berlin, 103-160. https://doi.org/10.1007/978-3-662-12616-5_4

2. Hutzenthaler, M., Jentzen, A. and Kloeden, P.E. (2011) Strong and Weak Divergence in Finite Time of Euler’s Method for Stochastic Differential Equations with Non-Globally Lipschitz Continuous Coefficients. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, 467, 1563-1576. https://doi.org/10.1098/rspa.2010.0348

3. Dormand, J.R. and Prince, P.J. (1980) A Family of Embedded Runge-Kutta Formulae. Journal of Computational and Applied Mathematics, 6, 19-26. https://doi.org/10.1016/0771-050X(80)90013-3

4. Efendiev, B.I. (2020) Cauchy Problem for an Ordinary Differential Equation with a Distributed-Order Differentiation Operator. Differential Equations, 56, 658-670. https://doi.org/10.1134/S0012266120050110

5. Bismut, J.M. (1973) Conjugate Convex Functions in Optimal Stochastic Control. Journal of Mathematical Analysis and Applications, 44, 384-404. https://doi.org/10.1016/0022-247X(73)90066-8

6. Henry-Labordere, P., Tan, X. and Touzi, N. (2014) A Numerical Algorithm for a Class of BSDEs via the Branching Process. Stochastic Processes and Their Applications, 124, 1112-1140. https://doi.org/10.1016/j.spa.2013.10.005

7. Weinan, E., Hutzenthaler, M., Jentzen, A., et al. (2017) Linear Scaling Algorithms for Solving High-Dimensional Nonlinear Parabolic Differential Equations. SAM Research Report.

8. Raissi, M., Perdikaris, P. and Karniadakis, G.E. (2017) Physics Informed Deep Learning (Part I): Data-Driven Solutions of Nonlinear Partial Differential Equations.

9. Ling, J., Kurzawski, A. and Templeton, J. (2016) Reynolds Averaged Turbulence Modelling Using Deep Neural Networks with Embedded Invariance. Journal of Fluid Mechanics, 807, 155-166. https://doi.org/10.1017/jfm.2016.615

10. Tompson, J., Schlachter, K., Sprechmann, P., et al. (2017) Accelerating Eulerian Fluid Simulation with Convolutional Networks. International Conference on Machine Learning, Sydney, 6-11 August 2017, 3424-3433.

11. Jin, W., Li, Z.J., Wei, L.S., et al. (2000) The Improvements of BP Neural Network Learning Algorithm. 2000 5th International Conference on Signal Processing Proceedings, Vol. 3, 1647-1649.

12. Samek, W., Binder, A., Montavon, G., et al. (2016) Evaluating the Visualization of What a Deep Neural Network Has Learned. IEEE Transactions on Neural Networks and Learning Systems, 28, 2660-2673. https://doi.org/10.1109/TNNLS.2016.2599820

13. Lagaris, I.E., Likas, A. and Fotiadis, D.I. (1998) Artificial Neural Networks for Solving Ordinary and Partial Differential Equations. IEEE Transactions on Neural Networks, 9, 987-1000. https://doi.org/10.1109/72.712178

14. Li, J., Cheng, J., Shi, J., et al. (2012) Brief Introduction of Back Propagation (BP) Neural Network Algorithm and Its Improvement. In: Jin, D. and Lin, S., Eds., Advances in Computer Science and Information Engineering, Springer, Berlin, 553-558. https://doi.org/10.1007/978-3-642-30223-7_87

15. Evans, L.C. (2012) An Introduction to Stochastic Differential Equations. American Mathematical Society, Rhode Island. https://doi.org/10.1090/mbk/082