This is a journal of reading notes, containing a brief statement of each paper, the reading date, and some summaries. The main thread is the physics-informed neural network (PINN). If there is any mistake, I would appreciate your criticism by email: pgi314@126.com.
2022.02.13 –
This time I will go over some basic knowledge. Starting from solving equations, I focus on improving the convergence of neural networks trained to approximate equations, including how to handle loss imbalance and how to design the network structure.
- Colby L. Wight, Jia Zhao. Solving Allen-Cahn and Cahn-Hilliard Equations using the Adaptive Physics Informed Neural Networks. arXiv:2007.04542, 2020.
  Adaptive resampling, time-adaptive approach. Allen-Cahn eq, Cahn-Hilliard eq.
- Rafael Bischof, Michael Kraus. Multi-Objective Loss Balancing for Physics-Informed Deep Learning. arXiv:2110.09813, 2021.
  A review of loss-balancing methods, including Learning Rate Annealing, GradNorm, and SoftAdapt; the authors also propose a new one, Relative Loss Balancing with Random Lookback (ReLoBRaLo). Burgers’ eq, Kirchhoff plate bending eq, Helmholtz eq. A SoftAdapt sketch follows this list.
- …
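Of the methods listed above, SoftAdapt is the simplest to sketch. Below is a minimal sketch of its core idea as I understand it, assuming PyTorch; the function name `softadapt_weights` and the default temperature are my own choices, not the authors' reference implementation. Each loss term is weighted by a softmax over its recent rate of change, so a term that is stalling or growing gets more attention.

```python
import torch

def softadapt_weights(curr, prev, beta=0.1):
    """SoftAdapt-style weights (sketch): softmax over each loss term's
    recent rate of change, so the terms that are decreasing slowest
    (or increasing) receive the largest weights.

    curr, prev: 1-D tensors of the loss values at steps t and t-1.
    beta: temperature; a larger beta reacts more aggressively.
    """
    rate = curr - prev  # finite-difference rate of change per term
    return torch.softmax(beta * rate, dim=0).detach()

# Example: the residual loss has stalled while the IC/BC losses keep
# falling, so the residual term receives the largest weight.
prev = torch.tensor([0.50, 0.10, 0.10])  # [L_r, L_i, L_b] at step t-1
curr = torch.tensor([0.49, 0.05, 0.06])  # [L_r, L_i, L_b] at step t
print(softadapt_weights(curr, prev, beta=10.0))
```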
In most cases, when one loss term's order of magnitude is extremely far below another's (for example $10^{-12}$ versus $10^{-1}$), it may be a signal that the loss function is trapped in a local minimum, and we should check the learning rate or the structure of the neural network. However, when the gap is small, for example two loss terms of order $10^{-4}$ and $10^{-2}$, we can consider loss-balancing tricks instead. By doing some analysis of the gradients of the different loss terms and then applying a suitable balancing trick, we will see a better result.
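As a concrete example of analyzing the gradients of the different loss terms, here is a minimal sketch in the spirit of the Learning Rate Annealing scheme reviewed above: each non-residual term is scaled so that its gradient magnitude matches the residual's, smoothed by an exponential moving average. The helper names, the mean-absolute-gradient statistic, and the EMA details are my simplifications, assuming PyTorch, not the paper's exact formulation.

```python
import torch

def mean_abs_grad(loss, params):
    """Mean absolute gradient of one loss term w.r.t. the network parameters."""
    grads = torch.autograd.grad(loss, params, retain_graph=True, allow_unused=True)
    flat = torch.cat([g.reshape(-1) for g in grads if g is not None])
    return flat.abs().mean()

def annealed_weights(loss_r, loss_i, loss_b, params, old=None, alpha=0.9):
    """Weights that bring the IC/BC gradient scales up to the residual's,
    smoothed by an exponential moving average with rate alpha."""
    g_r = mean_abs_grad(loss_r, params)
    lam_i = g_r / (mean_abs_grad(loss_i, params) + 1e-12)
    lam_b = g_r / (mean_abs_grad(loss_b, params) + 1e-12)
    if old is not None:  # old = (lam_i, lam_b) from the previous step
        lam_i = alpha * old[0] + (1 - alpha) * lam_i
        lam_b = alpha * old[1] + (1 - alpha) * lam_b
    return lam_i.detach(), lam_b.detach()

# Inside a training step, after computing loss_r, loss_i, loss_b:
#   lam_i, lam_b = annealed_weights(loss_r, loss_i, loss_b,
#                                   list(model.parameters()), old=prev_weights)
#   total = loss_r + lam_i * loss_i + lam_b * loss_b
#   total.backward()
```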
Generally, the residual term falls much lower than the initial and boundary terms during training. I have noticed a phenomenon: with the same degree of loss imbalance, a residual term far below the initial and boundary terms (for example $L_r = 10^{-7}$, $L_i = 10^{-2}$, $L_b = 10^{-2}$) usually gives a bad result, whereas lower initial and boundary terms (for example $L_r = 10^{-2}$, $L_i = 10^{-4}$, $L_b = 10^{-4}$) give a better one. As we know, a differential equation without its initial and boundary conditions has infinitely many solutions, so when the initial and boundary terms are not small, the neural network can learn a wrong solution even though the residual term is certainly low: it has probably learned a solution of the same equation under other initial or boundary conditions. So I always put larger weights on the initial and boundary terms, to ensure that the neural network optimizes the residual loss under the correct initial and boundary conditions.
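A minimal sketch of that weighting, again assuming PyTorch; the weight values and the helper name `pinn_loss` are illustrative choices of mine, not taken from any of the papers above.

```python
import torch

# Illustrative weights: heavier IC/BC terms keep the optimizer from driving
# the residual down under the wrong initial/boundary conditions.
W_RES, W_IC, W_BC = 1.0, 100.0, 100.0

def pinn_loss(residual, u_ic_pred, u_ic_true, u_bc_pred, u_bc_true):
    mse = torch.nn.functional.mse_loss
    loss_r = (residual ** 2).mean()      # PDE residual at collocation points
    loss_i = mse(u_ic_pred, u_ic_true)   # initial-condition mismatch
    loss_b = mse(u_bc_pred, u_bc_true)   # boundary-condition mismatch
    return W_RES * loss_r + W_IC * loss_i + W_BC * loss_b
```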