Hello,
Thank you in advance for your time. I am trying to implement a PINN for solving Poisson's equation in 1D with a Dirichlet boundary condition for my bachelor's thesis. When I used the DeepXDE tutorial example from https://deepxde.readthedocs.io/en/latest/demos/pinn_forward/poisson.1d.dirichlet.html, the library gave a very accurate result, like this:
When I tried to implement a PINN on my own, with the loss being the MSE of the equation residual inside the domain (an interval) and on the boundary, I didn't get results as accurate as DeepXDE's. These are examples of my results:
I matched the parameters from the demo in the documentation as closely as I could: three hidden layers of 50 tanh units each and the Adam optimizer (just like in the demo), the same number of points (generated with tf.linspace), and the same number of epochs.
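For reference, here is a minimal sketch of my setup (my actual code is longer); the number of collocation points, learning rate, and number of training steps below are placeholders for illustration, not necessarily the demo's exact values:

```python
import numpy as np
import tensorflow as tf

tf.random.set_seed(0)

# Three hidden layers of 50 tanh units, as in the demo
model = tf.keras.Sequential([
    tf.keras.Input(shape=(1,)),
    tf.keras.layers.Dense(50, activation="tanh"),
    tf.keras.layers.Dense(50, activation="tanh"),
    tf.keras.layers.Dense(50, activation="tanh"),
    tf.keras.layers.Dense(1),
])

# Equidistant collocation points in [-1, 1] plus the two boundary points
x_dom = tf.reshape(tf.linspace(-1.0, 1.0, 16), (-1, 1))
x_bc = tf.constant([[-1.0], [1.0]])

optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)

def loss_fn():
    # u'' via two nested tapes, then the residual of -u'' = pi^2 sin(pi x)
    with tf.GradientTape() as t2:
        t2.watch(x_dom)
        with tf.GradientTape() as t1:
            t1.watch(x_dom)
            u = model(x_dom)
        u_x = t1.gradient(u, x_dom)
    u_xx = t2.gradient(u_x, x_dom)
    residual = -u_xx - np.pi**2 * tf.sin(np.pi * x_dom)
    u_bc = model(x_bc)  # Dirichlet BC: u(-1) = u(1) = 0
    return tf.reduce_mean(residual**2) + tf.reduce_mean(u_bc**2)

loss_before = float(loss_fn())
for step in range(200):
    with tf.GradientTape() as tape:
        loss = loss_fn()
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
loss_after = float(loss_fn())
print(loss_before, loss_after)
```

The loss decreases during training, but the final error is still much larger than what the DeepXDE demo reports.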
I have looked at the DeepXDE documentation and saw some methods for improving accuracy, such as gradient-enhanced PINNs and residual-based adaptive sampling. After I implemented the g-PINN loss according to the article referenced in the DeepXDE documentation, the results improved, but were still not as accurate as your library's. So I wanted to ask what methods the DeepXDE library uses to get such accurate results:

- Is gradient enhancement or residual-based adaptive sampling used by default?
- Are there perhaps dropout layers, or other layers added to the fully connected network, to improve accuracy?
- Are the weights of the different loss terms (gradient enhancement, equation residual inside the domain and on the boundary) perhaps adaptively changed during training?
- Does DeepXDE change the default parameters of the Adam optimizer used for training in the demo? Or does the library perhaps implement its own Adam optimizer, different from the one in Keras?
- For the residuals of the equation and the boundary condition in the loss, do you take the MSE, or do you calculate the loss differently?

These are just some things that I don't have implemented and thought could maybe improve the accuracy, but I am not sure.
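For context, the extra g-PINN term I added is essentially the MSE of the x-derivative of the PDE residual, on top of the usual residual loss. A self-contained sketch of how I compute it (the shallow network here is only for illustration, and the weight `w_g` is my own arbitrary choice, not something taken from DeepXDE):

```python
import numpy as np
import tensorflow as tf

tf.random.set_seed(0)
net = tf.keras.Sequential([
    tf.keras.Input(shape=(1,)),
    tf.keras.layers.Dense(50, activation="tanh"),
    tf.keras.layers.Dense(1),
])
x = tf.reshape(tf.linspace(-1.0, 1.0, 16), (-1, 1))

# Three nested tapes: u -> u_x -> u_xx, then d(residual)/dx for the g-PINN term
with tf.GradientTape() as t3:
    t3.watch(x)
    with tf.GradientTape() as t2:
        t2.watch(x)
        with tf.GradientTape() as t1:
            t1.watch(x)
            u = net(x)
        u_x = t1.gradient(u, x)
    u_xx = t2.gradient(u_x, x)
    residual = -u_xx - np.pi**2 * tf.sin(np.pi * x)
res_x = t3.gradient(residual, x)

w_g = 0.01  # weight of the gradient term (my choice, not a DeepXDE default)
loss_gpinn = tf.reduce_mean(residual**2) + w_g * tf.reduce_mean(res_x**2)
print(float(loss_gpinn))
```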
If you wouldn't mind, I would greatly appreciate it if you could tell me which accuracy improvements are implemented in DeepXDE that let it give such accurate results consistently (training the network multiple times always gave a very accurate result).
Thank you in advance for your answer and thank you again for your time.