
Convergence of Deep Gradient Flow Methods for PDEs

Published: Dec 31, 2025 18:11
1 min read
ArXiv

Analysis

This paper provides a theoretical foundation for using Deep Gradient Flow Methods (DGFMs) to solve partial differential equations (PDEs). It decomposes the generalization error into an approximation error and a training error, and shows that, under suitable conditions, the total error converges to zero as the network size and the training time grow. This is significant because it gives a mathematical guarantee for the effectiveness of DGFMs on complex PDEs, particularly in high dimensions.
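
As a rough sketch of the decomposition described above (the notation and the exact shape of the bound are assumptions, not taken verbatim from the paper), the argument has the form

\[
\mathcal{E}_{\mathrm{gen}}(N, t) \;\le\; \mathcal{E}_{\mathrm{approx}}(N) + \mathcal{E}_{\mathrm{train}}(N, t),
\qquad
\mathcal{E}_{\mathrm{gen}}(N, t) \to 0 \quad \text{as } N, t \to \infty,
\]

where N denotes the number of neurons and t the training time: the approximation term shrinks as the network grows, and the training term shrinks as the gradient flow is run for longer.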
Reference

The paper shows that the generalization error of DGFMs tends to zero as the number of neurons and the training time tend to infinity.
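
To make the two quantities in that statement concrete, below is a minimal, self-contained sketch of a gradient-flow-style PDE solver in PyTorch. It is not the paper's algorithm: the 1D Poisson problem, the energy functional, the soft boundary penalty, and all hyperparameters are illustrative assumptions. The network width plays the role of the "number of neurons" and the number of gradient steps plays the role of the "training time".

```python
# Minimal sketch, not the paper's algorithm: train a neural network toward the
# steady state of an energy gradient flow for -u'' = f on (0, 1) with
# homogeneous Dirichlet boundary conditions (exact solution: sin(pi x)).
import math
import torch

torch.manual_seed(0)

width = 64  # "number of neurons" -- the quantity the theory lets grow
net = torch.nn.Sequential(
    torch.nn.Linear(1, width), torch.nn.Tanh(),
    torch.nn.Linear(width, width), torch.nn.Tanh(),
    torch.nn.Linear(width, 1),
)

def energy(x):
    """Dirichlet-type energy: mean of (|u'|^2 / 2 - f u) plus a boundary penalty."""
    x = x.requires_grad_(True)
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    f = math.pi ** 2 * torch.sin(math.pi * x)
    interior = (0.5 * du ** 2 - f * u).mean()            # Monte Carlo quadrature
    boundary = (net(torch.zeros(1, 1)) ** 2 + net(torch.ones(1, 1)) ** 2).sum()
    return interior + 100.0 * boundary                   # soft Dirichlet conditions

# Plain gradient descent on the parameters acts as a time-discretized gradient
# flow; the step count plays the role of the "training time".
opt = torch.optim.SGD(net.parameters(), lr=1e-3)
for step in range(5000):
    opt.zero_grad()
    loss = energy(torch.rand(256, 1))                    # resampled collocation points
    loss.backward()
    opt.step()

# Compare the trained network with the exact solution sin(pi x).
with torch.no_grad():
    x_test = torch.linspace(0.0, 1.0, 5).reshape(-1, 1)
    print(torch.cat([net(x_test), torch.sin(math.pi * x_test)], dim=1))
```

In this sketch, widening the network and running more gradient steps is exactly the regime in which the paper's convergence statement applies: the generalization error is driven to zero as both grow.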