Two approaches for improving scalability and generalizability of PDE solvers: space-time formulations and neural solvers

dc.contributor.advisor Ganapathysubramanian, Baskar
dc.contributor.advisor Hsu, Ming-Chen
dc.contributor.advisor Sarkar, Soumik
dc.contributor.advisor Rossmanith, James
dc.contributor.advisor Sharma, Anupam
dc.contributor.author Khara, Biswajit
dc.contributor.department Mechanical Engineering
dc.date.accessioned 2023-06-20T22:17:04Z
dc.date.available 2023-06-20T22:17:04Z
dc.date.issued 2023-05
dc.date.updated 2023-06-20T22:17:04Z
dc.description.abstract The past hundred years have seen the development of a large number of numerical methods for solving partial differential equations (PDEs). Yet developing fast, scalable, and general methods for this purpose remains a challenging task. Granted, mathematical methods coupled with advances in computing technology have enabled practitioners to solve non-trivial problems in the analysis and design of scientific and engineering applications. But solving practical problems in near-real time, and at sufficient resolution, is still out of reach. There are several reasons for this. First, traditional algorithms attempt to mimic the physical system as closely as possible; for example, most evolution problems are traditionally simulated using algorithms that respect the sequential nature of time. But multi-scale and nonlinear behavior in time can limit the algorithm to very small time-steps and prevent the simulation from scaling. Prior work has shown that the time dimension can also be decomposed and parallelized so as to break out of this sequentiality. Second, most implicit techniques in traditional methods rely on solving linear systems, and this solve step typically determines the lower bound of the computation time. But recent developments in the study of neural network approximations have shown that such solutions can also be obtained using deep neural networks. Such methods can "learn" the solution to the PDE using optimization techniques. This is promising because, if the solution to a PDE can be modeled by a neural network, then evaluating it consists simply of a sequence of sparse/dense matrix-vector multiplications, which can be much cheaper than solving linear systems. In this work, we take a detailed look at these two issues. We formulate time-dependent parabolic PDEs in coupled space-time using a continuous Galerkin finite element method.
But doing so brings its own set of challenges: (i) the discrete system can become badly conditioned, and (ii) the computational complexity increases. The conditioning of the discrete system can be addressed by introducing stabilized methods into the space-time formulation. We show that such stabilization leads to stable and accurate solutions. We also derive a posteriori error estimates that enable adaptive refinement in space-time. We demonstrate that solutions exhibiting highly localized behavior in both space and time can be captured very well by space-time methods coupled with adaptive refinement. With respect to neural approximations for PDEs, we develop methods that can "learn" the solution to families of PDEs instead of a single instance of the PDE. While existing methods model the solution to one PDE as a neural network, we formulate a more general mapping between the input quantities and the solution. We present two such mappings: (i) from the material property to the solution and (ii) from the geometry to the solution. The neural method then "learns" this mapping using the PDE as an invariance. There are multiple ways of designing this invariance; we use both a residual-based invariance and an energy-based invariance. Once such a model is "trained", it can produce the solution to the PDE for any material property (or geometry) drawn from the training distribution. Such neural solvers typically need to be "trained" using optimization techniques. This training is generally done before real-time use of the model and is known as "off-line training". Nonetheless, the training process can be expensive and time-consuming, because modern numerical optimization requires a large number of auxiliary data structures for the automatic computation of gradients. Carrying out such large computations on high-resolution datasets therefore becomes challenging.
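The two invariances mentioned above can be illustrated with a minimal sketch (this is illustrative only, not the dissertation's code; the function names and the 1D model problem are assumptions). For the model problem -(nu * u')' = f on [0, 1] with homogeneous Dirichlet boundaries, a residual-based invariance penalizes the strong-form residual, while an energy-based (Ritz) invariance minimizes the Dirichlet energy whose minimizer is the weak solution:

```python
import numpy as np

# Illustrative sketch of the two PDE "invariances" for -(nu * u')' = f
# on [0, 1] with u(0) = u(1) = 0, using finite differences on a uniform grid.

def residual_loss(u, nu, f, h):
    """Residual-based invariance: mean-squared strong-form residual
    r = -(nu * u')' - f, evaluated at interior grid points."""
    flux = nu[:-1] * np.diff(u) / h          # nu * u' at cell midpoints
    r = -np.diff(flux) / h - f[1:-1]         # -(nu u')' - f at interior nodes
    return np.mean(r ** 2)

def energy_loss(u, nu, f, h):
    """Energy-based (Ritz) invariance: discrete Dirichlet energy
    E[u] = integral of (nu/2)(u')^2 - f*u, minimized by the weak solution."""
    du = np.diff(u) / h                      # u' at cell midpoints
    return h * np.sum(0.5 * nu[:-1] * du ** 2) - h * np.sum(f * u)

# Evaluate both on the exact solution of -u'' = pi^2 sin(pi x), nu = 1:
n = 201
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
nu = np.ones(n)
f = np.pi ** 2 * np.sin(np.pi * x)
u_exact = np.sin(np.pi * x)
```

In a neural solver, `u` would be the network output on collocation points and either loss would be minimized over the network weights; here the nodal values stand in for that output.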
We overcome this by applying ideas from "transfer learning" and multigrid methods: we train the neural model incrementally, starting with coarse models and then solving on a sequence of progressively finer meshes until we reach the target resolution.
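The coarse-to-fine idea can be sketched as follows (a toy sketch, not the dissertation's implementation: nodal values optimized by plain gradient descent stand in for network weights, and the grid hierarchy and step sizes are assumptions). Each finer level is warm-started by interpolating the coarse result, analogous to multigrid prolongation:

```python
import numpy as np

# Toy coarse-to-fine training for -u'' = f, u(0) = u(1) = 0: minimize the
# discrete Dirichlet energy by gradient descent, warm-starting each finer
# grid from the interpolated coarse solution (multigrid-style prolongation).

def energy_grad(u, f, h):
    """Gradient of the discrete energy w.r.t. interior nodal values."""
    g = np.zeros_like(u)
    g[1:-1] = -(u[2:] - 2 * u[1:-1] + u[:-2]) / h - h * f[1:-1]
    return g

def train(u, f, h, steps, lr):
    """Plain gradient descent; a stand-in for the neural training loop."""
    for _ in range(steps):
        u -= lr * energy_grad(u, f, h)
    return u

u = np.zeros(9)                                       # coarsest grid
for n in (9, 17, 33, 65):                             # grid hierarchy
    x = np.linspace(0.0, 1.0, n)
    u = np.interp(x, np.linspace(0.0, 1.0, len(u)), u)  # prolongate warm start
    f = np.pi ** 2 * np.sin(np.pi * x)
    u = train(u, f, x[1] - x[0], steps=2000, lr=0.3 * (x[1] - x[0]))

# Error of the finest-level result against the exact solution sin(pi x):
err = np.max(np.abs(u - np.sin(np.pi * np.linspace(0.0, 1.0, 65))))
```

Because the coarse levels are cheap and already capture the smooth part of the solution, the expensive fine-level optimization starts close to the answer and needs far fewer iterations than training from scratch at the target resolution.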
dc.format.mimetype PDF
dc.identifier.orcid 0000-0002-5964-8739
dc.identifier.uri https://dr.lib.iastate.edu/handle/20.500.12876/7wbO34Pv
dc.language.iso en
dc.language.rfc3066 en
dc.subject.disciplines Applied mathematics en_US
dc.subject.disciplines Fluid mechanics en_US
dc.subject.disciplines Artificial intelligence en_US
dc.subject.keywords finite element analysis en_US
dc.subject.keywords neural pde solvers en_US
dc.subject.keywords neural ritz method en_US
dc.subject.keywords physics informed machine learning en_US
dc.subject.keywords scientific computation en_US
dc.subject.keywords space-time methods en_US
dc.title Two approaches for improving scalability and generalizability of PDE solvers: space-time formulations and neural solvers
dc.type dissertation en_US
dc.type.genre dissertation en_US
dspace.entity.type Publication
relation.isOrgUnitOfPublication 6d38ab0f-8cc2-4ad3-90b1-67a60c5a6f59
thesis.degree.discipline Applied mathematics en_US
thesis.degree.discipline Fluid mechanics en_US
thesis.degree.discipline Artificial intelligence en_US
thesis.degree.grantor Iowa State University en_US
thesis.degree.level dissertation
thesis.degree.name Doctor of Philosophy en_US
File
Original bundle
Name: Khara_iastate_0097E_20577.pdf
Size: 6.31 MB
Format: Adobe Portable Document Format
License bundle
Name: license.txt
Size: 0 B
Format: Item-specific license agreed upon to submission