Encoding Invariances in Deep Generative Models

Date
2019-06-04
Authors
Joshi, Ameya
Pokuri, Balaji
Sarkar, Soumik
Ganapathysubramanian, Baskar
Hegde, Chinmay
Department
Mechanical Engineering; Electrical and Computer Engineering; Plant Sciences Institute
Abstract

Reliable training of generative adversarial networks (GANs) typically requires massive datasets in order to model complicated distributions. However, in several applications, training samples obey invariances that are known a priori; for example, in complex physics simulations, the training data obey universal laws encoded as well-defined mathematical equations. In this paper, we propose a new generative modeling approach, InvNet, that can efficiently model data spaces with known invariances. We devise an adversarial training algorithm that encodes these invariances into the generated data distribution. We validate our framework in three experimental settings: generating images with fixed motifs; solving nonlinear partial differential equations (PDEs); and reconstructing two-phase microstructures with desired statistical properties. We complement our experiments with several theoretical results.
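To make the idea concrete, below is a minimal sketch of adversarial training with an added invariance penalty. This is not the paper's InvNet implementation: the network architectures, the `invariance_residual` function, and the weight `lam` are all illustrative assumptions; in the paper the penalty would instead measure violation of a known constraint such as a PDE residual or a target microstructure statistic.

```python
# Minimal sketch (assumed, not the authors' code): a GAN generator loss
# augmented with a differentiable invariance penalty.
import torch
import torch.nn as nn

latent_dim, data_dim = 64, 32 * 32

generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, data_dim), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(data_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
)

def invariance_residual(x):
    # Hypothetical constraint used as a stand-in: each generated sample
    # should have zero mean. A real application would substitute a PDE
    # residual or a statistical property of the samples here.
    return x.mean(dim=1).pow(2).mean()

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()
lam = 10.0  # assumed weight on the invariance penalty

for step in range(100):
    real = torch.randn(32, data_dim)  # placeholder "real" batch
    z = torch.randn(32, latent_dim)
    fake = generator(z)

    # Discriminator update: standard adversarial loss.
    opt_d.zero_grad()
    d_loss = (bce(discriminator(real), torch.ones(32, 1))
              + bce(discriminator(fake.detach()), torch.zeros(32, 1)))
    d_loss.backward()
    opt_d.step()

    # Generator update: adversarial loss plus invariance penalty.
    opt_g.zero_grad()
    g_loss = (bce(discriminator(fake), torch.ones(32, 1))
              + lam * invariance_residual(fake))
    g_loss.backward()
    opt_g.step()
```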

Comments

This is a pre-print of the article Shah, Viraj, Ameya Joshi, Sambuddha Ghosal, Balaji Pokuri, Soumik Sarkar, Baskar Ganapathysubramanian, and Chinmay Hegde. "Encoding Invariances in Deep Generative Models." arXiv preprint arXiv:1906.01626 (2019). Posted with permission.

Copyright
2019