Abstract

As information exists in various modalities in the real world, effective interaction and fusion of multimodal information play a key role in the creation and perception of multimodal data in computer vision and deep learning research. With its superb power in modeling the interaction among multimodal information, multimodal image synthesis and editing has become a hot research topic in recent years. Instead of providing explicit guidance for network training, multimodal guidance offers intuitive and flexible means for image synthesis and editing. On the other hand, this field also faces several challenges, such as the alignment of multimodal features, the synthesis of high-resolution images, and faithful evaluation metrics. In this survey, we comprehensively contextualize recent advances in multimodal image synthesis and editing and formulate taxonomies according to data modalities and model types. We start with an introduction to the different guidance modalities in image synthesis and editing, and then describe multimodal image synthesis and editing approaches extensively according to their model types. After that, we describe benchmark datasets and evaluation metrics as well as corresponding experimental results. Finally, we provide insights into current research challenges and possible directions for future research.

[Figure: overview]

Gauge Transformations

In everyday usage, a gauge defines a measuring system, e.g., a pressure gauge or a temperature gauge. In the context of neural fields, a measuring system (i.e., a gauge) is a specification of the parameters used to index a neural field, e.g., the 3D Cartesian coordinate system, the triplane in EG3D, or the hash table in Instant-NGP. The transformation between different measuring systems is dubbed a Gauge Transformation.
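To make these gauges concrete, below is a minimal PyTorch sketch contrasting two pre-defined gauges: direct Cartesian indexing through an MLP, and a triplane projection in the style of EG3D. All names and shapes (including the field_mlp module) are illustrative assumptions, not the original implementations.

import torch
import torch.nn.functional as F

def cartesian_gauge(field_mlp, x):
    # Cartesian gauge: index the field directly by 3D coordinates.
    return field_mlp(x)                              # x: (N, 3) -> features (N, C)

def triplane_gauge(planes, x):
    # Triplane gauge: project each 3D point onto the XY, XZ, and YZ planes
    # and sum the bilinearly sampled plane features.
    # planes: (3, C, H, W); x: (N, 3), assumed normalized to [-1, 1].
    feats = 0
    for i, dims in enumerate([[0, 1], [0, 2], [1, 2]]):
        uv = x[:, dims].view(1, -1, 1, 2)            # sampling grid of shape (1, N, 1, 2)
        f = F.grid_sample(planes[i:i + 1], uv, align_corners=True)
        feats = feats + f[0, :, :, 0].t()            # accumulate (N, C)
    return feats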
As illustrated above, a gauge transformation can be a pre-defined function. Intuitively, however, a learnable gauge transformation is more favorable than a pre-defined one, as it can be optimized towards the final use case of the neural field and can thus yield better performance. For learning neural gauge fields, we distinguish between two cases: continuous mappings (e.g., to a 2D plane or a sphere surface) and discrete mappings (e.g., to a hash table).

Continuous Gauge Transformations

[Figure: overview]
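As a concrete illustration of the continuous case, the following is a minimal PyTorch sketch of a learnable 3D-to-2D-plane gauge: a small MLP maps each 3D point to UV coordinates in [-1, 1]^2, which then index a learnable 2D feature plane. The architecture and sizes are illustrative assumptions rather than the exact configuration used in the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ContinuousGauge(nn.Module):
    def __init__(self, feat_dim=32, plane_res=256):
        super().__init__()
        # Learnable gauge transformation: 3D point -> UV in [-1, 1]^2.
        self.mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 2), nn.Tanh())
        # Learnable 2D feature plane indexed by the mapped UV coordinates.
        self.plane = nn.Parameter(torch.randn(1, feat_dim, plane_res, plane_res) * 0.01)

    def forward(self, x):                            # x: (N, 3)
        uv = self.mlp(x).view(1, -1, 1, 2)           # (1, N, 1, 2)
        feat = F.grid_sample(self.plane, uv, align_corners=True)
        return feat[0, :, :, 0].t()                  # (N, feat_dim)

Since both the mapping MLP and the feature plane are differentiable, this gauge can be optimized end-to-end with the downstream task loss (e.g., a rendering loss).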

Discrete Gauge Transformations

[Figure: overview]
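For the discrete case, the sketch below assumes the gauge maps each 3D point to an entry of a feature table, in the spirit of (but not identical to) Instant-NGP's fixed spatial hash. A softmax relaxation, optionally a Gumbel-softmax, keeps the index selection differentiable; all names and sizes are illustrative.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DiscreteGauge(nn.Module):
    def __init__(self, table_size=4096, feat_dim=32, tau=1.0):
        super().__init__()
        # Learnable gauge transformation: 3D point -> logits over table indices.
        self.index_net = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, table_size))
        self.table = nn.Parameter(torch.randn(table_size, feat_dim) * 0.01)
        self.tau = tau

    def forward(self, x, hard=False):                # x: (N, 3)
        logits = self.index_net(x)                   # (N, table_size)
        if hard:
            # Straight-through one-hot indices (discrete at forward time).
            w = F.gumbel_softmax(logits, tau=self.tau, hard=True)
        else:
            # Soft index weights (fully differentiable relaxation).
            w = F.softmax(logits / self.tau, dim=-1)
        return w @ self.table                        # (N, feat_dim)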


Regularization for Gauge Transformation

We found that the learning of gauge transformations easily gets stuck in local minima. Specifically, the gauge transformation tends to collapse to a small region in the continuous case, or to a small number of indices in the discrete case, highlighting the need for additional regularization to enable effective learning of continuous and discrete gauge transformations.
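As an illustration of what such regularizers might look like (the exact losses in the paper may differ), the sketch below penalizes collapse in both cases: an entropy term on the batch-averaged index usage for the discrete case, and a pairwise repulsion term on the mapped coordinates for the continuous case.

import torch

def discrete_usage_entropy(w):
    # w: (N, table_size) soft index weights. High entropy of the averaged
    # usage distribution means many table entries are actually used.
    usage = w.mean(dim=0)                            # (table_size,)
    return -(usage * (usage + 1e-9).log()).sum()

def continuous_spread(uv):
    # uv: (N, 2) mapped coordinates. Minimizing exp(-distance) over all
    # pairs pushes points apart, discouraging collapse to a small region.
    d = torch.cdist(uv, uv)                          # (N, N) pairwise distances
    return torch.exp(-d).mean()

Both terms would be added to the task loss with small weights, e.g., loss = rendering_loss + lambda_c * continuous_spread(uv) - lambda_d * discrete_usage_entropy(w).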

[Figure: overview]


Results of Gauge Transformation

[Figure: overview]


Application in Texture Editing

[Figure: overview]



Citation

@inproceedings{zhan2023general,
     title = {General Neural Gauge Fields},
     author = {Zhan, Fangneng and Liu, Lingjie and Kortylewski, Adam and Theobalt, Christian},
     booktitle = {International Conference on Learning Representations},
     year = {2023}
}