Generalized Loss Functions for Generative Adversarial Networks

dc.contributor.author: Bhatia, Himesh
dc.contributor.department: Mathematics and Statistics
dc.contributor.supervisor: Alajaji, Fady
dc.contributor.supervisor: Gharesifard, Bahman
dc.date.accessioned: 2020-10-27T20:59:05Z
dc.date.available: 2020-10-27T20:59:05Z
dc.degree.grantor: Queen's University at Kingston
dc.description.abstract: This thesis investigates the use of parameterized families of information-theoretic measures to generalize the loss functions of generative adversarial networks (GANs), with the objective of improving performance. A new generator loss function, called the least kth-order GAN (LkGAN), is introduced; it generalizes least squares GANs (LSGANs) by using a kth-order absolute-error distortion measure with k ≥ 1, recovering the LSGAN loss function when k = 2. It is shown that minimizing this generalized loss function under an (unconstrained) optimal discriminator is equivalent to minimizing the kth-order Pearson-Vajda divergence. A novel loss function for the original GAN, based on Rényi information measures with parameter α, is presented next: the generator loss function is generalized in terms of Rényi cross-entropy functionals. For any α > 0, this generalized loss function is shown to preserve the equilibrium point satisfied by the original GAN loss, based on the Jensen-Rényi divergence, a natural extension of the Jensen-Shannon divergence. It is also proved that the Rényi-centric loss function reduces to the original GAN loss function as α approaches 1. Experimental results on the MNIST and CelebA datasets, under both DCGAN and StyleGAN architectures, indicate that the proposed LkGAN and RenyiGAN systems confer performance benefits by virtue of the extra degrees of freedom provided by the parameters k and α, respectively. More specifically, the experiments show improvements in the quality of the generated images, as measured by the Fréchet Inception Distance (FID) score, along with greater training stability across extensive simulations.
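The LkGAN generator loss described in the abstract can be sketched as follows. This is a minimal illustration based only on the abstract's description (a kth-order absolute-error distortion that reduces to LSGAN's squared error at k = 2); the target label gamma and any normalization are assumptions, and the thesis's exact formulation may differ:

```python
def lk_generator_loss(d_fake_outputs, gamma=1.0, k=2.0):
    """Sketch of an LkGAN-style generator loss.

    d_fake_outputs: discriminator scores on generated samples.
    gamma: target label the generator aims for (assumed value).
    k >= 1: distortion order; k = 2 recovers the LSGAN squared-error loss.
    """
    n = len(d_fake_outputs)
    # Mean k-th order absolute error between scores and the target label.
    return sum(abs(d - gamma) ** k for d in d_fake_outputs) / n

# At k = 2 this coincides with the LSGAN mean-squared-error generator loss.
scores = [0.2, 0.6, 0.9]
lsgan_loss = sum((d - 1.0) ** 2 for d in scores) / len(scores)
```

Setting k below or above 2 changes how strongly outlying discriminator scores are penalized, which is the extra degree of freedom the abstract refers to.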
dc.description.degree: M.A.Sc.
dc.identifier.uri: http://hdl.handle.net/1974/28233
dc.language.iso: eng
dc.relation.ispartofseries: Canadian theses
dc.rights: Queen's University's Thesis/Dissertation Non-Exclusive License for Deposit to QSpace and Library and Archives Canada
dc.rights: ProQuest PhD and Master's Theses International Dissemination Agreement
dc.rights: Intellectual Property Guidelines at Queen's University
dc.rights: Copying and Preserving Your Thesis
dc.rights: This publication is made available by the authority of the copyright owner solely for the purpose of private study and research and may not be copied or reproduced except as permitted by the copyright laws without written authority from the copyright owner.
dc.subject: Unsupervised learning
dc.subject: Generative models
dc.subject: Machine learning
dc.subject: Artificial intelligence
dc.subject: Optimization
dc.subject: Information theory
dc.title: Generalized Loss Functions for Generative Adversarial Networks
dc.type: thesis
Files

Original bundle (1 of 1):
- Bhatia_Himesh_202010_MASC.pdf (27.77 MB, Adobe Portable Document Format)

License bundle (1 of 1):
- license.txt (2.6 KB, Item-specific license agreed upon to submission)