    Generalized Loss Functions for Generative Adversarial Networks

    File
    Bhatia_Himesh_202010_MASC.pdf (27.77 MB)
    Author
    Bhatia, Himesh
    Abstract
    This thesis investigates the use of parameterized families of information-theoretic measures to generalize the loss functions of generative adversarial networks (GANs), with the objective of improving performance. A new generator loss function, called the least kth-order GAN (LkGAN) loss, is introduced, generalizing least squares GANs (LSGANs) by using a kth-order absolute-error distortion measure with k greater than or equal to 1 (which recovers the LSGAN loss function when k = 2). It is shown that minimizing this generalized loss function under an (unconstrained) optimal discriminator is equivalent to minimizing the kth-order Pearson-Vajda divergence.
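
    As a rough illustration of the kth-order distortion described above, the following sketch (not taken from the thesis; the discriminator outputs and the target value gamma are placeholder assumptions) evaluates such a generator loss on a hypothetical batch and recovers the familiar squared-error LSGAN form at k = 2.

        import numpy as np

        def lk_generator_loss(d_fake, gamma=1.0, k=2.0):
            # Mean kth-order absolute-error distortion |D(G(z)) - gamma|^k, with k >= 1.
            return np.mean(np.abs(d_fake - gamma) ** k)

        d_fake = np.array([0.2, 0.6, 0.9, 0.4])    # hypothetical discriminator outputs D(G(z))
        print(lk_generator_loss(d_fake, k=2.0))    # k = 2: the LSGAN squared-error loss
        print(lk_generator_loss(d_fake, k=1.5))    # any k >= 1: the extra degree of freedom of LkGAN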

    A novel loss function for the original GANs using Rényi information measures with parameter alpha is next presented. The GAN's generator loss function is generalized in terms of Rényi cross-entropy functionals. For any alpha > 0, this generalized loss function is shown to preserve the equilibrium point satisfied by the original GAN loss based on the Jensen-Rényi divergence, a natural extension of the Jensen-Shannon divergence. It is also proved that the Rényi-centric loss function reduces to the original GAN loss function as alpha approaches 1.
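
    The limiting behaviour as alpha approaches 1 can be checked numerically. The sketch below is an illustrative assumption rather than the thesis's exact pairing of generator and discriminator terms: it evaluates the Rényi-style functional (1 / (alpha - 1)) log E[X^(alpha - 1)] on hypothetical values of 1 - D(G(z)) and compares it with the expected-log term appearing in the original GAN generator loss, which it approaches as alpha tends to 1.

        import numpy as np

        def renyi_log_term(x, alpha):
            # Renyi-style functional of order alpha (alpha > 0, alpha != 1):
            # (1 / (alpha - 1)) * log E[x^(alpha - 1)].
            return np.log(np.mean(x ** (alpha - 1.0))) / (alpha - 1.0)

        one_minus_d = np.array([0.3, 0.7, 0.5, 0.9])   # hypothetical 1 - D(G(z)) values
        for alpha in (0.5, 2.0, 1.001):
            print(alpha, renyi_log_term(one_minus_d, alpha))
        print("alpha -> 1:", np.mean(np.log(one_minus_d)))   # Shannon expected-log term, approached as alpha -> 1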

    Experimental results on the MNIST and CelebA datasets, under both DCGAN and StyleGAN architectures, indicate that the proposed LkGAN and RényiGAN systems confer performance benefits by virtue of the extra degrees of freedom provided by the parameters k and alpha, respectively. More specifically, experiments show improvements in the quality of the generated images, as measured by the Fréchet Inception Distance (FID) score, as well as improved training stability, as demonstrated by extensive simulations.
    URI for this record
    http://hdl.handle.net/1974/28233
    Collections
    • Department of Mathematics and Statistics Graduate Theses
    • Queen's Graduate Theses and Dissertations
