
Why isn't DiffAugment used as a layer in the discriminator? #101

Description

@naddeoa

I'm playing around with GANs and trying to get this toy Pokémon generator to produce plausible Pokémon. One of the tools I've picked up is DiffAugment. Based on the original paper and examples, it looks like it's typically used as a function applied to the discriminator's input.

It seems like an obvious convenience to include it directly in the discriminator's model as an early layer that respects the trainable parameter (like this), but I can't find anyone actually using it that way. Is there something wrong with embedding it as a layer, or is this just the result of everyone copy/pasting from the original examples?
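
Concretely, this is roughly what I have in mind. It's only a sketch: `DiffAugmentLayer` and `build_discriminator` are names I made up, I'm assuming `DiffAugment` is the function from `DiffAugment_tf.py` in this repo (I'm not sure whether it runs as-is under TF2/Keras eager mode), and the policy string and NHWC channel ordering are guesses on my part:

```python
import tensorflow as tf
from DiffAugment_tf import DiffAugment  # augmentation function shipped in this repo (assumed importable)


class DiffAugmentLayer(tf.keras.layers.Layer):
    """Wraps DiffAugment so it only runs while training (the layer name is my own)."""

    def __init__(self, policy='color,translation,cutout', **kwargs):
        super().__init__(**kwargs)
        self.policy = policy

    def call(self, inputs, training=None):
        if training:
            # Assumes NHWC inputs; the repo's TF code may expect a different layout.
            return DiffAugment(inputs, policy=self.policy)
        return inputs


def build_discriminator(img_shape=(64, 64, 3)):
    inputs = tf.keras.Input(shape=img_shape)
    x = DiffAugmentLayer()(inputs)  # augmentation as the first "layer"
    x = tf.keras.layers.Conv2D(64, 4, strides=2, padding='same')(x)
    x = tf.keras.layers.LeakyReLU(0.2)(x)
    x = tf.keras.layers.Conv2D(128, 4, strides=2, padding='same')(x)
    x = tf.keras.layers.LeakyReLU(0.2)(x)
    x = tf.keras.layers.Flatten()(x)
    return tf.keras.Model(inputs, tf.keras.layers.Dense(1)(x), name='discriminator')
```

The only behavioral difference I can see versus the function-call style is that the augmentation is skipped automatically at inference time via the training flag.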

I should also say that I'm confused about the backpropagation component. I'm new to machine learning: does backprop through the augmentation effectively happen automatically as a side effect of including it in the model, or do I need to change the way I apply gradients as well?
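
For reference, my training loop is a fairly standard custom train step; everything below (the optimizers, losses, `latent_dim`) is just my own setup, not taken from the DiffAugment examples. My assumption is that because the augmentation ops run inside the gradient tapes and are differentiable, autodiff handles them without any changes on my end, but that's exactly what I'd like to confirm:

```python
import tensorflow as tf

# `generator` and `discriminator` are Keras models, with the DiffAugment layer
# embedded at the top of the discriminator as sketched above.
bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)
g_opt = tf.keras.optimizers.Adam(2e-4, beta_1=0.5)
d_opt = tf.keras.optimizers.Adam(2e-4, beta_1=0.5)


@tf.function
def train_step(generator, discriminator, real_images, latent_dim=128):
    noise = tf.random.normal([tf.shape(real_images)[0], latent_dim])
    with tf.GradientTape() as g_tape, tf.GradientTape() as d_tape:
        fake_images = generator(noise, training=True)
        # My assumption: the augmentation runs inside the tapes and its ops are
        # differentiable, so gradients flow through it without manual changes.
        real_logits = discriminator(real_images, training=True)
        fake_logits = discriminator(fake_images, training=True)
        d_loss = (bce(tf.ones_like(real_logits), real_logits)
                  + bce(tf.zeros_like(fake_logits), fake_logits))
        g_loss = bce(tf.ones_like(fake_logits), fake_logits)
    d_grads = d_tape.gradient(d_loss, discriminator.trainable_variables)
    g_grads = g_tape.gradient(g_loss, generator.trainable_variables)
    d_opt.apply_gradients(zip(d_grads, discriminator.trainable_variables))
    g_opt.apply_gradients(zip(g_grads, generator.trainable_variables))
    return d_loss, g_loss
```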
