- Think of a GAN as one network
Training a simple neural network is easy to follow as a straight line: the network computes its output, the difference between the output and the correct answer is calculated, and the weights are updated by error backpropagation. A GAN, however, has two networks and two kinds of loss, so I wondered how its training is actually computed.
As I read the code, I began to think it would be easier to understand a GAN not as two separate networks but as a single group of networks:
one pipeline running input → generator → discriminator → output.
- The true output of a GAN
What do you think the output of a GAN, viewed as one network, is?
It is just one number, for example “0.2”.
Training proceeds based on this single number.
In one training step, two values are produced:
the output when a real image is fed into the GAN (the real output), and
the output when noise is fed into the GAN (the fake output).
Each of these is the single number mentioned above, for example:
real output: “0.6”
fake output: “0.3”
There is not one number per pixel. It really is just one number.
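To make that concrete, here is a minimal sketch of a discriminator that squashes an image down to a single scalar (the image size, weights, and the weighted-sum structure are all made up for this illustration; a real discriminator is a deep network, but its final output is still one number):

```python
import math
import random

random.seed(0)

def discriminator(image, weights):
    # Hypothetical discriminator: a weighted sum of pixel values,
    # squashed through a sigmoid so the result lands in (0, 1).
    score = sum(p * w for p, w in zip(image, weights))
    return 1 / (1 + math.exp(-score))

image = [random.random() for _ in range(64)]        # a toy 64-"pixel" image
weights = [random.uniform(-1, 1) for _ in range(64)]  # toy learned weights

output = discriminator(image, weights)
print(output)  # a single scalar in (0, 1), like the "0.2" above
```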
- The correct output
The correct answers (targets) for these outputs are:
“1” for real
“0” for fake
In the example above, the real output “0.6” differs from its target “1” by 0.4 (by simple subtraction).
Likewise, the fake output “0.3” differs from its target “0” by 0.3 (again by simple subtraction).
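The arithmetic above can be written out directly (this uses the simple subtraction from the example; real GAN implementations usually use a cross-entropy loss instead, shown further below):

```python
real_output, fake_output = 0.6, 0.3

# How far the real output is from its target "1"
real_diff = 1 - real_output   # 0.4

# How far the fake output is from its target "0"
fake_diff = fake_output - 0   # 0.3

print(real_diff, fake_diff)
```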
- Training based on the difference between the output and the correct answer
These differences are called “losses”.
The discriminator loss combines:
the difference between the real output and “1”, and
the difference between the fake output and “0”.
In short, the discriminator loss measures how far the network is from judging the real as genuine and the fake as fake.
The generator loss is the difference between the fake output and “1”.
In short, the generator loss measures how far the fake output is from being judged “genuine (1)”.
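In practice these losses are usually computed with binary cross-entropy rather than plain subtraction, but they penalize exactly the same differences. A sketch using the example numbers:

```python
import math

real_output, fake_output = 0.6, 0.3

# Discriminator loss (binary cross-entropy):
# penalty for real output not being 1, plus penalty for fake output not being 0.
d_loss = -(math.log(real_output) + math.log(1 - fake_output))

# Generator loss: penalty for the fake output not being 1.
g_loss = -math.log(fake_output)

print(d_loss, g_loss)
```

The closer the real output gets to 1 and the fake output gets to 0, the smaller the discriminator loss; the closer the fake output gets to 1, the smaller the generator loss.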
Based on these losses, the network weights are adjusted by error backpropagation.
Only the discriminator's weights are adjusted based on the discriminator loss, and
only the generator's weights are adjusted based on the generator loss.
Backpropagation nudges the weights little by little so that each loss becomes as small as possible:
the discriminator is updated to minimize its error against the correct answers, and
the generator is updated to bring the fake output closer to “real (1)”.
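As a toy illustration of these alternating updates, here is a sketch with a one-parameter generator and a one-parameter discriminator (every name and number here is invented for the example, and finite-difference gradients stand in for real backpropagation). The key point is visible in the loop: the discriminator loss updates only `w_d`, and the generator loss updates only `w_g`:

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def generator(z, wg):          # toy generator: one weight, one input
    return wg * z

def discriminator(x, wd):      # toy discriminator: one weight, sigmoid output
    return sigmoid(wd * x)

def d_loss(wd, wg, real, z):
    # Discriminator wants real -> 1 and fake -> 0.
    fake = generator(z, wg)
    return -(math.log(discriminator(real, wd)) +
             math.log(1 - discriminator(fake, wd)))

def g_loss(wd, wg, z):
    # Generator wants its fake to be judged "genuine (1)".
    return -math.log(discriminator(generator(z, wg), wd))

def grad(f, w, eps=1e-5):
    # Finite-difference gradient: a stand-in for backpropagation.
    return (f(w + eps) - f(w - eps)) / (2 * eps)

w_g, w_d = 0.1, 0.1
lr, real, z = 0.1, 2.0, 1.0

for _ in range(100):
    # Update ONLY the discriminator weight using the discriminator loss.
    w_d -= lr * grad(lambda w: d_loss(w, w_g, real, z), w_d)
    # Update ONLY the generator weight using the generator loss.
    w_g -= lr * grad(lambda w: g_loss(w_d, w, z), w_g)

print(w_d, w_g)
```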
This yields the often-heard description of the GAN structure: “the discriminator is trained to distinguish genuine from fake, and the generator is trained to fool the discriminator into accepting fakes as real.”
A trained generator then produces realistic-looking images, but during training, the image produced by the generator is only an intermediate representation inside the whole GAN network.
Please follow me on Twitter: https://twitter.com/JackdeS11, and please clap 👏.