LOTUS Theorem in original GANs formulation.

How do we apply the Law Of The Unconscious Statistician to Ian Goodfellow's original GANs value function formulation?


Author: Andrea Bonvini

Published: Dec. 10, 2020

Citation: Bonvini, 2020

In this brief post we want to prove a passage of the original GANs paper by Ian Goodfellow. Specifically, we want to prove that the following equation is satisfied:

$$\int_z p_z(z)\log\big(1-D(G(z))\big)\,dz = \int_x p_g(x)\log\big(1-D(x)\big)\,dx$$
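To connect this with the name in the title: the Law of the Unconscious Statistician (LOTUS) says that for a random variable $z$ with density $p_z$ and a measurable function $g$ we have $\mathbb{E}[g(z)] = \int g(z)\,p_z(z)\,dz$, so the identity above is the statement that the same expectation can be taken either over the latent variable $z$ or over the generated sample $x = G(z)$:

$$\mathbb{E}_{z\sim p_z}\big[\log(1-D(G(z)))\big] = \mathbb{E}_{x\sim p_g}\big[\log(1-D(x))\big].$$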

For a continuous random variable $z$, let $x = G(z)$ and $z = G^{-1}(x)$, and suppose that $G$ is differentiable and that its inverse $G^{-1}$ is monotonic. By the formula for the derivative of an inverse function we have that
$$\frac{dz}{dx}\,\frac{dx}{dz} = 1$$
$$\frac{dz}{dx} = \frac{1}{\dfrac{dx}{dz}}$$
$$\frac{dz}{dx} = \frac{1}{\dfrac{d\,G(G^{-1}(x))}{d\,G^{-1}(x)}}$$
$$\frac{dz}{dx} = \frac{1}{G'(G^{-1}(x))}$$
$$dz = \frac{1}{G'(G^{-1}(x))}\,dx$$
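As a quick sanity check (an illustrative choice of generator, not one appearing in the paper), take $G(z) = e^z$, so that $G^{-1}(x) = \log x$ and $G'(z) = e^z$. Then

$$dz = \frac{1}{G'(G^{-1}(x))}\,dx = \frac{1}{e^{\log x}}\,dx = \frac{dx}{x},$$

which is exactly what we get by differentiating $z = \log x$ directly.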
so that by a change of variables we can rewrite everything as a function of $x$ (recall that $z = G^{-1}(x)$):
$$\int_z p_z(z)\log\big(1-D(G(z))\big)\,dz =$$
$$\int_x p_z(z)\log\big(1-D(x)\big)\,\frac{1}{G'(G^{-1}(x))}\,dx =$$
$$\int_x p_z(z)\,\frac{1}{G'(G^{-1}(x))}\,\log\big(1-D(x)\big)\,dx$$
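Continuing the illustrative example with $G(z) = e^z$ (again, an assumption made only for concreteness), this change of variables reads

$$\int_z p_z(z)\log\big(1-D(e^z)\big)\,dz = \int_x p_z(\log x)\,\frac{1}{x}\,\log\big(1-D(x)\big)\,dx.$$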
Now, notice that the cumulative distribution function $P_X(x): \mathbb{R}^n \to [0,1]$ is
$$P_X(x) = \Pr(X \le x) = \Pr(X_1 \le x_1, X_2 \le x_2, \dots, X_n \le x_n).$$
We observe that:

$$P_X(x) = \Pr(X \le x) = \Pr(G(Z) \le x) = \Pr(Z \le G^{-1}(x)) = P_Z(G^{-1}(x)),$$
where the step $\Pr(G(Z) \le x) = \Pr(Z \le G^{-1}(x))$ uses the monotonicity assumed above.
From here $P_X(x) = P_Z(G^{-1}(x))$, i.e. $P_X(x) = P_Z(z)$.
We take the derivative w.r.t. $x$:
$$\frac{dP_X(x)}{dx} = \frac{dP_Z(z)}{dx}$$
$$\frac{dP_X(x)}{dx} = \frac{dP_Z(z)}{dz}\,\frac{dz}{dx}$$
$$p_X(x) = p_z(z)\,\frac{1}{G'(G^{-1}(x))}$$
For clarity we rename $p_X(x)$ as $p_g(x)$, since it represents the distribution learned by our generator $G$; we have
$$p_g(x) = p_z(z)\,\frac{1}{G'(G^{-1}(x))}.$$
Substituting this into the change-of-variables integral above gives exactly
$$\int_x p_g(x)\log\big(1-D(x)\big)\,dx,$$
which is the right-hand side of the identity we set out to prove.
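As a closing sanity check, here is a minimal numerical sketch of the identity. It is not code from the paper or from this derivation: it assumes a standard normal prior $p_z$, the toy invertible generator $G(z) = e^z$ used in the examples above (so $p_g$ is log-normal), and an arbitrary smooth discriminator $D(x) = \sigma(x - 1)$. Both sides of the equation are evaluated with numerical quadrature and should agree up to quadrature error.

    import numpy as np
    from scipy import stats, integrate

    def G(z):
        # toy invertible generator (an assumption for this sketch)
        return np.exp(z)

    def G_inv(x):
        return np.log(x)

    def G_prime(z):
        return np.exp(z)

    def log_one_minus_D(x):
        # log(1 - sigmoid(x - 1)), computed stably as -log(1 + exp(x - 1))
        return -np.logaddexp(0.0, x - 1.0)

    p_z = stats.norm(0.0, 1.0).pdf  # prior density of z

    def p_g(x):
        # change-of-variables density of x = G(z): p_z(G^{-1}(x)) / G'(G^{-1}(x))
        return p_z(G_inv(x)) / G_prime(G_inv(x))

    # Left-hand side: integral over z of p_z(z) * log(1 - D(G(z)))
    lhs, _ = integrate.quad(lambda z: p_z(z) * log_one_minus_D(G(z)), -10.0, 10.0)

    # Right-hand side: integral over x of p_g(x) * log(1 - D(x)),
    # taken over the image of [-10, 10] under G
    rhs, _ = integrate.quad(lambda x: p_g(x) * log_one_minus_D(x),
                            G(-10.0), G(10.0), points=[0.1, 1.0, 10.0], limit=200)

    print(f"lhs = {lhs:.6f}  rhs = {rhs:.6f}")  # should agree up to quadrature error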

    Citation

    For attribution, please cite this work as

    Bonvini (2020, Dec. 11). Last Week's Potatoes: LOTUS Theorem in original GANs formulation.. Retrieved from https://lastweekspotatoes.com/posts/2021-07-22-lotus-theorem-in-original-gans-formulation/

    BibTeX citation

    @misc{bonvini2020lotus,
      author = {Bonvini, Andrea},
      title = {Last Week's Potatoes: LOTUS Theorem in original GANs formulation.},
      url = {https://lastweekspotatoes.com/posts/2021-07-22-lotus-theorem-in-original-gans-formulation/},
      year = {2020}
    }