
I have found both the terms objective function and value function used in the same context.

Context #1: In the paper titled Generative Adversarial Nets by Ian J. Goodfellow et al.

We simultaneously train $G$ to minimize $\log(1 - D(G(z)))$. In other words, $D$ and $G$ play the following two-player minimax game with value function $V(G, D)$:

$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]$$

Context #2: In the paper titled Conditional Generative Adversarial Nets by Mehdi Mirza et al.

The objective function of a two-player minimax game would be as

$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}(x)}[\log D(x|y)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z|y)))]$$

In fact, the second paper also echoes context #1, i.e., it uses the term "value function" in another place.

We can observe that an objective function is a function that we want to optimize:

The objective function is the most general term that can be used to refer to a cost (or loss) function, to a utility function, or to a fitness function, so, depending on the problem, you either want to minimize or maximize the objective function. The term objective is a synonym for goal.

Since both the generator and the discriminator have to perform optimization, it seems reasonable to use the term objective function in this context.
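To make this concrete, below is a minimal sketch (NumPy, with toy stand-ins for $D$ and $G$ that I made up purely for illustration) of how $V(D, G)$ from context #1 can be estimated on a batch of samples:

```python
import numpy as np

rng = np.random.default_rng(0)

def D(x):
    # Toy "discriminator": squashes a score into (0, 1).
    # A real D would be a trained neural network; this is only a stand-in.
    return 1.0 / (1.0 + np.exp(-x.sum(axis=1)))

def G(z):
    # Toy "generator": any mapping from noise z to data space.
    return 2.0 * z + 1.0

def value_function(D, G, x_real, z_noise, eps=1e-12):
    """Monte Carlo estimate of
    V(D, G) = E_{x ~ p_data}[log D(x)] + E_{z ~ p_z}[log(1 - D(G(z)))]."""
    term_real = np.mean(np.log(D(x_real) + eps))            # E_x[log D(x)]
    term_fake = np.mean(np.log(1.0 - D(G(z_noise)) + eps))  # E_z[log(1 - D(G(z)))]
    return term_real + term_fake

x_real = rng.normal(size=(64, 2))   # stand-in for samples from p_data
z_noise = rng.normal(size=(64, 2))  # samples from the prior p_z
print(value_function(D, G, x_real, z_noise))
```

The discriminator would try to make this quantity large and the generator would try to make it small, which is exactly the min-max structure of the equation above.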

But what is the definition for the value function and how is it different from the objective function in this context?

hanugm

1 Answer


The term value function may be used in the GAN paper because GANs are inspired by game theory, where terms like utility, utility function and, just like in reinforcement learning, value function are used (the first two for sure; I am not sure about the usage of the term value function in game theory, as I am far from an expert in it). If you want to know more about the usage of the term value function, this Wikipedia article could be useful (or maybe it will make things more confusing).

Having said that, it seems to me that the usage of the term "objective function" in the conditional GAN paper is a bit sloppy. They probably meant the optimization problem.

However, it's also true that the notation used by the original authors of the GAN can be confusing. They wrote

$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))] \label{1}\tag{1}$$

Here, $V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]$, so they could have written \ref{1} as follows

$$\min_G \max_D V(D, G) = \min_G \max_D \left( \mathbb{E}_{x \sim p_{\text{data}}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))] \right) \label{2}\tag{2}$$

or just

$$\min_G \max_D V(D, G) \label{3}\tag{3}$$

and then clarified separately that $V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]$.

This is clarified in this paper (equations 2.1 and 2.2, page 5).

So, in the GAN, we're optimizing $V$, so $V$ is the objective function; thus, in this case, the term "value function" is a synonym for "objective function". Here, the optimization problem is a $\color{blue}{\textrm{min}}$$\color{red}{\textrm{max}}$ game: the discriminator $\color{red}{\textrm{maximizes}}$ $V(D, G)$ over $D$ (it wants both expectations to be large), while the generator $\color{blue}{\textrm{minimizes}}$ $V(D, G)$ over $G$, which only affects the second term, $\color{blue}{\mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]}$ (this is explained in the GAN paper!). In practice, the two players optimize two slightly different objectives in alternation, but these are equivalent; see algorithm 1 in the GAN paper.
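As a hedged illustration of that alternating scheme (not the authors' exact setup: the tiny networks, dimensions, learning rates and batch sizes below are toy choices of mine), here is a PyTorch-style sketch in which the discriminator takes a gradient ascent step on $V$ and the generator takes a gradient descent step on the only term of $V$ that depends on it:

```python
import torch
import torch.nn as nn

data_dim, noise_dim, eps = 2, 4, 1e-7

# Toy networks standing in for the discriminator D and the generator G.
D = nn.Sequential(nn.Linear(data_dim, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
G = nn.Sequential(nn.Linear(noise_dim, 16), nn.ReLU(), nn.Linear(16, data_dim))
opt_D = torch.optim.SGD(D.parameters(), lr=1e-2)
opt_G = torch.optim.SGD(G.parameters(), lr=1e-2)

def V(x_real, z):
    """Batch estimate of the value function V(D, G)."""
    return torch.log(D(x_real) + eps).mean() + torch.log(1 - D(G(z)) + eps).mean()

for step in range(100):
    x_real = torch.randn(64, data_dim)   # stand-in for samples from p_data
    z = torch.randn(64, noise_dim)       # samples from the prior p_z

    # Discriminator step: maximize V, i.e. minimize -V.
    opt_D.zero_grad()
    (-V(x_real, z)).backward()
    opt_D.step()

    # Generator step: minimize V; only the second term depends on G.
    # (The paper also suggests maximizing log D(G(z)) instead, which gives
    #  stronger gradients early in training but has the same fixed point.)
    opt_G.zero_grad()
    torch.log(1 - D(G(torch.randn(64, noise_dim))) + eps).mean().backward()
    opt_G.step()
```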

So, as I said in my other answer, the objective function is the function that you want to optimize (i.e. minimize or maximize). It is usually a synonym for the loss/cost/error function (when you want to minimize it) and can be a synonym for the value function (when you want to maximize it, for example, in reinforcement learning), as seems to be the case in the GAN (although, in the GAN, the value function is both maximized and minimized).

nbro