Forward Process Definition
To understand the forward diffusion process in diffusion models, you need to formalize how noise is gradually added to data in a controlled, stepwise manner. The process is defined as a Markov chain, where at each time step t, a small amount of Gaussian noise is added to the previous state. This stepwise corruption is governed by a conditional probability distribution, commonly denoted as q(x_t | x_{t−1}).
Mathematically, the forward process is defined as:
q(x_t | x_{t−1}) = N(x_t; √(1 − β_t) x_{t−1}, β_t I)

where:
- x_t is the noisy sample at time step t;
- x_{t−1} is the sample from the previous step;
- β_t is the variance schedule (a small positive scalar controlling the noise at each step);
- I is the identity matrix.
This definition means that, given x_{t−1}, the next state x_t is sampled from a normal distribution centered at √(1 − β_t) x_{t−1} with variance β_t I. This Markov property ensures that each step only depends on the immediate previous state and not on the entire history.
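The single-step transition above can be sketched directly in NumPy. This is a minimal illustration, not a full diffusion implementation; the sample shape and the value of β_t are arbitrary choices for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_step(x_prev, beta_t, rng):
    """Sample x_t ~ q(x_t | x_{t-1}) = N(sqrt(1 - beta_t) * x_{t-1}, beta_t * I).

    The mean shrinks the previous sample by sqrt(1 - beta_t);
    the added noise has variance beta_t in every dimension.
    """
    noise = rng.standard_normal(x_prev.shape)
    return np.sqrt(1.0 - beta_t) * x_prev + np.sqrt(beta_t) * noise

x0 = np.ones(4)                          # a toy "clean" sample
x1 = forward_step(x0, beta_t=0.02, rng=rng)   # one step of the Markov chain
```

Applying `forward_step` repeatedly with a schedule of β_t values simulates the full chain; each call depends only on the previous state, exactly as the Markov property requires.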
The properties of q(x_t | x_{t−1}) are:
- It is a Gaussian distribution for every t;
- The process is Markovian: the future state depends only on the present state;
- The variance schedule β_t determines the rate of noise addition;
- As t increases, the sample becomes progressively noisier, eventually approaching pure Gaussian noise as t approaches the maximum diffusion step.
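The last property can be checked numerically: the cumulative product of (1 − β_t) measures how much of the original signal survives, and it decays toward zero. The linear schedule below (10⁻⁴ to 0.02 over 1000 steps) is a common illustrative choice, not something fixed by the definition itself.

```python
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)   # assumed linear variance schedule
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)       # cumulative signal-retention factor

# alpha_bar[-1] is tiny (roughly 4e-5 for this schedule),
# so x_T retains essentially no signal: it is almost pure Gaussian noise.
print(alpha_bar[-1])
```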
You can also derive the marginal distribution of the forward process, which expresses the distribution of x_t given the original, clean data sample x_0 after t steps of noise addition. This is useful because it allows you to sample noisy data at any step directly from x_0 without simulating the entire Markov chain step-by-step.
The marginal distribution is given by:
q(x_t | x_0) = N(x_t; √ᾱ_t x_0, (1 − ᾱ_t) I)

where:
- α_t = 1 − β_t;
- ᾱ_t = ∏_{s=1}^{t} α_s is the cumulative product of the noise schedule up to time t.
This form shows that, after t steps, the noisy sample x_t is still Gaussian, with its mean scaled by √ᾱ_t and its variance increased to (1 − ᾱ_t). This cumulative effect of the noise schedule makes it possible to sample x_t in a single step from x_0:
x_t = √ᾱ_t x_0 + √(1 − ᾱ_t) ε,   ε ∼ N(0, I)

This property is central to efficient training and sampling in diffusion models.
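The one-step sampling formula can be verified empirically: drawing many samples of x_t from the same x_0 should give an empirical mean near √ᾱ_t x_0 and a standard deviation near √(1 − ᾱ_t). The schedule and the choice of x_0 = 2 below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

T = 1000
betas = np.linspace(1e-4, 0.02, T)       # assumed linear variance schedule
alpha_bar = np.cumprod(1.0 - betas)

def q_sample(x0, t, rng):
    """Draw x_t directly from q(x_t | x_0) in one step,
    using x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * eps."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

x0 = np.full(100_000, 2.0)   # many copies of the same scalar "clean" sample
t = 500
xt = q_sample(x0, t, rng)

# Empirical mean should be close to sqrt(alpha_bar[t]) * 2,
# empirical std close to sqrt(1 - alpha_bar[t]).
print(xt.mean(), xt.std())
```

Because this bypasses the step-by-step chain entirely, training loops can pick a random t per example and noise it in one shot, which is what makes diffusion training tractable.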