Learn Probability Flow ODEs | Advanced Diffusion Formulations

Probability Flow ODEs

Probability flow ODEs offer a powerful and elegant way to describe the generative process in diffusion models. Instead of treating the reverse process as a stochastic differential equation (SDE) that samples data by reversing the corruption of noise, the probability flow ODE reformulates this process into a deterministic ordinary differential equation (ODE). This means you can map pure noise to data points without the randomness of sampling at every step, enabling exact likelihood computation and more controlled generation.

To understand how probability flow ODEs arise, recall that in the SDE formulation of diffusion models, you have a forward process that gradually adds noise to the data, and a reverse SDE that removes this noise. The reverse SDE typically takes the form:

$$dx = \left[f(x, t) - g(t)^2 \nabla_x \log p_t(x)\right] dt + g(t)\, d\bar{w}$$

where $f(x, t)$ and $g(t)$ are the drift and diffusion coefficients, and $d\bar{w}$ is a reverse-time Wiener process.
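As a concrete illustration, the reverse SDE can be simulated with a simple Euler–Maruyama discretization. The sketch below is a toy setup, not the lesson's formal method: it assumes a variance-exploding process with $f = 0$ and constant $g(t)^2 = \sigma^2$, for which Gaussian data $\mathcal{N}(0, s^2)$ admits the analytic score $-x/(s^2 + \sigma^2 t)$; in a real diffusion model the score would come from a learned network.

```python
import math
import random

def score(x, t, s2=1.0, sigma2=1.0):
    # Analytic score of p_t = N(0, s2 + sigma2 * t) for Gaussian toy data;
    # a real diffusion model replaces this with a learned network.
    return -x / (s2 + sigma2 * t)

def reverse_sde_sample(n_steps=1000, s2=1.0, sigma2=1.0, seed=0):
    # Euler-Maruyama discretization of the reverse SDE
    #   dx = [f(x, t) - g(t)^2 * score(x, t)] dt + g(t) dw_bar
    # with f = 0 and g(t)^2 = sigma2, run from t = 1 (noise) to t = 0 (data).
    rng = random.Random(seed)
    dt = -1.0 / n_steps                             # negative: backward in time
    x = rng.gauss(0.0, math.sqrt(s2 + sigma2))      # draw from the prior p_1
    for i in range(n_steps):
        t = 1.0 + i * dt
        drift = -sigma2 * score(x, t)
        # Stochastic update: drift step plus fresh Gaussian noise each step.
        x += drift * dt + math.sqrt(sigma2 * abs(dt)) * rng.gauss(0.0, 1.0)
    return x
```

Averaged over many seeds, the final samples recover the data distribution $\mathcal{N}(0, s^2)$, but each individual trajectory is random: the same starting noise maps to different outputs on different runs.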

The key insight is that you can construct an ODE whose marginal distributions $p_t(x)$ match those of the SDE at every time $t$. By removing the stochastic term and halving the score coefficient in the drift, you get the probability flow ODE:

$$dx = \left[f(x, t) - \frac{1}{2} g(t)^2 \nabla_x \log p_t(x)\right] dt$$

This ODE deterministically transports noise to data, following the probability flow of the underlying SDE. The term $\nabla_x \log p_t(x)$ is known as the score function, typically learned by the model. By integrating this ODE from pure noise at $t=1$ to data at $t=0$, you can generate samples without injecting random noise at each step.
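For contrast, here is a minimal sketch of sampling with the probability flow ODE itself, under the same hypothetical toy assumptions (variance-exploding process with $f = 0$, constant $g(t)^2 = \sigma^2$, and the analytic Gaussian score; a trained model would supply the score instead). Because the dynamics are deterministic, each starting point maps to exactly one output, and this toy case even has the closed-form solution $x(0) = x(1)\sqrt{s^2/(s^2+\sigma^2)}$ to check against.

```python
def score(x, t, s2=1.0, sigma2=1.0):
    # Analytic score of p_t = N(0, s2 + sigma2 * t) for Gaussian toy data;
    # a real diffusion model replaces this with a learned network.
    return -x / (s2 + sigma2 * t)

def pf_ode_sample(x1, n_steps=10_000, s2=1.0, sigma2=1.0):
    # Plain Euler integration of the probability flow ODE
    #   dx = [f(x, t) - 0.5 * g(t)^2 * score(x, t)] dt
    # with f = 0 and g(t)^2 = sigma2, run from t = 1 (noise) to t = 0 (data).
    x, dt = x1, -1.0 / n_steps
    for i in range(n_steps):
        t = 1.0 + i * dt
        x += -0.5 * sigma2 * score(x, t) * dt   # no noise term: deterministic
    return x

# The same input always yields the same output, matching the closed form
# x(0) = x(1) * sqrt(s2 / (s2 + sigma2)) ~ 1.4142 for x(1) = 2.
x0 = pf_ode_sample(2.0)
```

Determinism is exactly what enables exact likelihood computation via the change-of-variables formula and reproducible, controllable generation.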

What is a key characteristic of the probability flow ODE in diffusion models?


Section 3. Chapter 2

