Probability Distributions for Machine Learning

Conjugate Priors in Gaussian Models

Conjugate priors are a central idea in Bayesian statistics, and they are especially important in machine learning, where you often update your beliefs about model parameters as new data arrives. In the context of Gaussian models, conjugacy means that if you start with a Gaussian (normal) prior for an unknown mean and observe data modeled with a Gaussian likelihood, the resulting posterior distribution for the mean is also Gaussian. This property makes Bayesian updating mathematically convenient and computationally efficient.
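In symbols, this is the standard Gaussian-Gaussian model; the subscripted symbols match the variable names used in the code below:

Prior:      \mu \sim \mathcal{N}(\mu_0, \sigma_0^2)
Likelihood: x_i \mid \mu \sim \mathcal{N}(\mu, \sigma^2), \quad i = 1, \dots, n, \quad \sigma^2 \text{ known}
Posterior:  \mu \mid x_1, \dots, x_n \sim \mathcal{N}(\mu_n, \sigma_n^2)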

Suppose you are estimating the mean of a process, and you initially believe the mean has a Gaussian distribution with some prior mean and variance. When you observe new data points, which are also assumed to be drawn from a Gaussian distribution (with known variance), you can update your belief. The updated belief, or posterior, is again a Gaussian distribution, but with a new mean and variance that combine information from both your prior and the observed data. The formulas for the updated mean and variance reflect a weighted average: the more certain (lower variance) your prior or your data, the more influence it has on the posterior.
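Concretely, writing \bar{x} for the sample mean of the n observations, the standard update formulas are

\sigma_n^2 = \left( \frac{1}{\sigma_0^2} + \frac{n}{\sigma^2} \right)^{-1}, \qquad \mu_n = \sigma_n^2 \left( \frac{\mu_0}{\sigma_0^2} + \frac{n\bar{x}}{\sigma^2} \right).

Precisions (inverse variances) add, so the posterior mean is a precision-weighted average of the prior mean \mu_0 and the sample mean \bar{x}: whichever source is more certain pulls the posterior harder toward itself. These are exactly the quantities posterior_variance and posterior_mean computed in the code below.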

import numpy as np
import matplotlib.pyplot as plt

# Prior: N(mu0, sigma0^2)
mu0 = 0.0
sigma0 = 1.0

# Likelihood: N(x_bar, sigma^2/n)
# Assume n observations with sample mean x_bar and known data variance sigma^2
x_bar = 2.0
n = 5
sigma = 1.0

# Posterior parameters (Gaussian-Gaussian conjugacy)
posterior_variance = 1 / (1/sigma0**2 + n/sigma**2)
posterior_mean = posterior_variance * (mu0/sigma0**2 + n*x_bar/sigma**2)

# Plotting
x = np.linspace(-2, 4, 400)
prior_pdf = (1/np.sqrt(2*np.pi*sigma0**2)) * np.exp(-0.5*((x-mu0)/sigma0)**2)
likelihood_pdf = (1/np.sqrt(2*np.pi*(sigma**2/n))) * np.exp(-0.5*((x-x_bar)/np.sqrt(sigma**2/n))**2)
posterior_pdf = (1/np.sqrt(2*np.pi*posterior_variance)) * np.exp(-0.5*((x-posterior_mean)/np.sqrt(posterior_variance))**2)

plt.figure(figsize=(8, 5))
plt.plot(x, prior_pdf, label='Prior (mean=%.2f, var=%.2f)' % (mu0, sigma0**2), color='blue')
plt.plot(x, likelihood_pdf, label='Likelihood (mean=%.2f, var=%.2f)' % (x_bar, sigma**2/n), color='green')
plt.plot(x, posterior_pdf, label='Posterior (mean=%.2f, var=%.2f)' % (posterior_mean, posterior_variance), color='red')
plt.title('Gaussian Prior, Likelihood, and Posterior')
plt.xlabel('Mean value')
plt.ylabel('Density')
plt.legend()
plt.grid(True)
plt.show()

The resulting plot illustrates how a Gaussian prior and a Gaussian likelihood combine to form a new Gaussian posterior. The prior curve (blue) represents your initial belief about the mean. The likelihood curve (green) shows the information contributed by your observed data. The posterior curve (red) is the result of updating your belief: its mean is shifted toward the observed data, and its variance is reduced, reflecting increased certainty. In machine learning, this process of parameter updating is fundamental to Bayesian inference, allowing you to systematically refine your model as new data becomes available while always keeping track of uncertainty.
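Because the posterior is itself Gaussian, it can serve directly as the prior for the next batch of observations. Here is a minimal sketch of that sequential refinement; the update function is a hypothetical helper written for illustration, and the data are synthetic draws from N(2, 1):

import numpy as np

def update(mu0, var0, data, sigma_sq):
    # One Gaussian-Gaussian conjugate update for an unknown mean,
    # using the same formulas as the plotting example above.
    n = len(data)
    x_bar = np.mean(data)
    post_var = 1 / (1 / var0 + n / sigma_sq)
    post_mean = post_var * (mu0 / var0 + n * x_bar / sigma_sq)
    return post_mean, post_var

rng = np.random.default_rng(seed=42)
mu, var = 0.0, 1.0  # start from the prior N(0, 1)
for batch in np.split(rng.normal(loc=2.0, scale=1.0, size=30), 3):
    mu, var = update(mu, var, batch, sigma_sq=1.0)
    print(f"posterior: mean={mu:.3f}, variance={var:.4f}")

The posterior variance shrinks with every batch, and because the updates are exact, processing the three batches sequentially yields the same final posterior as a single update on all 30 points.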


