Bellman Equations

To clarify the definition:

  • A functional equation is an equation whose solution is a function. For the Bellman equation, this solution is the value function for which the equation is formulated;
  • A recursive form means that the value at the current state is expressed in terms of values at future states.

In short, solving the Bellman equation gives the desired value function, and deriving this equation requires identifying a recursive relationship between current and future states.
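
As a concrete example of such a recursive relationship, the return $G_t$ can be split into the immediate reward plus the discounted return from the next time step onward; this identity is the step that drives both derivations below:

$$
G_t = R_{t+1} + \gamma R_{t+2} + \gamma^2 R_{t+3} + \dots = R_{t+1} + \gamma G_{t+1}
$$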

State Value Function

As a reminder, here is the state value function in its compact form:

$$
v_\pi(s) = \mathbb{E}_\pi\bigl[G_t \mid S_t = s\bigr]
$$

To obtain the Bellman equation for this value function, let's expand the right side of the equation and establish a recursive relationship:

$$
\begin{aligned}
v_\pi(s) &= \mathbb{E}_\pi\bigl[G_t \mid S_t = s\bigr]\\
&= \mathbb{E}_\pi\bigl[R_{t+1} + \gamma R_{t+2} + \gamma^2 R_{t+3} + \dots \mid S_t = s\bigr]\\
&= \mathbb{E}_\pi\Bigl[R_{t+1} + \gamma \sum_{k=0}^{\infty} \gamma^k R_{t+k+2} \,\Big|\, S_t = s\Bigr]\\
&= \mathbb{E}_\pi\bigl[R_{t+1} + \gamma G_{t+1} \mid S_t = s\bigr]\\
&= \sum_a \pi(a \mid s) \sum_{s', r} p(s', r \mid s, a)\Bigl(r + \gamma\, \mathbb{E}_\pi\bigl[G_{t+1} \mid S_{t+1} = s'\bigr]\Bigr)\\
&= \sum_a \pi(a \mid s) \sum_{s', r} p(s', r \mid s, a)\bigl(r + \gamma\, v_\pi(s')\bigr)
\end{aligned}
$$

The last equation in this chain is the Bellman equation for the state value function.

Intuition

To find the value of a state $s$, you:

  1. Consider all possible actions $a$ you might take from this state, each weighted by how likely you are to choose that action under your current policy $\pi(a \mid s)$;
  2. For each action $a$, consider all possible next states $s'$ and rewards $r$, weighted by their likelihood $p(s', r \mid s, a)$;
  3. For each of these outcomes, take the immediate reward $r$ you get plus the discounted value of the next state, $\gamma v_\pi(s')$.

By summing all these possibilities together, you get the total expected value of the state $s$ under your current policy.
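
To make this backup concrete, here is a minimal sketch in Python on a hypothetical two-state MDP described by plain dictionaries; the names `pi`, `p`, `gamma`, and `bellman_backup_v` are illustrative, not part of any particular library. The function computes exactly the sum above: over actions weighted by $\pi(a \mid s)$, then over outcomes weighted by $p(s', r \mid s, a)$.

```python
# A minimal sketch of the Bellman backup for the state value function,
# on a hypothetical two-state MDP (names and layout are illustrative).
# pi[s][a]    -- probability of choosing action a in state s
# p[(s, a)]   -- list of (next_state, reward, probability) outcomes
gamma = 0.9

pi = {
    "s0": {"stay": 0.5, "go": 0.5},
    "s1": {"stay": 1.0},
}

p = {
    ("s0", "stay"): [("s0", 0.0, 1.0)],
    ("s0", "go"):   [("s1", 1.0, 1.0)],
    ("s1", "stay"): [("s0", 0.0, 1.0)],
}

def bellman_backup_v(s, v):
    """One application of the Bellman equation for v_pi at state s:
    sum_a pi(a|s) * sum_{s',r} p(s',r|s,a) * (r + gamma * v(s'))."""
    total = 0.0
    for a, action_prob in pi[s].items():
        for s_next, reward, trans_prob in p[(s, a)]:
            total += action_prob * trans_prob * (reward + gamma * v[s_next])
    return total

# Sweeping this backup over all states repeatedly converges to v_pi,
# since v_pi is the unique fixed point of the update when gamma < 1.
v = {"s0": 0.0, "s1": 0.0}
for _ in range(200):
    v = {s: bellman_backup_v(s, v) for s in v}

print(v)  # approximate state values under the policy pi
```

Iterating the backup like this is one way of solving the Bellman equation numerically: each sweep pushes the estimates closer to the true $v_\pi$.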

Action Value Function

Here is the action value function in its compact form:

$$
q_\pi(s, a) = \mathbb{E}_\pi\bigl[G_t \mid S_t = s, A_t = a\bigr]
$$

The derivation of the Bellman equation for this function is quite similar to the previous one:

$$
\begin{aligned}
q_\pi(s, a) &= \mathbb{E}_\pi\bigl[G_t \mid S_t = s, A_t = a\bigr]\\
&= \mathbb{E}_\pi\bigl[R_{t+1} + \gamma R_{t+2} + \gamma^2 R_{t+3} + \dots \mid S_t = s, A_t = a\bigr]\\
&= \mathbb{E}_\pi\Bigl[R_{t+1} + \gamma \sum_{k=0}^{\infty} \gamma^k R_{t+k+2} \,\Big|\, S_t = s, A_t = a\Bigr]\\
&= \mathbb{E}_\pi\bigl[R_{t+1} + \gamma G_{t+1} \mid S_t = s, A_t = a\bigr]\\
&= \sum_{s', r} p(s', r \mid s, a)\Bigl(r + \gamma\, \mathbb{E}_\pi\bigl[G_{t+1} \mid S_{t+1} = s'\bigr]\Bigr)\\
&= \sum_{s', r} p(s', r \mid s, a)\Bigl(r + \gamma \sum_{a'} \pi(a' \mid s')\, \mathbb{E}_\pi\bigl[G_{t+1} \mid S_{t+1} = s', A_{t+1} = a'\bigr]\Bigr)\\
&= \sum_{s', r} p(s', r \mid s, a)\Bigl(r + \gamma \sum_{a'} \pi(a' \mid s')\, q_\pi(s', a')\Bigr)
\end{aligned}
$$

The last equation in this chain is the Bellman equation for the action value function.

Intuition

To find the value of a state-action pair $(s, a)$, you:

  1. Consider all possible next states $s'$ and rewards $r$, weighted by their likelihood $p(s', r \mid s, a)$;
  2. For each of these outcomes, take the immediate reward $r$ you get plus the discounted value of the next state;
  3. To compute the value of the next state $s'$, multiply each action value $q_\pi(s', a')$ by the probability of choosing $a'$ in state $s'$ under the current policy, $\pi(a' \mid s')$, and sum over all actions $a'$ possible from state $s'$.

By summing all these possibilities together, you get the total expected value of the state-action pair $(s, a)$ under your current policy.
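
The same hypothetical two-state MDP from the earlier sketch can be reused for the action value backup; again, the dictionary layout and names are assumptions for illustration. The inner sum averages $q_\pi(s', a')$ over the next action according to the policy, exactly as in step 3 above.

```python
# The same hypothetical two-state MDP as in the previous sketch.
gamma = 0.9
pi = {"s0": {"stay": 0.5, "go": 0.5}, "s1": {"stay": 1.0}}
p = {
    ("s0", "stay"): [("s0", 0.0, 1.0)],
    ("s0", "go"):   [("s1", 1.0, 1.0)],
    ("s1", "stay"): [("s0", 0.0, 1.0)],
}

def bellman_backup_q(s, a, q):
    """One application of the Bellman equation for q_pi at (s, a):
    sum_{s',r} p(s',r|s,a) * (r + gamma * sum_{a'} pi(a'|s') * q(s',a'))."""
    total = 0.0
    for s_next, reward, trans_prob in p[(s, a)]:
        # Value of the next state: average q over the policy's action choices
        next_value = sum(
            action_prob * q[(s_next, a_next)]
            for a_next, action_prob in pi[s_next].items()
        )
        total += trans_prob * (reward + gamma * next_value)
    return total

# Sweeping the backup over all state-action pairs repeatedly converges to q_pi
q = {sa: 0.0 for sa in p}
for _ in range(200):
    q = {(s, a): bellman_backup_q(s, a, q) for (s, a) in q}

print(q)  # approximate action values under the policy pi
```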
