Bellman Equations
A Bellman equation is a functional equation that defines a value function in recursive form. To clarify this definition:
- A functional equation is an equation whose solution is a function. For the Bellman equation, this solution is the value function for which the equation was formulated;
- A recursive form means that the value at the current state is expressed in terms of values at future states.
In short, solving the Bellman equation yields the desired value function, and deriving the equation requires identifying a recursive relationship between the values of current and future states.
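As a minimal example with made-up numbers: for an MDP with a single state that always transitions back to itself with reward $r = 1$, under discount $\gamma = 0.9$ the Bellman equation becomes

$$v = 1 + 0.9\,v,$$

a recursive equation whose solution $v = \frac{1}{1 - 0.9} = 10$ is exactly the value of that state.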
State Value Function
As a reminder, here is the state value function in compact form:

$$v_\pi(s) = \mathbb{E}_\pi[G_t \mid S_t = s]$$
To obtain the Bellman equation for this value function, let's expand the right side of the equation, using the recursive decomposition of the return, $G_t = R_{t+1} + \gamma G_{t+1}$, to establish a recursive relationship:

$$
\begin{aligned}
v_\pi(s) &= \mathbb{E}_\pi[G_t \mid S_t = s] \\
&= \mathbb{E}_\pi[R_{t+1} + \gamma G_{t+1} \mid S_t = s] \\
&= \mathbb{E}_\pi[R_{t+1} + \gamma v_\pi(S_{t+1}) \mid S_t = s] \\
&= \sum_a \pi(a \mid s) \sum_{s', r} p(s', r \mid s, a)\bigl[r + \gamma v_\pi(s')\bigr]
\end{aligned}
$$
The last equation in this chain is the Bellman equation for the state value function.
Intuition
To find the value of a state $s$, you:
- Consider all possible actions $a$ you might take from this state, each weighted by how likely you are to choose that action under your current policy, $\pi(a \mid s)$;
- For each action $a$, you consider all possible next states $s'$ and rewards $r$, weighted by their likelihood $p(s', r \mid s, a)$;
- For each of these outcomes, you take the immediate reward $r$ you get plus the discounted value of the next state, $\gamma v_\pi(s')$.
By summing all these possibilities together, you get the total expected value of the state under your current policy.
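To make this sum concrete, here is a minimal Python sketch that evaluates a policy by repeatedly applying the Bellman equation until the values stop changing; the two-state MDP, its transition probabilities, rewards, discount, and policy are all made up for illustration.

```python
import numpy as np

# A made-up two-state, two-action MDP used only for illustration.
# p[s, a, s2] is the probability of moving to s2 after taking a in s;
# r[s, a, s2] is the reward received for that transition.
n_states, n_actions = 2, 2
p = np.array([[[0.7, 0.3], [0.2, 0.8]],
              [[0.9, 0.1], [0.1, 0.9]]])
r = np.array([[[1.0, 0.0], [0.0, 2.0]],
              [[0.5, 0.0], [0.0, 1.5]]])
pi = np.array([[0.5, 0.5],      # pi[s, a]: probability of action a in state s
               [0.4, 0.6]])
gamma = 0.9

# Iterative policy evaluation: apply the Bellman equation
#   v(s) = sum_a pi(a|s) sum_s' p(s'|s,a) * [r(s,a,s') + gamma * v(s')]
# as an update rule until it reaches its fixed point.
v = np.zeros(n_states)
for _ in range(1000):
    v_new = np.array([
        sum(pi[s, a] * p[s, a, s2] * (r[s, a, s2] + gamma * v[s2])
            for a in range(n_actions) for s2 in range(n_states))
        for s in range(n_states)
    ])
    if np.max(np.abs(v_new - v)) < 1e-10:
        break                   # values converged
    v = v_new

print(v)                        # state values under the policy pi
```

The same values can also be obtained by solving the Bellman equation directly as a linear system, since it is linear in the unknowns $v_\pi(s)$.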
Action Value Function
Here is the action value function in compact form:

$$q_\pi(s, a) = \mathbb{E}_\pi[G_t \mid S_t = s, A_t = a]$$
The derivation of the Bellman equation for this function is quite similar to the previous one:

$$
\begin{aligned}
q_\pi(s, a) &= \mathbb{E}_\pi[G_t \mid S_t = s, A_t = a] \\
&= \mathbb{E}_\pi[R_{t+1} + \gamma G_{t+1} \mid S_t = s, A_t = a] \\
&= \sum_{s', r} p(s', r \mid s, a)\Bigl[r + \gamma \sum_{a'} \pi(a' \mid s')\, q_\pi(s', a')\Bigr]
\end{aligned}
$$
The last equation in this chain is the Bellman equation for the action value function.
Intuition
To find the value of a state-action pair $(s, a)$, you:
- Consider all possible next states $s'$ and rewards $r$, weighted by their likelihood $p(s', r \mid s, a)$;
- For each of these outcomes, you take the immediate reward $r$ you get plus the discounted value of the next state;
- To compute the value of the next state $s'$, for each action $a'$ possible from state $s'$, multiply the action value $q_\pi(s', a')$ by the probability $\pi(a' \mid s')$ of choosing $a'$ in state $s'$ under the current policy. Then sum everything up to obtain the final value.
By summing all these possibilities together, you get the total expected value of the state-action pair under your current policy.
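The same approach works for action values. Below is a minimal sketch, using the same made-up MDP as above, that iterates the Bellman equation for $q_\pi$ and then checks the consistency relation $v_\pi(s) = \sum_a \pi(a \mid s)\, q_\pi(s, a)$.

```python
import numpy as np

# The same made-up two-state, two-action MDP as in the previous sketch.
n_states, n_actions = 2, 2
p = np.array([[[0.7, 0.3], [0.2, 0.8]],
              [[0.9, 0.1], [0.1, 0.9]]])   # p[s, a, s']
r = np.array([[[1.0, 0.0], [0.0, 2.0]],
              [[0.5, 0.0], [0.0, 1.5]]])   # r[s, a, s']
pi = np.array([[0.5, 0.5],
               [0.4, 0.6]])                # pi[s, a]
gamma = 0.9

# Iteratively apply the Bellman equation for action values:
#   q(s,a) = sum_s' p(s'|s,a) * [r(s,a,s') + gamma * sum_a' pi(a'|s') q(s',a')]
q = np.zeros((n_states, n_actions))
for _ in range(1000):
    q_new = np.array([
        [sum(p[s, a, s2] * (r[s, a, s2]
             + gamma * sum(pi[s2, a2] * q[s2, a2]
                           for a2 in range(n_actions)))
             for s2 in range(n_states))
         for a in range(n_actions)]
        for s in range(n_states)
    ])
    if np.max(np.abs(q_new - q)) < 1e-10:
        break                  # action values converged
    q = q_new

print(q)                       # action values q(s, a)
print((pi * q).sum(axis=1))    # recovers the state values v(s)
```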