Bellman Equations
A Bellman equation is a functional equation that defines a value function in a recursive form.
To clarify the definition:
- A functional equation is an equation whose solution is a function. For the Bellman equation, this solution is the value function for which the equation was formulated;
- A recursive form means that the value at the current state is expressed in terms of values at future states.
In short, solving the Bellman equation gives the desired value function, and deriving this equation requires identifying a recursive relationship between current and future states.
State Value Function
As a reminder, here is a state value function in compact form:
$$v_\pi(s) = \mathbb{E}_\pi[G_t \mid S_t = s]$$

To obtain the Bellman equation for this value function, let's expand the right side of the equation and establish a recursive relationship:
$$
\begin{aligned}
v_\pi(s) &= \mathbb{E}_\pi[G_t \mid S_t = s] \\
&= \mathbb{E}_\pi[R_{t+1} + \gamma R_{t+2} + \gamma^2 R_{t+3} + \dots \mid S_t = s] \\
&= \mathbb{E}_\pi\Big[R_{t+1} + \gamma \sum_{k=0}^{\infty} \gamma^k R_{t+k+2} \,\Big|\, S_t = s\Big] \\
&= \mathbb{E}_\pi[R_{t+1} + \gamma G_{t+1} \mid S_t = s] \\
&= \sum_a \pi(a \mid s) \sum_{s', r} p(s', r \mid s, a)\big(r + \gamma\, \mathbb{E}_\pi[G_{t+1} \mid S_{t+1} = s']\big) \\
&= \sum_a \pi(a \mid s) \sum_{s', r} p(s', r \mid s, a)\big(r + \gamma\, v_\pi(s')\big)
\end{aligned}
$$

The last equation in this chain is the Bellman equation for the state value function.
Intuition
To find the value of a state s, you:
- Consider all possible actions a you might take from this state, each weighted by how likely you are to choose that action under your current policy π(a∣s);
- For each action a, you consider all possible next states s′ and rewards r, weighted by their likelihood p(s′,r∣s,a);
- For each of these outcomes, you take the immediate reward r you get plus the discounted value of the next state γvπ(s′).
By summing all these possibilities together, you get the total expected value of the state s under your current policy.
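The three steps above can be turned directly into an update rule: start from an arbitrary guess and repeatedly apply the right-hand side of the Bellman equation until the values stop changing (iterative policy evaluation). The two-state MDP, the transition table `p`, and the uniform policy `pi` below are invented for illustration only; they are not part of the original text.

```python
# Hypothetical 2-state, 2-action MDP, used only to illustrate the update.
# p[s][a] is a list of (probability, next_state, reward) transitions,
# i.e. a tabular form of p(s', r | s, a).
p = {
    0: {0: [(1.0, 0, 0.0)], 1: [(0.7, 1, 1.0), (0.3, 0, 0.0)]},
    1: {0: [(1.0, 0, 2.0)], 1: [(1.0, 1, 0.0)]},
}
# Uniform random policy: pi[s][a] = pi(a | s).
pi = {0: {0: 0.5, 1: 0.5}, 1: {0: 0.5, 1: 0.5}}
gamma = 0.9  # discount factor

# Iterative policy evaluation: repeatedly apply
#   v(s) = sum_a pi(a|s) sum_{s',r} p(s',r|s,a) * (r + gamma * v(s'))
v = {s: 0.0 for s in p}
for _ in range(1000):
    v = {
        s: sum(
            pi[s][a]
            * sum(prob * (r + gamma * v[s2]) for prob, s2, r in p[s][a])
            for a in p[s]
        )
        for s in p
    }
```

Because the discount factor is below 1, each sweep is a contraction, so the repeated updates converge to the unique fixed point of the Bellman equation, which is exactly v_pi.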
Action Value Function
Here is an action value function in compact form:
$$q_\pi(s, a) = \mathbb{E}_\pi[G_t \mid S_t = s, A_t = a]$$

The derivation of the Bellman equation for this function is quite similar to the previous one:
$$
\begin{aligned}
q_\pi(s, a) &= \mathbb{E}_\pi[G_t \mid S_t = s, A_t = a] \\
&= \mathbb{E}_\pi[R_{t+1} + \gamma R_{t+2} + \gamma^2 R_{t+3} + \dots \mid S_t = s, A_t = a] \\
&= \mathbb{E}_\pi\Big[R_{t+1} + \gamma \sum_{k=0}^{\infty} \gamma^k R_{t+k+2} \,\Big|\, S_t = s, A_t = a\Big] \\
&= \mathbb{E}_\pi[R_{t+1} + \gamma G_{t+1} \mid S_t = s, A_t = a] \\
&= \sum_{s', r} p(s', r \mid s, a)\big(r + \gamma\, \mathbb{E}_\pi[G_{t+1} \mid S_{t+1} = s']\big) \\
&= \sum_{s', r} p(s', r \mid s, a)\Big(r + \gamma \sum_{a'} \pi(a' \mid s')\, \mathbb{E}_\pi[G_{t+1} \mid S_{t+1} = s', A_{t+1} = a']\Big) \\
&= \sum_{s', r} p(s', r \mid s, a)\Big(r + \gamma \sum_{a'} \pi(a' \mid s')\, q_\pi(s', a')\Big)
\end{aligned}
$$

The last equation in this chain is the Bellman equation for the action value function.
Intuition
To find the value of a state-action pair (s,a), you:
- Consider all possible next states s′ and rewards r, weighted by their likelihood p(s′,r∣s,a);
- For each of these outcomes, you take the immediate reward r you get plus the discounted value of the next state;
- To compute the value of the next state s′, multiply each action value qπ(s′,a′) by the probability π(a′∣s′) of choosing a′ in state s′ under the current policy, then sum over all actions a′ to obtain the final value.
By summing all these possibilities together, you get the total expected value of the state-action pair (s,a) under your current policy.
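The same update-rule idea applies here: the Bellman equation for qπ can be iterated to a fixed point. The sketch below reuses the same made-up two-state MDP and uniform policy as before; none of these numbers come from the original text.

```python
# Hypothetical 2-state, 2-action MDP (illustration only).
# p[s][a] is a list of (probability, next_state, reward) transitions.
p = {
    0: {0: [(1.0, 0, 0.0)], 1: [(0.7, 1, 1.0), (0.3, 0, 0.0)]},
    1: {0: [(1.0, 0, 2.0)], 1: [(1.0, 1, 0.0)]},
}
# Uniform random policy: pi[s][a] = pi(a | s).
pi = {0: {0: 0.5, 1: 0.5}, 1: {0: 0.5, 1: 0.5}}
gamma = 0.9  # discount factor

# Repeatedly apply the Bellman equation for action values:
#   q(s,a) = sum_{s',r} p(s',r|s,a) * (r + gamma * sum_{a'} pi(a'|s') * q(s',a'))
q = {(s, a): 0.0 for s in p for a in p[s]}
for _ in range(1000):
    q = {
        (s, a): sum(
            prob
            * (r + gamma * sum(pi[s2][a2] * q[(s2, a2)] for a2 in pi[s2]))
            for prob, s2, r in p[s][a]
        )
        for s in p
        for a in p[s]
    }
```

Note the relationship between the two value functions: averaging q over the policy's action probabilities, v(s) = Σ_a π(a∣s) q(s,a), recovers the state values computed earlier.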