Policy Improvement
Policy improvement is the process of improving a policy based on the current value function estimates.
As with policy evaluation, policy improvement can work with either the state value function or the action value function. For DP methods, the state value function is used.
Now that you can estimate the state value function for any policy, a natural next step is to ask whether there is a policy better than the current one. One way to do this is to consider taking a different action a in a state s, and then following the current policy afterwards. If this sounds familiar, it's because this is exactly how the action value function is defined:
qπ(s, a) = ∑_{s′, r} p(s′, r ∣ s, a) (r + γ vπ(s′))

If this new value is greater than the original state value vπ(s), it indicates that taking action a in state s and then continuing with policy π leads to better outcomes than strictly following policy π. Since the choice in each state is independent of the others, it's optimal to select action a every time state s is encountered. Therefore, we can construct an improved policy π′, identical to π except that it selects action a in state s, which would be superior to the original policy π.
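The one-step lookahead above can be sketched in code. The toy two-state MDP below, its transition table P, and the value estimates v are all illustrative assumptions, not part of the original text:

```python
GAMMA = 0.9

# Hypothetical toy MDP: P[s][a] = list of (probability, next_state, reward).
# From each state, action 0 stays put, action 1 attempts to move.
P = {
    0: {0: [(1.0, 0, 0.0)], 1: [(0.8, 1, 1.0), (0.2, 0, 0.0)]},
    1: {0: [(1.0, 1, 2.0)], 1: [(1.0, 0, 0.0)]},
}

def q_value(P, v, s, a, gamma=GAMMA):
    """One-step lookahead: q_pi(s, a) = sum over (s', r) of
    p(s', r | s, a) * (r + gamma * v_pi(s'))."""
    return sum(prob * (r + gamma * v[s2]) for prob, s2, r in P[s][a])

# Illustrative value estimates for some current policy.
v = {0: 4.5, 1: 9.0}

# Value of deviating to action 1 in state 0, then following the policy.
print(q_value(P, v, 0, 1))
```

If `q_value(P, v, s, a)` exceeds `v[s]`, deviating to action a in state s improves on the current policy, exactly as the text argues.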
Policy Improvement Theorem
The reasoning described above can be generalized as the policy improvement theorem:
qπ(s, π′(s)) ≥ vπ(s)  ∀s ∈ S
⟹ vπ′(s) ≥ vπ(s)  ∀s ∈ S

The proof of this theorem is relatively simple, and can be achieved by repeated substitution:
vπ(s) ≤ qπ(s, π′(s))
      = Eπ′[Rₜ₊₁ + γ vπ(Sₜ₊₁) ∣ Sₜ = s]
      ≤ Eπ′[Rₜ₊₁ + γ qπ(Sₜ₊₁, π′(Sₜ₊₁)) ∣ Sₜ = s]
      = Eπ′[Rₜ₊₁ + γ Eπ′[Rₜ₊₂ + γ vπ(Sₜ₊₂)] ∣ Sₜ = s]
      = Eπ′[Rₜ₊₁ + γ Rₜ₊₂ + γ² vπ(Sₜ₊₂) ∣ Sₜ = s]
      ...
      ≤ Eπ′[Rₜ₊₁ + γ Rₜ₊₂ + γ² Rₜ₊₃ + ... ∣ Sₜ = s]
      = vπ′(s)

Improvement Strategy
While updating actions for certain states can lead to improvements, it's more effective to update actions for all states simultaneously. Specifically, for each state s, select the action a that maximizes the action value qπ(s,a):
π′(s) ← argmaxₐ qπ(s, a) = argmaxₐ ∑_{s′, r} p(s′, r ∣ s, a) (r + γ vπ(s′))

where argmax (short for "argument of the maximum") is an operator that returns the value of the variable that maximizes a given function.
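The greedy update over all states can be sketched as follows. The toy MDP layout (P mapping state → action → list of (probability, next_state, reward)) and the value estimates are illustrative assumptions:

```python
GAMMA = 0.9

# Hypothetical two-state MDP, same layout as before.
P = {
    0: {0: [(1.0, 0, 0.0)], 1: [(0.8, 1, 1.0), (0.2, 0, 0.0)]},
    1: {0: [(1.0, 1, 2.0)], 1: [(1.0, 0, 0.0)]},
}

def greedy_policy(P, v, gamma=GAMMA):
    """For each state, pick the action maximizing the one-step lookahead
    sum over (s', r) of p(s', r | s, a) * (r + gamma * v[s'])."""
    policy = {}
    for s, actions in P.items():
        policy[s] = max(
            actions,  # iterate over action labels
            key=lambda a: sum(p * (r + gamma * v[s2]) for p, s2, r in actions[a]),
        )
    return policy

# Illustrative value estimates for the current policy.
v = {0: 4.5, 1: 9.0}

# The improved (greedy) policy: a dict mapping state -> chosen action.
print(greedy_policy(P, v))
```

By construction this policy satisfies qπ(s, π′(s)) ≥ vπ(s) in every state, so the policy improvement theorem applies to it directly.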
The resulting greedy policy, denoted by π′, satisfies the conditions of the policy improvement theorem by construction, guaranteeing that π′ is at least as good as the original policy π, and typically better.
If π′ is as good as, but not better than, π, then both π′ and π are optimal policies: their value functions are equal and satisfy the Bellman optimality equation:
vπ(s) = maxₐ ∑_{s′, r} p(s′, r ∣ s, a) (r + γ vπ(s′))
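This stopping condition can be checked numerically: when greedy improvement no longer changes the policy, the value function's Bellman optimality residual is zero. The toy MDP and the hand-solved optimal values below are illustrative assumptions:

```python
GAMMA = 0.9

# Same hypothetical toy MDP: P[s][a] = [(probability, next_state, reward), ...].
P = {
    0: {0: [(1.0, 0, 0.0)], 1: [(0.8, 1, 1.0), (0.2, 0, 0.0)]},
    1: {0: [(1.0, 1, 2.0)], 1: [(1.0, 0, 0.0)]},
}

def bellman_optimality_gap(P, v, gamma=GAMMA):
    """Largest |v(s) - max_a one-step lookahead| over all states;
    a gap of zero means v satisfies the Bellman optimality equation."""
    gap = 0.0
    for s, actions in P.items():
        best = max(
            sum(p * (r + gamma * v[s2]) for p, s2, r in outcomes)
            for outcomes in actions.values()
        )
        gap = max(gap, abs(v[s] - best))
    return gap

# Optimal values for this toy MDP, solved by hand: stay in state 1 forever
# (v(1) = 2 / (1 - 0.9) = 20), and take action 1 in state 0
# (v(0) = (0.8 * (1 + 0.9 * 20)) / (1 - 0.2 * 0.9) = 15.2 / 0.82).
v_star = {0: 15.2 / 0.82, 1: 20.0}
print(bellman_optimality_gap(P, v_star))  # essentially zero
```

For a non-optimal value function the gap is strictly positive, signalling that another round of improvement would still change the policy.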