On-Policy Monte Carlo Control
The idea behind on-policy methods is intuitive: an agent learns by following its current policy and improves that policy based on the outcomes it experiences. To discover better actions and avoid getting stuck in suboptimal behavior, the agent incorporates a degree of randomness, occasionally trying alternative actions to encourage exploration.
Analogy
Imagine you're at an ice cream shop and there are three flavors available: chocolate, vanilla, and strawberry. You love chocolate, so that's what you usually pick. But one day, out of curiosity, you decide to try strawberry instead. Turns out, the strawberry ice cream at this shop is incredibly tasty, and you decide to pick it whenever you visit this shop.
Trying a new flavor wasn't necessarily the most logical choice based on past experience, but it gave you a chance to discover something new. This kind of exploration lies at the core of on-policy methods.
Stochastic Policies
Formally, adopting this idea means replacing the deterministic (hard) policies used in dynamic programming with stochastic (soft) policies, denoted as π(a|s), where:

π(a|s) > 0    for all s ∈ S, a ∈ A(s)

In other words, every action in every state has a non-zero probability of being selected. This ensures that all parts of the environment can eventually be explored, which is essential when learning from experience.
ε-greedy Policies
To incorporate exploration into the policy, let's borrow the concept of ε-greedy exploration from the multi-armed bandit problem. This allows us to define a stochastic policy that balances exploiting the best-known action with exploring alternatives:
π(a|s) ← 1 − ε + ε/|A(s)|   if a = argmax_{a'} q_π(s, a')
π(a|s) ← ε/|A(s)|           otherwise

This policy behaves greedily most of the time, choosing the action with the highest estimated value, but with probability ε it selects a random action, ensuring that all actions have a non-zero chance of being taken (even the greedy one again, via uniform sampling).
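To make the definition concrete, here is a minimal sketch of ε-greedy action selection over a table of action-value estimates; the function name, the example `q_values`, and the ε values are illustrative assumptions, not part of the original text:

```python
import random

def epsilon_greedy_action(q_values, epsilon):
    """Sample an action from an epsilon-greedy policy for one state.

    With probability epsilon, pick uniformly over ALL actions (so even
    the greedy action can be re-selected); otherwise act greedily with
    respect to the current estimates q_values.
    """
    if random.random() < epsilon:
        return random.randrange(len(q_values))  # explore uniformly
    return max(range(len(q_values)), key=lambda a: q_values[a])  # exploit

# With epsilon = 0 the policy is purely greedy: action 2 has the
# highest estimate here, so it is always chosen.
print(epsilon_greedy_action([0.1, 0.5, 0.9], epsilon=0.0))  # 2
```

Note that the exploration branch samples over all actions, which is exactly why the greedy action's total probability is 1 − ε + ε/|A(s)| rather than 1 − ε.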
At first glance, this approach seems problematic: since the policy never becomes purely greedy, it will never converge to the exact optimal policy. As such, it doesn't strictly satisfy the conditions for GPI if we expect exact optimality in the limit.
However, GPI doesn't require the policy to become optimal immediately; it only requires that each policy improves (or stays the same) compared to the last, progressively moving toward optimality. The ε-greedy policy satisfies this condition: it improves the policy on average, and ensures ongoing exploration to support better estimates.
To address the issue of convergence to the truly optimal policy, we can gradually reduce ε over time. This strategy lets the policy become increasingly greedy as learning progresses. In the early stages, exploration helps gather diverse experience, while in later stages, the agent exploits its improved knowledge. With a properly decaying ε, the method does converge to an optimal policy in the limit.
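One simple way to implement this is a multiplicative decay with a floor; the exact schedule and constants below are illustrative choices, not prescribed by the text:

```python
# Illustrative epsilon-decay schedule: multiplicative decay with a floor.
epsilon = 1.0        # start fully exploratory
epsilon_min = 0.01   # keep a little exploration during training (assumed value)
decay = 0.99         # per-episode decay factor (assumed value)

history = []
for episode in range(500):
    history.append(epsilon)
    epsilon = max(epsilon_min, epsilon * decay)

print(history[0], history[-1])  # starts at 1.0, settles at the 0.01 floor
```

In practice the floor is often kept above zero while learning, and ε is driven to zero only if exact convergence to a greedy optimal policy is required in the limit.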
Pseudocode
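The original pseudocode is not reproduced here. As a stand-in, below is a hedged Python sketch of on-policy first-visit Monte Carlo control with an ε-greedy policy. The tiny corridor environment, its `step` dynamics, and all hyperparameters are illustrative assumptions, not part of the original text:

```python
import random
from collections import defaultdict

# Hypothetical environment: a 1-D corridor with states 0..4.
# Action 0 moves left, action 1 moves right; reaching state 4 ends
# the episode with reward +1, every other step gives reward 0.
N_ACTIONS, GOAL = 2, 4

def step(state, action):
    next_state = max(0, min(GOAL, state + (1 if action == 1 else -1)))
    done = next_state == GOAL
    return next_state, (1.0 if done else 0.0), done

def epsilon_greedy(q, state, epsilon):
    if random.random() < epsilon:
        return random.randrange(N_ACTIONS)
    return max(range(N_ACTIONS), key=lambda a: q[(state, a)])

def mc_control(episodes=2000, gamma=0.9, epsilon=0.2, seed=0):
    random.seed(seed)
    q = defaultdict(float)      # action-value estimates q(s, a)
    counts = defaultdict(int)   # visit counts for incremental averaging
    for _ in range(episodes):
        # 1. Generate an episode by following the current epsilon-greedy policy.
        state, trajectory = 0, []
        for _ in range(100):    # cap episode length
            action = epsilon_greedy(q, state, epsilon)
            next_state, reward, done = step(state, action)
            trajectory.append((state, action, reward))
            state = next_state
            if done:
                break
        # 2. First-visit MC update: walk backwards accumulating the return G,
        #    updating q(s, a) only at the first occurrence of each pair.
        first_visit = {}
        for t, (s, a, _) in enumerate(trajectory):
            first_visit.setdefault((s, a), t)
        g = 0.0
        for t in range(len(trajectory) - 1, -1, -1):
            s, a, r = trajectory[t]
            g = gamma * g + r
            if first_visit[(s, a)] == t:
                counts[(s, a)] += 1
                q[(s, a)] += (g - q[(s, a)]) / counts[(s, a)]
        # 3. Policy improvement is implicit: the next episode's epsilon-greedy
        #    policy is already greedy with respect to the updated q.
    return q
```

After training, the greedy policy derived from `q` moves right toward the goal from every interior state. A decaying ε (as discussed above) could replace the fixed `epsilon` to approach the optimal policy in the limit.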