Introduction to Reinforcement Learning
On-Policy Monte Carlo Control
The idea behind on-policy methods is intuitive: rather than always choosing the same action in a given state, we introduce a degree of randomness — occasionally trying different actions to encourage exploration.
Analogy
Imagine you're at an ice cream shop with three flavors available: chocolate, vanilla, and strawberry. You love chocolate, so that's what you usually pick. But one day, out of curiosity, you decide to try strawberry instead. It turns out the strawberry ice cream at this shop is incredibly tasty, and from then on you pick it whenever you visit.
Trying a new flavor wasn't necessarily the most logical choice based on past experience, but it gave you a chance to discover something new. This kind of exploration lies at the core of on-policy methods.
Stochastic Policies
Formally, adopting this idea means replacing the deterministic (hard) policies used in dynamic programming with stochastic (soft) policies, denoted as $\pi(a \mid s)$, where:

$$\pi(a \mid s) > 0 \qquad \text{for all } s \in \mathcal{S},\ a \in \mathcal{A}(s)$$
In other words, every action in every state has a non-zero probability of being selected. This ensures that all parts of the environment can eventually be explored, which is essential when learning from experience.
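As a concrete illustration, a soft policy can be stored as a probability distribution over actions for each state, with every entry strictly positive. The probabilities below are made-up example values, not part of the original text:

```python
import numpy as np

# A soft (stochastic) policy for a single state with three actions:
# every action keeps a strictly positive probability of being selected.
pi_s = np.array([0.8, 0.1, 0.1])   # pi(a|s) for actions 0, 1, 2 (illustrative values)

assert np.all(pi_s > 0) and np.isclose(pi_s.sum(), 1.0)
action = np.random.choice(len(pi_s), p=pi_s)   # sample an action from pi(.|s)
```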
ε-greedy Policies
To incorporate exploration into the policy, let's borrow the concept of ε-greedy exploration from the multi-armed bandit problem. This allows us to define a stochastic policy that balances exploiting the best-known action with exploring alternatives:

$$\pi(a \mid s) =
\begin{cases}
1 - \varepsilon + \dfrac{\varepsilon}{|\mathcal{A}(s)|} & \text{if } a = \arg\max\limits_{a'} Q(s, a') \\[6pt]
\dfrac{\varepsilon}{|\mathcal{A}(s)|} & \text{otherwise}
\end{cases}$$

This policy behaves greedily most of the time, choosing the action with the highest estimated value, but with probability ε it selects an action uniformly at random, ensuring that all actions have a non-zero chance of being taken (even the greedy one, since it can also be picked by the uniform draw).
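A minimal Python sketch of this distribution, assuming `q_values` holds the current estimates Q(s, a) for a single state (the function name and numbers are illustrative, not from the original text):

```python
import numpy as np

def epsilon_greedy_probs(q_values, epsilon):
    """Return the epsilon-greedy action probabilities pi(.|s) for one state."""
    n_actions = len(q_values)
    probs = np.full(n_actions, epsilon / n_actions)   # every action gets eps/|A(s)|
    probs[np.argmax(q_values)] += 1.0 - epsilon       # greedy action gets the remaining mass
    return probs

# Example: with epsilon = 0.1 and Q(s, .) = [1.0, 2.5, 0.3],
# the distribution is roughly [0.033, 0.933, 0.033].
probs = epsilon_greedy_probs(np.array([1.0, 2.5, 0.3]), epsilon=0.1)
action = np.random.choice(len(probs), p=probs)        # sample an action from pi(.|s)
```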
At first glance, this approach seems problematic: since the policy never becomes purely greedy, it will never converge to the exact optimal policy. As such, it doesn't strictly satisfy the conditions for GPI if we expect exact optimality in the limit.
However, GPI doesn't require the policy to become optimal immediately; it only requires that each policy improves on (or matches) the previous one, progressively moving toward optimality. The ε-greedy policy satisfies this condition: each improvement step produces a policy at least as good as the previous one, while ongoing exploration keeps the value estimates improving.
To address the issue of convergence to a truly optimal policy, we can gradually reduce ε over time. This strategy lets the policy become increasingly greedy as learning progresses. In the early stages, exploration helps gather diverse experience, while in later stages the agent exploits its improved knowledge. With a properly decaying ε, the method does converge to an optimal policy in the limit.
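One common way to implement such a schedule is the simple 1/k rule sketched below; it is only one example of a schedule that decays to zero, and the function name is an illustrative assumption:

```python
def decaying_epsilon(k, eps_min=0.0):
    """Schedule epsilon_k = 1/k: fully exploratory at first, greedy in the limit."""
    return max(eps_min, 1.0 / k)

for k in [1, 10, 100, 1000]:
    print(k, decaying_epsilon(k))   # 1.0, 0.1, 0.01, 0.001
```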
Pseudocode
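The original pseudocode is not reproduced here, so the following is a minimal Python sketch of on-policy first-visit Monte Carlo control with an ε-greedy policy. The `generate_episode` interface and all names are assumptions made for illustration:

```python
import random
from collections import defaultdict

def epsilon_greedy_action(Q, state, actions, epsilon):
    """Sample an action from the epsilon-greedy policy derived from Q."""
    if random.random() < epsilon:
        return random.choice(actions)                      # explore: uniform random action
    return max(actions, key=lambda a: Q[(state, a)])       # exploit: current greedy action

def on_policy_mc_control(generate_episode, actions, num_episodes,
                         gamma=0.99, epsilon=0.1):
    """On-policy first-visit Monte Carlo control with an epsilon-greedy policy.

    `generate_episode(policy)` is assumed to return a list of
    (state, action, reward) tuples obtained by following `policy`.
    """
    Q = defaultdict(float)        # action-value estimates Q(s, a)
    n_returns = defaultdict(int)  # number of returns averaged per (s, a)

    for _ in range(num_episodes):
        policy = lambda s: epsilon_greedy_action(Q, s, actions, epsilon)
        episode = generate_episode(policy)

        G = 0.0
        for t in reversed(range(len(episode))):
            state, action, reward = episode[t]
            G = gamma * G + reward
            # First-visit check: update only if (s, a) does not occur earlier in the episode
            if all((s, a) != (state, action) for s, a, _ in episode[:t]):
                n_returns[(state, action)] += 1
                # Incremental average of the observed returns
                Q[(state, action)] += (G - Q[(state, action)]) / n_returns[(state, action)]

    return Q
```

Policy evaluation and improvement are interleaved here, in the spirit of GPI: each new episode is generated with the ε-greedy policy derived from the latest Q estimates.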