Introduction to Reinforcement Learning
Episodes and Returns
The Length of a Task
RL tasks are typically categorized as episodic or continuous, depending on how the learning process is structured over time.
An episode is a complete sequence of interactions between the agent and the environment, starting from an initial state and progressing through a series of transitions until a terminal state is reached.
Episodic tasks consist of a finite sequence of states, actions, and rewards, where the agent's interaction with the environment is divided into distinct episodes.
In contrast, continuous tasks do not have a clear end to each interaction cycle. The agent continually interacts with the environment without resetting to an initial state, and the learning process is ongoing, often without a distinct terminal point.
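To make the episodic structure concrete, here is a minimal sketch of an interaction loop, assuming a Gymnasium-style environment and a random placeholder policy; the specific environment (`CartPole-v1`) and the random action choice are illustrative assumptions, not part of this lesson.

```python
import gymnasium as gym

# CartPole is an episodic task: an episode ends when the pole falls
# or a step limit is reached (an illustrative example environment).
env = gym.make("CartPole-v1")

for episode in range(3):
    state, info = env.reset()      # each episode restarts from an initial state
    done = False
    total_reward = 0.0
    while not done:
        action = env.action_space.sample()   # placeholder: random policy
        state, reward, terminated, truncated, info = env.step(action)
        done = terminated or truncated       # terminal state ends the episode
        total_reward += reward
    print(f"Episode {episode} ended with total reward {total_reward}")
```

A continuous task, by contrast, would have no terminal condition: the inner loop would simply never end, which is exactly why the return needs special treatment there (see the discounting section below).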
Return
You already know that the agent's main goal is to maximize cumulative rewards. The reward function, however, provides only instantaneous rewards and doesn't account for future outcomes, which can be problematic: an agent trained solely to maximize immediate rewards may overlook long-term benefits. To address this issue, let's introduce the concept of a return.
The return is the total accumulated reward that an agent receives from a given state onward; it incorporates all the rewards the agent will receive in the future, not just the immediate one.
The return is a better representation of how good a particular state or action is in the long run. The goal of reinforcement learning can now be defined as maximizing the return.
If $T$ is the final time step, the return looks like this:
$G_t = R_{t+1} + R_{t+2} + R_{t+3} + \dots + R_T$
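To make the definition concrete, here is a small Python sketch (not taken from the course; the function name and example rewards are illustrative) that computes the return for a finite episode by summing the rewards received after a given time step.

```python
def undiscounted_return(rewards: list[float]) -> float:
    """Compute G_t = R_{t+1} + R_{t+2} + ... + R_T for a finite episode."""
    return sum(rewards)

# Rewards collected after some time step t in a short episode
print(undiscounted_return([1.0, 0.0, 2.0, 5.0]))  # 8.0
```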
Discounting
While the simple return serves as a good target in episodic tasks, a problem arises in continuous tasks: if the number of time steps is infinite, the return itself can be infinite. To handle this, a discount factor is used to give future rewards less weight, preventing the return from becoming infinite.
The discount factor $\gamma$ is a multiplicative factor used to determine the present value of future rewards. It ranges between 0 and 1: a value closer to 0 makes the agent prioritize immediate rewards, while a value closer to 1 makes it weigh future rewards more heavily.
The return combined with a discount factor is called the discounted return.
The formula for the discounted return looks like this:
$G_t = R_{t+1} + \gamma R_{t+2} + \gamma^2 R_{t+3} + \dots = \sum_{k=0}^{\infty} \gamma^k R_{t+k+1}$
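As a short illustration (the function name, reward values, and $\gamma$ values below are our own, not from the course), the same computation with discounting shows how $\gamma$ shifts the balance between immediate and future rewards.

```python
def discounted_return(rewards: list[float], gamma: float = 0.9) -> float:
    """Compute G_t = sum over k of gamma^k * R_{t+k+1}."""
    return sum(gamma ** k * r for k, r in enumerate(rewards))

rewards = [1.0, 0.0, 2.0, 5.0]
print(discounted_return(rewards, gamma=0.9))  # 1.0 + 0 + 0.81*2 + 0.729*5 = 6.265
print(discounted_return(rewards, gamma=0.0))  # only the immediate reward: 1.0
```

With $\gamma$ close to 1 the distant reward of 5 still contributes heavily; with $\gamma = 0$ everything beyond the first reward is ignored.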
Even in episodic tasks, using a discount factor offers practical benefits: it motivates the agent to reach its goal as quickly as possible, leading to more efficient behavior. For this reason, discounting is commonly applied even in clearly episodic settings.