Introduction to Reinforcement Learning
Episodes and Returns
The Length of a Task
RL tasks are typically categorized as episodic or continuous, depending on how the learning process is structured over time.
Episodic tasks consist of finite sequences of states, actions, and rewards: the agent's interaction with the environment is divided into distinct episodes, each ending in a terminal state after which the environment resets.
In contrast, continuous tasks have no clear end to the interaction. The agent keeps interacting with the environment without resetting to an initial state, and learning is ongoing, with no distinct terminal point.
Return
You already know that the agent's main goal is to maximize cumulative reward. The reward function, however, provides only instantaneous rewards and doesn't account for future outcomes: an agent trained solely to maximize immediate rewards may overlook long-term benefits. To address this, let's introduce the concept of return.
Return is usually denoted as $G_t$, where $t$ is the current time step.
The return is a better representation of how good a particular state or action is in the long run. The goal of reinforcement learning can now be defined as maximizing the return.
If $T$ is the final time step, the formula for the return looks like this:

$$G_t = R_{t+1} + R_{t+2} + R_{t+3} + \dots + R_T$$
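To make the formula concrete, here is a minimal sketch (not part of the original lesson) of how $G_t$ could be computed from the rewards collected during one episode; the reward values are made up for illustration.

```python
def episode_return(rewards, t=0):
    """Return G_t for an episodic task: the plain sum of the rewards
    received after time step t, i.e. R_{t+1} + R_{t+2} + ... + R_T."""
    return sum(rewards[t:])

# rewards[k] holds R_{k+1}; these values are purely illustrative
rewards = [0.0, 0.0, 1.0, -1.0, 5.0]

print(episode_return(rewards))       # G_0 = 5.0
print(episode_return(rewards, t=2))  # G_2 = 1.0 - 1.0 + 5.0 = 5.0
```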
Discounting
While the simple return serves as a good target in episodic tasks, a problem arises in continuous tasks: if the number of time steps is infinite, the return itself can be infinite. To handle this, a discount factor is used to give future rewards progressively less weight, which keeps the return finite.
The discount factor is usually denoted as $\gamma$, with $0 \le \gamma \le 1$.
The return combined with a discount factor is called the discounted return.
The formula for the discounted return looks like this:

$$G_t = R_{t+1} + \gamma R_{t+2} + \gamma^2 R_{t+3} + \dots = \sum_{k=0}^{\infty} \gamma^k R_{t+k+1}$$
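As a quick illustration (again a sketch with made-up reward values, not part of the original lesson), the discounted return weights each successive reward by an extra factor of $\gamma$; iterating backward over the rewards gives the same value via the recursion $G_t = R_{t+1} + \gamma G_{t+1}$, a form commonly used in practice.

```python
def discounted_return(rewards, gamma=0.9, t=0):
    """Discounted return G_t = sum_k gamma^k * R_{t+k+1}."""
    return sum(gamma ** k * r for k, r in enumerate(rewards[t:]))

# Same illustrative rewards as before
rewards = [0.0, 0.0, 1.0, -1.0, 5.0]
print(discounted_return(rewards, gamma=0.9))  # 3.3615

# Equivalent backward recursion: G_t = R_{t+1} + gamma * G_{t+1}
G = 0.0
for r in reversed(rewards):
    G = r + 0.9 * G
print(G)  # matches the value above
```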