Introduction to Reinforcement Learning
Section 1: RL Core Theory
Episodes and Returns

The Length of a Task

RL tasks are typically categorized as episodic or continuous (also called continuing), depending on how the learning process is structured over time.

Note
Definition

An episode is a complete sequence of interactions between the agent and the environment, starting from an initial state and progressing through a series of transitions until a terminal state is reached.

Episodic tasks consist of finite sequences of states, actions, and rewards, with the agent's interaction with the environment divided into distinct episodes.

In contrast, continuous tasks do not have a clear end to each interaction cycle. The agent continually interacts with the environment without resetting to an initial state, and the learning process is ongoing, often without a distinct terminal point.
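Note
Example

To make the idea of an episode concrete, here is a minimal sketch of one episode loop, assuming the gymnasium package and its CartPole-v1 environment are available (neither is part of this chapter). The agent simply picks random actions; the point is only the structure: reset to an initial state, step until a terminal state, then the episode ends.

import gymnasium as gym

env = gym.make("CartPole-v1")
obs, info = env.reset()        # initial state: the episode begins here
rewards = []

terminated = truncated = False
while not (terminated or truncated):
    action = env.action_space.sample()  # placeholder random policy
    obs, reward, terminated, truncated, info = env.step(action)
    rewards.append(reward)              # one transition of the episode

print(f"Episode ended after {len(rewards)} steps")

In a continuous task there would be no such natural stopping point, and the loop above would simply never terminate.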

Return

You already know that the agent's main goal is to maximize cumulative rewards. While the reward function provides instantaneous rewards, it doesn't account for future outcomes, which can be problematic. An agent trained solely to maximize immediate rewards may overlook long-term benefits. To address this issue, let's introduce the concept of return.

Note
Definition

The return G is the total accumulated reward that an agent receives from a given state onward; it incorporates all the rewards the agent will receive in the future, not just the immediate one.

The return is a better representation of how good a particular state or action is in the long run. The goal of reinforcement learning can now be defined as maximizing the return.

If T is the final time step, the formula for the return looks like this:

G_t = R_{t+1} + R_{t+2} + R_{t+3} + ... + R_T
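Note
Example

A quick sketch of this formula in Python. The helper episodic_return and the reward values below are purely illustrative, not part of the course. The list rewards holds R_1, R_2, ..., R_T, so the return from time step t is simply the sum of rewards[t:].

def episodic_return(rewards, t=0):
    # G_t = R_{t+1} + R_{t+2} + ... + R_T
    return sum(rewards[t:])

rewards = [1.0, 0.0, 0.0, 5.0]        # R_1, R_2, R_3, R_4
print(episodic_return(rewards))       # 6.0  (return from the start)
print(episodic_return(rewards, t=2))  # 5.0  (only R_3 and R_4 remain)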

Discounting

While the simple return serves as a good target in episodic tasks, a problem arises in continuous tasks: if the number of time steps is infinite, the return itself can become infinite. To handle this, a discount factor is used to give future rewards less weight, preventing the return from growing without bound.

Note
Definition

The discount factor γ is a multiplicative factor used to determine the present value of future rewards. It ranges between 0 and 1: a value closer to 0 makes the agent prioritize immediate rewards, while a value closer to 1 makes the agent weight future rewards more heavily.

The return combined with a discount factor is called the discounted return.

The formula for the discounted return looks like this:

G_t = R_{t+1} + \gamma R_{t+2} + \gamma^2 R_{t+3} + ... = \sum_{k=0}^{\infty} \gamma^k R_{t+k+1}
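Note
Example

A small sketch of the discounted return in Python, again with an illustrative helper and reward values that are not from the course. It also shows how the choice of γ changes the agent's priorities: with γ close to 0 the large reward at the end barely matters, while with γ close to 1 it dominates.

def discounted_return(rewards, gamma):
    # G_t = sum over k of gamma^k * R_{t+k+1}
    return sum(gamma**k * r for k, r in enumerate(rewards))

rewards = [1.0, 1.0, 1.0, 10.0]
print(discounted_return(rewards, gamma=0.1))  # ~1.12  (near-sighted agent)
print(discounted_return(rewards, gamma=0.9))  # ~10.0  (far-sighted agent)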
Note
Study More

Even in episodic tasks, using a discount factor offers practical benefits: it motivates the agent to reach its goal as quickly as possible, leading to more efficient behavior. For this reason, discounting is commonly applied even in clearly episodic settings.
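As an illustrative example (not from the course): with γ = 0.9 and a single reward of +1 received upon reaching the goal at the final step T, the return from the start is G_0 = γ^(T-1). Reaching the goal in 2 steps gives 0.9, while taking 5 steps gives 0.9^4 ≈ 0.66, so the discounted return favors the faster route.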

