Revision as of 12:33, 17 November 2021

Reinforcement Learning

  • Course name: Reinforcement Learning
  • Course number: R-01

Course Characteristics

Key concepts of the class

  • Fundamentals of Reinforcement Learning
  • Sample-based Learning Methods
  • Prediction and Control with Function Approximation

What is the purpose of this course?

Harnessing the full potential of artificial intelligence requires adaptive learning systems. Reinforcement learning (RL) is one powerful paradigm for doing so, and it is relevant to an enormous range of tasks, including robotics, game playing, consumer modeling and healthcare.

Course objectives based on Bloom’s taxonomy

- What should a student remember at the end of the course?

By the end of the course, the students should remember:

  • Markov Decision Processes
  • Exploration vs. Exploitation
  • Value Functions
  • Temporal-difference Learning
  • Q-learning
  • Expected Sarsa
  • Actor-Critic

- What should a student be able to understand at the end of the course?

By the end of the course, the students should understand:

  • How to build an RL system for sequential decision making
  • How to formalize a task as an RL problem
  • The space of RL algorithms

- What should a student be able to apply at the end of the course?

By the end of the course, the students should be able to apply:

  • RL for solving real-world problems
  • TD-algorithms for estimating value functions
  • Expected Sarsa and Q-Learning
  • Actor-Critic Method

Course evaluation

Course grade breakdown
Type Points
Labs/seminar classes 20
Interim performance assessment 50
Exams 30

Grades range

Course grading range
Grade Points
A. Excellent [85, 100]
B. Good [70, 84]
C. Satisfactory [55, 69]
D. Poor [0, 54]

Resources and reference material

  • Reinforcement Learning: An Introduction, Sutton and Barto, 2nd Edition.
  • Reinforcement Learning: State-of-the-Art, Marco Wiering and Martijn van Otterlo, Eds.

Course Sections

The main sections of the course and the approximate hour distribution between them are as follows:

Course Sections
Section Section Title Teaching Hours
1 Fundamentals of RL
2 Sample-based Learning
3 Prediction and Control with Function Approximation

Section 1

Section title

Fundamentals of Reinforcement Learning

Topics covered in this section

  • Sequential Decision Making
  • Markov Decision Processes
  • Value Functions & Bellman Equations
  • Dynamic Programming for Value Function

What forms of evaluation were used to test students’ performance in this section?

Form Yes/No
Development of individual parts of software product code Yes
Homework and group projects Yes
Midterm evaluation No
Testing (written or computer based) Yes
Reports No
Essays No
Oral polls No
Discussions Yes

Typical questions for ongoing performance evaluation within this section

  1. What is sequential decision making?
  2. What is exploration vs. exploitation trade-off in sequential decision making?
  3. What are Markov Decision Processes?
  4. What is the difference between episodic and continuing tasks?
  5. What are policies, value functions and Bellman equations?
  6. How to use dynamic programming to compute value functions and optimal policies?
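Question 6 above can be illustrated with a minimal sketch of iterative policy evaluation. The two-state MDP below, including its transition probabilities and rewards, is invented purely for illustration and is not part of the course materials:

```python
# Iterative policy evaluation for a tiny, hypothetical 2-state MDP.
# A fixed policy induces the transition probabilities P and the
# expected rewards R below (made-up numbers).
GAMMA = 0.9          # discount factor
P = [[0.5, 0.5],     # P[s][t] = probability of moving from state s to state t
     [0.2, 0.8]]
R = [1.0, 0.0]       # expected immediate reward in each state

V = [0.0, 0.0]       # value-function estimate, initialized to zero
for _ in range(1000):                # sweep until (approximate) convergence
    V = [R[s] + GAMMA * sum(P[s][t] * V[t] for t in (0, 1))
         for s in (0, 1)]

# V now approximately satisfies the Bellman equation
#   V(s) = R(s) + gamma * sum_t P(s, t) * V(t)
print([round(v, 3) for v in V])
```

Because the Bellman backup is a contraction (by the factor gamma), repeated sweeps converge to the unique fixed point regardless of the initial estimate.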

Typical questions for seminar classes (labs) within this section

  1. What are the strengths and weaknesses of different exploration algorithms?
  2. What is an epsilon-greedy agent?
  3. How to translate a real-world problem into a Markov Decision Process?
  4. Why are Bellman equations useful?
  5. What is generalized policy iteration?
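For question 3, one lightweight way to encode a problem as a finite MDP is an explicit (S, A, P, R, gamma) table. The "battery robot" below is a made-up example chosen only to show the encoding; its states, actions, and numbers are not from the course:

```python
# A hypothetical "battery robot" phrased as a finite MDP (S, A, P, R, gamma).
# All states, actions, and probabilities are invented for illustration.
mdp = {
    "states": ["high", "low"],
    "actions": ["search", "wait"],
    # P[(s, a)] -> list of (probability, next_state, reward) outcomes
    "P": {
        ("high", "search"): [(0.7, "high", 2.0), (0.3, "low", 2.0)],
        ("high", "wait"):   [(1.0, "high", 0.5)],
        ("low", "search"):  [(0.4, "low", 2.0), (0.6, "high", -3.0)],
        ("low", "wait"):    [(1.0, "low", 0.5)],
    },
    "gamma": 0.9,
}

# Sanity check: outcome probabilities sum to 1 for every (state, action) pair.
for sa, outcomes in mdp["P"].items():
    assert abs(sum(p for p, _, _ in outcomes) - 1.0) < 1e-9
```

Once a problem is written in this form, any tabular planning method (policy evaluation, value iteration) can consume it directly.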

Tasks for midterm assessment within this section

  1. Suppose you are given two action-value functions, each corresponding to the action-value function of the same arbitrary, fixed policy under one of two different reward functions. Using the Bellman equation, explain whether or not it is possible to combine these value functions in a simple manner to obtain a new action-value function corresponding to a single combined reward function r.

Test questions for final assessment in this section

  1. How to implement incremental algorithms for estimating action-values?
  2. How to implement and test an epsilon-greedy agent?
  3. Create an example of your own that fits into the Markov Decision Process framework
  4. How to use optimal value functions to get optimal policies?
  5. How to implement an efficient dynamic programming agent?
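Questions 1 and 2 can be sketched together: an incremental sample-average update driving an epsilon-greedy agent on a hypothetical two-armed bandit. The reward means and all constants below are invented for illustration:

```python
import random

# Incremental sample-average action-value estimates (question 1)
# driving an epsilon-greedy agent (question 2) on a made-up bandit.
random.seed(0)

EPSILON = 0.1
TRUE_MEANS = [0.2, 0.8]          # hypothetical expected reward per action
Q = [0.0, 0.0]                   # action-value estimates
N = [0, 0]                       # per-action sample counts

def select_action():
    """With probability epsilon explore uniformly, else take the greedy action."""
    if random.random() < EPSILON:
        return random.randrange(len(Q))
    return max(range(len(Q)), key=lambda a: Q[a])

for _ in range(5000):
    a = select_action()
    reward = random.gauss(TRUE_MEANS[a], 0.1)   # sample a noisy reward
    N[a] += 1
    Q[a] += (reward - Q[a]) / N[a]              # incremental update rule
```

The incremental rule `Q += (reward - Q) / N` computes exactly the running sample average without storing past rewards, and the estimates approach the true means as the counts grow.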

Section 2

Section title

Sample-based Learning

Topics covered in this section

  • Monte Carlo Methods for Prediction and Control
  • Temporal Difference Learning
  • Planning, Learning, and Acting
  • Expected Sarsa
  • Q-Learning
  • On-policy and Off-policy Control

What forms of evaluation were used to test students’ performance in this section?

Form Yes/No
Development of individual parts of software product code Yes
Homework and group projects Yes
Midterm evaluation Yes
Testing (written or computer based) Yes
Reports No
Essays No
Oral polls No
Discussions Yes

Typical questions for ongoing performance evaluation within this section

  1. How to estimate value functions and optimal policies, using only sampled experience from the environment?
  2. What is Monte Carlo?
  3. What is off-policy?
  4. What is Temporal Difference Learning?
  5. What is Q-Learning?
  6. What is Expected Sarsa?
  7. What is model-based RL?
  8. What is random-sample one-step tabular Q-planning?
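The temporal-difference idea from question 4 can be shown with tabular TD(0) prediction on a deterministic two-step episodic chain; the environment below is invented for illustration:

```python
# Tabular TD(0) prediction on a tiny, made-up episodic task:
# state 0 -> state 1 -> terminal, with reward 1.0 on every step.
ALPHA, GAMMA = 0.1, 1.0
V = {0: 0.0, 1: 0.0, "terminal": 0.0}    # value estimates (terminal fixed at 0)

for _ in range(2000):                    # run many episodes
    s = 0
    while s != "terminal":
        s_next = 1 if s == 0 else "terminal"
        r = 1.0
        # TD(0): move V(s) toward the bootstrapped target r + gamma * V(s')
        V[s] += ALPHA * (r + GAMMA * V[s_next] - V[s])
        s = s_next

# The true returns are V(0) = 2 and V(1) = 1; the estimates converge there.
```

Unlike a Monte Carlo update, which waits for the full return at the end of the episode, the TD(0) update happens after every step, bootstrapping from the current estimate of the next state.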

Typical questions for seminar classes (labs) within this section

  1. How to use Monte Carlo methods for prediction and for estimating action values?
  2. How to use Monte Carlo for generalized policy iteration?
  3. What is Batch RL and how does it work?
  4. How to implement Expected Sarsa and Q-Learning?
  5. What are the Dyna architecture and the Dyna-Q algorithm?
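For question 4, the essential difference between the two algorithms is the bootstrap target. A minimal sketch on one hypothetical transition (all numbers are invented):

```python
# One-step bootstrap targets for Q-Learning vs. Expected Sarsa,
# computed for a single hypothetical transition with made-up numbers.
GAMMA, EPSILON = 0.9, 0.1
r = 0.0                          # reward observed on the transition
q_next = [1.0, 3.0]              # current estimates Q(s', a) for two actions

# Q-Learning bootstraps with the greedy (max) value in s' -- off-policy.
ql_target = r + GAMMA * max(q_next)

# Expected Sarsa bootstraps with the expectation under the epsilon-greedy
# behaviour policy (the greedy action here is index 1).
probs = [EPSILON / 2, 1 - EPSILON + EPSILON / 2]
es_target = r + GAMMA * sum(p * q for p, q in zip(probs, q_next))

# Both algorithms then apply the same incremental update:
#   Q(s, a) += ALPHA * (target - Q(s, a))
```

Q-Learning's max target ignores the exploration in the behaviour policy (which is what makes it off-policy), while Expected Sarsa averages over it, so its target is never larger than Q-Learning's for the same estimates.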

Tasks for midterm assessment within this section

  1. Given the Q-Learning algorithm:
     a. Draw the one-step backup diagram of the algorithm and write out its update rule.
     b. Is this algorithm on-policy or off-policy? Justify your answer.
     c. Write the two-step version of the algorithm.

Test questions for final assessment in this section

  1. Why does off-policy learning matter?
  2. How to learn from an agent’s interaction with the world?
  3. What is the difference between methods of on-policy and off-policy control?
  4. How is Q-Learning off-policy?