
Prediction and Control with Function Approximation 

  • Offered by Coursera

Prediction and Control with Function Approximation at Coursera: Overview

  • Duration: 22 hours
  • Total fee: Free
  • Mode of learning: Online
  • Difficulty level: Intermediate
  • Official website: Explore Free Course
  • Credential: Certificate

Table of Contents
  • Overview
  • Highlights
  • Course Details
  • Curriculum

Prediction and Control with Function Approximation at Coursera: Highlights

  • Shareable certificate: Earn a certificate upon completion.
  • 100% online: Start instantly and learn on your own schedule.
  • Course 3 of 4 in the Reinforcement Learning Specialization.
  • Flexible deadlines: Reset deadlines in accordance with your schedule.
  • Intermediate level: Assumes familiarity with probabilities and expectations, basic linear algebra, basic calculus, Python 3 (at least 1 year of experience), and implementing algorithms from pseudocode.
  • Approx. 22 hours to complete.
  • Language: English. Subtitles: Arabic, French, Portuguese (European), Italian, Vietnamese, German, Russian, English, Spanish.

Prediction and Control with Function Approximation at Coursera: Course Details

More about this course
  • In this course, you will learn how to solve problems with large, high-dimensional, and potentially infinite state spaces. You will see that estimating value functions can be cast as a supervised learning problem (function approximation), allowing you to build agents that carefully balance generalization and discrimination in order to maximize reward. The journey begins by investigating how policy evaluation (prediction) methods such as Monte Carlo and TD can be extended to the function approximation setting. You will learn about feature construction techniques for RL, and about representation learning via neural networks and backpropagation. The course concludes with a deep dive into policy gradient methods: a way to learn policies directly, without learning a value function. Along the way, you will solve two continuous-state control tasks and investigate the benefits of policy gradient methods in a continuous-action environment.
  • Prerequisites: This course builds strongly on the fundamentals of Courses 1 and 2, and learners should have completed these before starting this course. Learners should also be comfortable with probabilities and expectations, basic linear algebra, basic calculus, Python 3 (at least 1 year of experience), and implementing algorithms from pseudocode.
  • By the end of this course, you will be able to:
  • Understand how to use supervised learning approaches to approximate value functions
  • Understand objectives for prediction (value estimation) under function approximation
  • Implement TD with function approximation (state aggregation) on an environment with an infinite (continuous) state space; a minimal sketch follows this list
  • Understand fixed-basis and neural network approaches to feature construction
  • Implement TD with neural network function approximation in a continuous-state environment
  • Understand the new difficulties in exploration that arise when moving to function approximation
  • Contrast discounted problem formulations for control with an average reward problem formulation
  • Implement Expected Sarsa and Q-learning with function approximation on a continuous-state control task
  • Understand objectives for directly estimating policies (policy gradient objectives)
  • Implement a policy gradient method (Actor-Critic) on a discrete-state environment
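To make the prediction objectives above concrete, here is a minimal sketch of semi-gradient TD(0) with state aggregation, the method named in the third objective. This is not the course's assignment code; the 1-D state space in [0, 1) and the update interface are illustrative assumptions.

```python
import numpy as np

# Sketch: semi-gradient TD(0) prediction with state aggregation on a
# hypothetical continuous state space in [0, 1). Each state maps to one
# of NUM_GROUPS groups (a one-hot feature vector), so the value
# estimate for a state is simply the weight of its group.

NUM_GROUPS = 10
ALPHA = 0.1   # step size
GAMMA = 1.0   # discount factor (episodic task)

weights = np.zeros(NUM_GROUPS)

def group(state):
    """Map a continuous state in [0, 1) to its aggregate group index."""
    return min(int(state * NUM_GROUPS), NUM_GROUPS - 1)

def value(state):
    return weights[group(state)]

def td0_update(state, reward, next_state, done):
    """One semi-gradient TD(0) update. With one-hot features, the
    gradient of the value estimate is 1 at the active group, 0 elsewhere."""
    target = reward if done else reward + GAMMA * value(next_state)
    weights[group(state)] += ALPHA * (target - value(state))
```

Calling `td0_update` on each transition generated by a fixed policy drives `weights` toward the average value of the states in each group under that policy.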

Prediction and Control with Function Approximation at Coursera: Curriculum

Welcome to the Course!
  • Course 3 Introduction
  • Meet your instructors!
  • Read Me: Pre-requisites and Learning Objectives
  • Reinforcement Learning Textbook
  • Moving to Parameterized Functions
  • Generalization and Discrimination
  • Framing Value Estimation as Supervised Learning
  • The Value Error Objective
  • Introducing Gradient Descent
  • Gradient Monte Carlo for Policy Evaluation
  • State Aggregation with Monte Carlo
  • Semi-Gradient TD for Policy Evaluation
  • Comparing TD and Monte Carlo with State Aggregation
  • Doina Precup: Building Knowledge for AI Agents with Reinforcement Learning
  • The Linear TD Update
  • The True Objective for TD
  • Week 1 Summary
  • Module 1 Learning Objectives
  • Weekly Reading: On-policy Prediction with Approximation
  • On-policy Prediction with Approximation
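As a companion to the Week 1 lessons above (in particular "Gradient Monte Carlo for Policy Evaluation"), here is a minimal sketch of gradient Monte Carlo prediction with a linear value function. The `generate_episode` and `features` callables are hypothetical stand-ins for an environment and a feature map, not course code.

```python
import numpy as np

# Sketch: gradient Monte Carlo prediction with a linear value function
# v(s, w) = w . x(s). For a linear approximator, grad_w v(s, w) = x(s).

def gradient_mc(generate_episode, features, num_features,
                num_episodes=1000, alpha=0.01, gamma=1.0):
    w = np.zeros(num_features)
    for _ in range(num_episodes):
        # An episode is a list of (S_t, R_{t+1}) pairs under a fixed policy.
        episode = generate_episode()
        g = 0.0
        # Walk the episode backwards so g accumulates the return G_t.
        for state, reward in reversed(episode):
            g = gamma * g + reward
            x = features(state)
            w += alpha * (g - w @ x) * x   # move v(s, w) toward G_t
    return w
```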

Constructing Features for Prediction
  • Coarse Coding
  • Generalization Properties of Coarse Coding
  • Tile Coding
  • Using Tile Coding in TD
  • What is a Neural Network?
  • Non-linear Approximation with Neural Networks
  • Deep Neural Networks
  • Gradient Descent for Training Neural Networks
  • Optimization Strategies for NNs
  • David Silver on Deep Learning + RL = AI?
  • Week 2 Review
  • Module 2 Learning Objectives
  • Weekly Reading: On-policy Prediction with Approximation II
  • Constructing Features for Prediction
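As a companion to the Week 2 lessons on feature construction, here is a minimal tile-coding sketch for a single state dimension. It illustrates the core idea of several offset tilings producing a sparse binary feature vector; production implementations add hashing and handle many dimensions.

```python
import numpy as np

# Sketch: tile coding for a 1-D state in [0, 1). Each of `num_tilings`
# offset tilings contributes exactly one active tile, so the resulting
# feature vector is sparse and binary.

def tile_indices(state, num_tilings=8, tiles_per_tiling=10):
    """Return the active tile index in each tiling for state in [0, 1)."""
    active = []
    for t in range(num_tilings):
        # Shift each tiling by a fraction of one tile width.
        offset = t / (num_tilings * tiles_per_tiling)
        idx = int((state + offset) * tiles_per_tiling) % tiles_per_tiling
        active.append(t * tiles_per_tiling + idx)  # globally unique index
    return active

def features(state, num_tilings=8, tiles_per_tiling=10):
    x = np.zeros(num_tilings * tiles_per_tiling)
    x[tile_indices(state, num_tilings, tiles_per_tiling)] = 1.0
    return x
```

The modulo wrap-around at the upper edge is a simplification; it keeps every index in range without tracking an extra boundary tile.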

Control with Approximation
  • Episodic Sarsa with Function Approximation
  • Episodic Sarsa in Mountain Car
  • Expected Sarsa with Function Approximation
  • Exploration under Function Approximation
  • Average Reward: A New Way of Formulating Control Problems
  • Satinder Singh on Intrinsic Rewards
  • Week 3 Review
  • Module 3 Learning Objectives
  • Weekly Reading: On-policy Control with Approximation
  • Control with Approximation
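As a companion to the Week 3 lessons above, here is a minimal sketch of episodic semi-gradient Sarsa with linear function approximation, in the spirit of the Mountain Car lessons. The `env` interface (`reset()` returning a state, `step(a)` returning `(next_state, reward, done)`) and the `features` map are illustrative assumptions, not course code.

```python
import numpy as np

# Sketch: episodic semi-gradient Sarsa with linear action values
# q(s, a, w) = w[a] . x(s) and an epsilon-greedy policy.

def semi_gradient_sarsa(env, features, num_features, num_actions,
                        num_episodes=500, alpha=0.1, gamma=1.0, epsilon=0.1):
    w = np.zeros((num_actions, num_features))

    def q(x):
        return w @ x                     # vector of action values

    def policy(x):
        if np.random.rand() < epsilon:   # explore
            return np.random.randint(num_actions)
        return int(np.argmax(q(x)))      # exploit

    for _ in range(num_episodes):
        x = features(env.reset())
        a = policy(x)
        done = False
        while not done:
            next_state, reward, done = env.step(a)
            if done:
                # Terminal target is just the reward.
                w[a] += alpha * (reward - q(x)[a]) * x
                break
            x_next = features(next_state)
            a_next = policy(x_next)
            # Semi-gradient update: bootstrap from q(s', a', w).
            w[a] += alpha * (reward + gamma * q(x_next)[a_next] - q(x)[a]) * x
            x, a = x_next, a_next
    return w
```

Replacing the bootstrap term with the expectation of `q(x_next)` under the epsilon-greedy policy gives Expected Sarsa with function approximation.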

Policy Gradient
  • Learning Policies Directly
  • Advantages of Policy Parameterization
  • The Objective for Learning Policies
  • The Policy Gradient Theorem
  • Estimating the Policy Gradient
  • Actor-Critic Algorithm
  • Actor-Critic with Softmax Policies
  • Demonstration with Actor-Critic
  • Gaussian Policies for Continuous Actions
  • Week 4 Summary
  • Congratulations! Course 4 Preview
  • Module 4 Learning Objectives
  • Weekly Reading: Policy Gradient Methods
  • Policy Gradient Methods
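As a companion to the Week 4 lessons above, here is a minimal sketch of one-step Actor-Critic with a softmax policy on a small discrete-state task, the setting named in the course objectives. The `env` interface is the same hypothetical one as in the earlier sketches, and the gamma**t weighting from the full algorithm is omitted for brevity.

```python
import numpy as np

# Sketch: one-step Actor-Critic. The actor stores action preferences
# theta[s] (softmax policy); the critic stores state values w[s].

def actor_critic(env, num_states, num_actions, num_episodes=500,
                 alpha_theta=0.1, alpha_w=0.1, gamma=0.99):
    theta = np.zeros((num_states, num_actions))
    w = np.zeros(num_states)

    def softmax(prefs):
        z = np.exp(prefs - prefs.max())  # subtract max for stability
        return z / z.sum()

    for _ in range(num_episodes):
        s = env.reset()
        done = False
        while not done:
            pi = softmax(theta[s])
            a = np.random.choice(num_actions, p=pi)
            s_next, reward, done = env.step(a)
            # One-step TD error; a terminal state has value 0.
            target = reward if done else reward + gamma * w[s_next]
            delta = target - w[s]
            w[s] += alpha_w * delta                  # critic update
            # For a softmax policy, grad log pi(a|s) = one_hot(a) - pi.
            grad_log_pi = -pi
            grad_log_pi[a] += 1.0
            theta[s] += alpha_theta * delta * grad_log_pi
            s = s_next
    return theta, w
```

For continuous actions, the same actor update works with a Gaussian policy whose mean and standard deviation are parameterized functions of the state, as in the "Gaussian Policies for Continuous Actions" lesson.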
