Curiosity Driven Deep Reinforcement Learning
Curiosity Driven Deep Reinforcement Learning is available for $54.99, has an average rating of 4.7 based on 113 reviews, includes 27 lectures, and has 1429 subscribers.
You will learn how to code A3C agents, do parallel processing in Python, implement deep reinforcement learning papers, and code the intrinsic curiosity module. This course is intended for advanced students of deep reinforcement learning.
Enroll now: Curiosity Driven Deep Reinforcement Learning
Summary
Title: Curiosity Driven Deep Reinforcement Learning
Price: $54.99
Average Rating: 4.7
Number of Lectures: 27
Number of Published Lectures: 27
Number of Curriculum Items: 27
Number of Published Curriculum Objects: 27
Original Price: $199.99
Quality Status: approved
Status: Live
What You Will Learn
- How to Code A3C Agents
- How to Do Parallel Processing in Python
- How to Implement Deep Reinforcement Learning Papers
- How to Code the Intrinsic Curiosity Module
Who Should Attend
- This course is for advanced students of deep reinforcement learning
If reinforcement learning is to serve as a viable path to artificial general intelligence, it must learn to cope with environments where rewards are sparse or totally absent. Most real-life systems provide rewards only after many time steps, leaving the agent little information on which to build a successful policy. Curiosity-based reinforcement learning solves this problem by giving the agent an innate sense of curiosity about its world, enabling it to explore and learn successful policies for navigating that world.
In this advanced course on deep reinforcement learning, motivated students will learn how to implement cutting-edge artificial intelligence research papers from scratch. This is a fast-paced course for those who are experienced in coding up actor critic agents on their own. We'll code up two papers in this course, using the popular PyTorch framework.
The first paper covers asynchronous methods for deep reinforcement learning, better known as the popular asynchronous advantage actor critic (A3C) algorithm. Here students will discover a new framework for learning that doesn't require a GPU. We will learn how to do parallel processing in Python and use it to train multiple actor critic agents in parallel. We will go beyond the paper's basic implementation and add a more recent improvement to reinforcement learning known as generalized advantage estimation. We will test our agents in the Pong environment from the Open AI Gym's Atari library and achieve nearly world-class performance in just a few hours.
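To make the parallelism concrete, below is a minimal sketch of the pattern, assuming PyTorch's torch.multiprocessing: one global network whose weights live in shared memory, and several worker processes that each gather their own rollouts and push gradients to it. The network sizes, the input_dims=4 / n_actions=2 placeholders, and the elided learning step are illustrative assumptions, not the course's actual code (the course also builds a shared Adam optimizer; plain Adam stands in for it here).

```python
import torch
import torch.multiprocessing as mp
import torch.nn as nn

class ActorCritic(nn.Module):
    """Tiny illustrative network: a shared body with policy and value heads."""
    def __init__(self, input_dims, n_actions):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(input_dims, 128), nn.ReLU())
        self.pi = nn.Linear(128, n_actions)   # policy logits
        self.v = nn.Linear(128, 1)            # state-value estimate

    def forward(self, state):
        x = self.body(state)
        return self.pi(x), self.v(x)

def worker(global_model, optimizer, worker_id):
    # Each process keeps a local copy of the network, gathers its own
    # rollouts, and pushes gradients up to the shared global model.
    local_model = ActorCritic(input_dims=4, n_actions=2)
    local_model.load_state_dict(global_model.state_dict())
    # ... collect a rollout and compute the actor critic loss, then roughly:
    #   loss.backward()
    #   for lp, gp in zip(local_model.parameters(), global_model.parameters()):
    #       gp._grad = lp.grad
    #   optimizer.step()
    #   local_model.load_state_dict(global_model.state_dict())

if __name__ == '__main__':
    global_model = ActorCritic(input_dims=4, n_actions=2)
    global_model.share_memory()   # place the weights in shared memory
    optimizer = torch.optim.Adam(global_model.parameters(), lr=1e-4)

    workers = [mp.Process(target=worker, args=(global_model, optimizer, i))
               for i in range(mp.cpu_count())]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
```

Because each worker explores its own copy of the environment, the gradient updates arriving at the global network are less correlated with one another, which is a large part of why A3C can train well on an ordinary multi-core CPU.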
From there, we move on to the heart of the course: learning in environments with sparse or totally absent rewards. This new paradigm leverages the agent's curiosity about the environment as an intrinsic reward that motivates the agent to explore and learn generalizable skills. We'll implement the intrinsic curiosity module (ICM), which is a bolt-on module for any deep reinforcement learning algorithm. We will train and test our agent in a maze-like environment that only yields rewards when the agent reaches the objective. A clear performance gain over the vanilla A3C algorithm will be demonstrated, conclusively showing the power of curiosity driven deep reinforcement learning.
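As a rough picture of what the ICM looks like in code, here is a minimal sketch that assumes a flat state vector and discrete actions (the paper itself uses a convolutional encoder over pixels); the layer sizes, the eta scaling factor, and the class layout are illustrative assumptions rather than the course's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ICM(nn.Module):
    """Sketch of an intrinsic curiosity module for discrete actions."""
    def __init__(self, input_dims, n_actions, feature_dims=256, eta=0.01):
        super().__init__()
        self.eta = eta
        # Encoder phi: maps raw states into a learned feature space.
        self.encoder = nn.Sequential(nn.Linear(input_dims, feature_dims), nn.ReLU())
        # Inverse model: predicts which action was taken between two states.
        self.inverse = nn.Linear(2 * feature_dims, n_actions)
        # Forward model: predicts the next state's features from (features, action).
        self.forward_model = nn.Linear(feature_dims + n_actions, feature_dims)

    def forward(self, state, next_state, action):
        phi = self.encoder(state)
        phi_next = self.encoder(next_state)

        # Inverse loss: train the encoder to capture what the agent can control.
        action_logits = self.inverse(torch.cat([phi, phi_next], dim=1))
        inverse_loss = F.cross_entropy(action_logits, action)

        # Forward prediction error in feature space is the curiosity signal.
        action_onehot = F.one_hot(action, action_logits.shape[1]).float()
        phi_next_pred = self.forward_model(torch.cat([phi, action_onehot], dim=1))
        forward_loss = F.mse_loss(phi_next_pred, phi_next)

        intrinsic_reward = self.eta * (phi_next_pred - phi_next).pow(2).sum(dim=1) / 2
        return intrinsic_reward.detach(), inverse_loss, forward_loss
```

The inverse model is what keeps the learned features focused on the parts of the environment the agent can actually influence, so the curiosity signal is not dominated by details the agent has no control over.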
Please keep in mind this is a fast-paced course for motivated and advanced students. There will be only a very brief review of the fundamental concepts of reinforcement learning and actor critic methods; from there we will jump right into reading and implementing papers.
The beauty of both the ICM and asynchronous methods is that these paradigms can be applied to nearly any other reinforcement learning algorithm. Both are highly adaptable and can be plugged in with little modification to algorithms like proximal policy optimization, soft actor critic, or deep Q learning.
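To see what "bolt-on" means concretely, the sketch below shows the single point at which an ICM-style intrinsic reward enters an otherwise unchanged training loop. Here env, agent, and icm are hypothetical stand-ins for any Gym-style environment, any agent with choose_action/learn methods, and any curiosity module; they are not objects from the course.

```python
def run_episode_with_curiosity(env, agent, icm):
    """One episode in which curiosity augments the environment's reward.

    env, agent, and icm are hypothetical placeholders: a Gym-style
    environment, an RL agent, and an ICM-like module exposing an
    intrinsic_reward(state, next_state, action) method.
    """
    obs, _ = env.reset()
    done = False
    while not done:
        action = agent.choose_action(obs)
        next_obs, extrinsic_reward, terminated, truncated, _ = env.step(action)
        done = terminated or truncated

        # The only change curiosity requires: add the ICM's prediction-error
        # reward to whatever the environment provides (often zero).
        total_reward = extrinsic_reward + icm.intrinsic_reward(obs, next_obs, action)

        agent.learn(obs, action, total_reward, next_obs, done)
        obs = next_obs
```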
Students will learn how to:
- Implement deep reinforcement learning papers
- Leverage multi-core CPUs with parallel processing in Python
- Code the A3C algorithm from scratch
- Code the ICM from first principles
- Code generalized advantage estimation (a sketch follows this list)
- Modify the Open AI Gym Atari library
- Write extensible, modular code
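The generalized advantage estimation item above boils down to a short backward recursion over a rollout. Here is a minimal NumPy version, assuming a T-step trajectory with one extra bootstrap value appended to the critic's estimates; the function name and argument layout are chosen for illustration, not taken from the course.

```python
import numpy as np

def generalized_advantage_estimate(rewards, values, dones,
                                   gamma=0.99, gae_lambda=0.95):
    """Compute GAE advantages and return targets for one rollout.

    rewards, dones: length-T arrays; values: length T+1, where the final
    entry is the bootstrap value of the state after the last step.
    """
    T = len(rewards)
    advantages = np.zeros(T, dtype=np.float32)
    gae = 0.0
    for t in reversed(range(T)):
        # TD error: delta_t = r_t + gamma * V(s_{t+1}) * (1 - done_t) - V(s_t)
        delta = rewards[t] + gamma * values[t + 1] * (1 - dones[t]) - values[t]
        # Exponentially weighted sum of future TD errors, cut off at episode ends.
        gae = delta + gamma * gae_lambda * (1 - dones[t]) * gae
        advantages[t] = gae
    # Advantage plus value gives the return target for the critic.
    returns = advantages + values[:-1]
    return advantages, returns
```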
This course is launching with a PyTorch implementation; a TensorFlow 2 version is coming.
I’ll see you on the inside.
Course Curriculum
Chapter 1: Introduction
Lecture 1: What You Will Learn in this Course
Lecture 2: How to Succeed in this Course
Lecture 3: Required Background, Software, and Hardware
Chapter 2: Fundamental Concepts
Lecture 1: A Brief Review of Deep Reinforcement Learning and Actor Critic Methods
Lecture 2: Code Review of Basic Actor Critic Agent
Lecture 3: A Crash Course in Asynchronous Advantage Actor Critic Methods
Lecture 4: Our Code Structure
Chapter 3: Paper Analysis: Asynchronous Methods for Deep Reinforcement Learning
Lecture 1: How to Read and Implement Research Papers
Lecture 2: A3C Paper: Abstract and Introduction
Lecture 3: Crash Course in Parallel Processing in Python
Lecture 4: A3C Paper: Related Work, Reinforcement Learning Background
Lecture 5: A3C Paper: The Asynchronous Reinforcement Learning Framework
Lecture 6: Coding our Actor Critic Network
Lecture 7: Learning with Generalized Advantage Estimation
Lecture 8: Coding a Minimalist Replay Memory
Lecture 9: Coding the Shared Adam Optimizer
Lecture 10: A3C Paper: Experiments and Discussion
Lecture 11: How to Modify the Open AI Gym Atari Environments
Lecture 12: Coding Our Main Loop and Evaluating Our Agent
Chapter 4: Paper Analysis: Curiosity Driven Exploration by Self Supervised Prediction
Lecture 1: Paper Overview
Lecture 2: ICM Paper: Abstract and Introduction
Lecture 3: ICM Paper: Curiosity Driven Exploration
Lecture 4: Experimental Setup and Coding Our ICM Module
Lecture 5: ICM Paper: Experiments, Related Work, and Discussion
Lecture 6: Setting Up the Mini World and Training Our ICM Agent
Chapter 5: Appendix
Lecture 1: Setting Up Our Virtual Environment for the New Open AI Gym
Lecture 2: Making Our Agents Compliant with the New Gym Interface
Instructors
- Phil Tabor, Machine Learning Engineer
Rating Distribution
- 1 stars: 0 votes
- 2 stars: 0 votes
- 3 stars: 4 votes
- 4 stars: 29 votes
- 5 stars: 80 votes
Frequently Asked Questions
How long do I have access to the course materials?
You can view and review the lecture materials indefinitely, like an on-demand channel.
Can I take my courses with me wherever I go?
Definitely! If you have an internet connection, courses on Udemy are available on any device at any time. If you don’t have an internet connection, some instructors also let their students download course lectures. That’s up to the instructor though, so make sure you get on their good side!