Unsupervised Deep Learning in Python
Unsupervised Deep Learning in Python is available for $27.99, has an average rating of 4.6 based on 2350 reviews, includes 88 lectures, and has 23809 subscribers.
Enroll now: Unsupervised Deep Learning in Python
Summary
Title: Unsupervised Deep Learning in Python
Price: $27.99
Average Rating: 4.6
Number of Lectures: 88
Number of Published Lectures: 79
Number of Curriculum Items: 88
Number of Published Curriculum Objects: 79
Original Price: $27.99
Quality Status: approved
Status: Live
What You Will Learn
- Understand the theory behind principal components analysis (PCA)
- Know why PCA is useful for dimensionality reduction, visualization, de-correlation, and denoising
- Derive the PCA algorithm by hand
- Write the code for PCA
- Understand the theory behind t-SNE
- Use t-SNE in code
- Understand the limitations of PCA and t-SNE
- Understand the theory behind autoencoders
- Write an autoencoder in Theano and Tensorflow
- Understand how stacked autoencoders are used in deep learning
- Write a stacked denoising autoencoder in Theano and Tensorflow
- Understand the theory behind restricted Boltzmann machines (RBMs)
- Understand why RBMs are hard to train
- Understand the contrastive divergence algorithm to train RBMs
- Write your own RBM and deep belief network (DBN) in Theano and Tensorflow
- Visualize and interpret the features learned by autoencoders and RBMs
- Understand important foundations for OpenAI ChatGPT, GPT-4, DALL-E, Midjourney, and Stable Diffusion
Who Should Attend
- Students and professionals looking to enhance their deep learning repertoire
- Students and professionals who want to improve the training capabilities of deep neural networks
- Students and professionals who want to learn about the more modern developments in deep learning
Target Audiences
- Students and professionals looking to enhance their deep learning repertoire
- Students and professionals who want to improve the training capabilities of deep neural networks
- Students and professionals who want to learn about the more modern developments in deep learning
Ever wondered how AI technologies like OpenAI ChatGPT, GPT-4, DALL-E, Midjourney, and Stable Diffusion really work? In this course, you will learn the foundations of these groundbreaking applications.
This course is the next logical step in my deep learning, data science, and machine learning series. I’ve done a lot of courses about deep learning, and I just released a course about unsupervised learning, where I talked about clustering and density estimation. So what do you get when you put these 2 together? Unsupervised deep learning!
In this course, we’ll start with some very basic stuff – principal components analysis (PCA), and a popular nonlinear dimensionality reduction technique known as t-SNE (t-distributed stochastic neighbor embedding).
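For readers who want a quick preview of the idea, here is a minimal PCA sketch (not the course’s own code) that projects a data matrix onto its top principal components via an eigendecomposition of the covariance matrix; `X` and `k` are placeholder names for the data and the number of components.

```python
# Minimal PCA sketch via eigendecomposition of the covariance matrix.
# Illustration only; `X` is assumed to be an (N x D) matrix of N samples.
import numpy as np

def pca(X, k):
    Xc = X - X.mean(axis=0)                 # center the data
    cov = np.cov(Xc, rowvar=False)          # D x D covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigh: for symmetric matrices
    order = np.argsort(eigvals)[::-1]       # sort by decreasing variance
    W = eigvecs[:, order[:k]]               # top-k principal directions
    return Xc @ W                           # project onto k components

# t-SNE is usually used via a library rather than written by hand, e.g.:
# from sklearn.manifold import TSNE
# Z = TSNE(n_components=2, perplexity=30).fit_transform(X)
```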
Next, we’ll look at a special type of unsupervised neural network called the autoencoder. After describing how an autoencoder works, I’ll show you how you can link a bunch of them together to form a deep stack of autoencoders, which leads to better performance in a supervised deep neural network. Autoencoders can be thought of as a nonlinear form of PCA.
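As a rough preview of what an autoencoder looks like in code, here is a minimal single-hidden-layer sketch using tf.keras; the course itself builds autoencoders directly in Theano and TensorFlow, and the layer sizes below (784 inputs, 128 hidden units) are illustrative assumptions for flattened MNIST images.

```python
# Minimal autoencoder sketch in tf.keras (illustrative only; the course
# builds its autoencoders directly in Theano and TensorFlow).
# Assumes flattened inputs of dimension 784 (e.g., MNIST).
import tensorflow as tf

inputs = tf.keras.Input(shape=(784,))
hidden = tf.keras.layers.Dense(128, activation="relu")(inputs)      # encoder
outputs = tf.keras.layers.Dense(784, activation="sigmoid")(hidden)  # decoder

autoencoder = tf.keras.Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")
# Train the network to reconstruct its own input from the hidden code:
# autoencoder.fit(X_train, X_train, epochs=10, batch_size=128)
```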
Last, we’ll look at restricted Boltzmann machines (RBMs). These are yet another popular unsupervised neural network that you can use, just like autoencoders, to pretrain a supervised deep neural network. I’ll show you an interesting way of training RBMs based on Gibbs sampling, a special case of Markov chain Monte Carlo, and I’ll demonstrate that even though this method is only a rough approximation, it still ends up reducing other cost functions, such as the one used for autoencoders. This method is also known as Contrastive Divergence or CD-k. As in physical systems, we define a quantity called free energy and attempt to minimize it.
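To make the idea concrete, here is a rough sketch of one CD-1 update for a binary RBM in plain NumPy; it is an illustration under my own naming conventions (W for weights, b and c for visible and hidden biases), not the course’s implementation.

```python
# Rough CD-1 (contrastive divergence) sketch for a binary RBM in NumPy.
# Illustrative only; variable names are my own, not the course's.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, b, c, lr=0.01, rng=np.random):
    # Positive phase: hidden probabilities and samples given the data v0
    ph0 = sigmoid(v0 @ W + c)
    h0 = (rng.rand(*ph0.shape) < ph0).astype(v0.dtype)
    # Negative phase: one Gibbs step back to visible, then hidden again
    pv1 = sigmoid(h0 @ W.T + b)
    v1 = (rng.rand(*pv1.shape) < pv1).astype(v0.dtype)
    ph1 = sigmoid(v1 @ W + c)
    # Approximate gradient: data statistics minus model statistics
    W += lr * (v0.T @ ph0 - v1.T @ ph1) / v0.shape[0]
    b += lr * (v0 - v1).mean(axis=0)
    c += lr * (ph0 - ph1).mean(axis=0)
    return W, b, c
```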
Finally, we’ll bring all these concepts together and I’ll show you visually what happens when you use PCA and t-SNE on the features that the autoencoders and RBMs have learned, and we’ll see that even without labels the results suggest that a pattern has been found.
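As a sketch of that visualization step (again, not the course’s code), the snippet below assumes you already have a feature matrix `H` extracted from a trained autoencoder or RBM, plus labels `y` used only for coloring the scatter plots, never for training.

```python
# Project learned hidden features to 2-D with PCA and t-SNE, then plot.
# `H` (N x hidden_dim) and `y` (N labels, for coloring only) are assumed.
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

Z_pca = PCA(n_components=2).fit_transform(H)
Z_tsne = TSNE(n_components=2).fit_transform(H)

fig, axes = plt.subplots(1, 2, figsize=(10, 4))
axes[0].scatter(Z_pca[:, 0], Z_pca[:, 1], c=y, s=5, cmap="tab10")
axes[0].set_title("PCA of learned features")
axes[1].scatter(Z_tsne[:, 0], Z_tsne[:, 1], c=y, s=5, cmap="tab10")
axes[1].set_title("t-SNE of learned features")
plt.show()
```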
All the materials used in this course are FREE. Since this course is the 4th in the deep learning series, I will assume you already know calculus, linear algebra, and Python coding. You’ll want to install Numpy, Theano, and Tensorflow for this course. These are essential items in your data analytics toolbox.
If you are interested in deep learning and you want to learn about modern deep learning developments beyond just plain backpropagation, including using unsupervised neural networks to interpret what features can be automatically and hierarchically learned in a deep learning system, this course is for you.
This course focuses on “how to build and understand”, not just “how to use”. Anyone can learn to use an API in 15 minutes after reading some documentation. It’s not about “remembering facts”, it’s about “seeing for yourself” via experimentation. It will teach you how to visualize what’s happening in the model internally. If you want more than just a superficial look at machine learning models, this course is for you.
“If you can’t implement it, you don’t understand it”
Or as the great physicist Richard Feynman said: “What I cannot create, I do not understand”.
My courses are the ONLY courses where you will learn how to implement machine learning algorithms from scratch.
Other courses will teach you how to plug your data into a library, but do you really need help with 3 lines of code?
After doing the same thing with 10 datasets, you realize you didn’t learn 10 things. You learned 1 thing, and just repeated the same 3 lines of code 10 times…
Suggested Prerequisites:
- calculus
- linear algebra
- probability
- Python coding: if/else, loops, lists, dicts, sets
- Numpy coding: matrix and vector operations, loading a CSV file
- can write a feedforward neural network in Theano or Tensorflow
WHAT ORDER SHOULD I TAKE YOUR COURSES IN?:
- Check out the lecture “Machine Learning and AI Prerequisite Roadmap” (available in the FAQ of any of my courses, including the free Numpy course)
Course Curriculum
Chapter 1: Introduction and Outline
Lecture 1: Introduction and Outline
Lecture 2: How to Succeed in this Course
Lecture 3: Where to get the code and data
Lecture 4: Tensorflow or Theano – Your Choice!
Lecture 5: What are the practical applications of unsupervised deep learning?
Lecture 6: Where does this course fit into your deep learning studies?
Chapter 2: Principal Components Analysis
Lecture 1: What does PCA do?
Lecture 2: How does PCA work?
Lecture 3: Why does PCA work? (PCA derivation)
Lecture 4: PCA only rotates
Lecture 5: MNIST visualization, finding the optimal number of principal components
Lecture 6: PCA implementation
Lecture 7: PCA for NLP
Lecture 8: PCA objective function
Lecture 9: PCA Application: Naive Bayes
Lecture 10: SVD (Singular Value Decomposition)
Lecture 11: Suggestion Box
Chapter 3: t-SNE (t-distributed Stochastic Neighbor Embedding)
Lecture 1: t-SNE Theory
Lecture 2: t-SNE Visualization
Lecture 3: t-SNE on the Donut
Lecture 4: t-SNE on XOR
Lecture 5: t-SNE on MNIST
Chapter 4: Autoencoders
Lecture 1: Autoencoders
Lecture 2: Denoising Autoencoders
Lecture 3: Stacked Autoencoders
Lecture 4: Writing the autoencoder class in code (Theano)
Lecture 5: Testing our Autoencoder (Theano)
Lecture 6: Writing the deep neural network class in code (Theano)
Lecture 7: Autoencoder in Code (Tensorflow)
Lecture 8: Testing greedy layer-wise autoencoder training vs. pure backpropagation
Lecture 9: Cross Entropy vs. KL Divergence
Lecture 10: Deep Autoencoder Visualization Description
Lecture 11: Deep Autoencoder Visualization in Code
Lecture 12: An Autoencoder in 1 Line of Code
Chapter 5: Restricted Boltzmann Machines
Lecture 1: Basic Outline for RBMs
Lecture 2: Introduction to RBMs
Lecture 3: Motivation Behind RBMs
Lecture 4: Intractability
Lecture 5: Neural Network Equations
Lecture 6: Training an RBM (part 1)
Lecture 7: Training an RBM (part 2)
Lecture 8: Training an RBM (part 3) – Free Energy
Lecture 9: RBM Greedy Layer-Wise Pretraining
Lecture 10: RBM in Code (Theano) with Greedy Layer-Wise Training on MNIST
Lecture 11: RBM in Code (Tensorflow)
Chapter 6: The Vanishing Gradient Problem
Lecture 1: The Vanishing Gradient Problem Description
Lecture 2: The Vanishing Gradient Problem Demo in Code
Chapter 7: Applications to NLP (Natural Language Processing)
Lecture 1: Application of PCA and SVD to NLP (Natural Language Processing)
Lecture 2: Latent Semantic Analysis in Code
Lecture 3: Application of t-SNE + K-Means: Finding Clusters of Related Words
Chapter 8: Applications to Recommender Systems
Lecture 1: Recommender Systems Section Introduction
Lecture 2: Why Autoencoders and RBMs work
Lecture 3: Data Preparation and Logistics
Lecture 4: Data Preprocessing Code
Lecture 5: AutoRec
Lecture 6: AutoRec in Code
Lecture 7: Categorical RBM for Recommender System Ratings
Lecture 8: Recommender RBM Code pt 1
Lecture 9: Recommender RBM Code pt 2
Lecture 10: Recommender RBM Code pt 3
Lecture 11: Recommender RBM Code Speedup
Chapter 9: Theano and Tensorflow Basics Review
Lecture 1: (Review) Theano Basics
Lecture 2: (Review) Theano Neural Network in Code
Lecture 3: (Review) Tensorflow Basics
Lecture 4: (Review) Tensorflow Neural Network in Code
Chapter 10: Setting Up Your Environment (FAQ by Student Request)
Lecture 1: Pre-Installation Check
Lecture 2: Anaconda Environment Setup
Lecture 3: How to install Numpy, Scipy, Matplotlib, Pandas, IPython, Theano, and TensorFlow
Chapter 11: Extra Help With Python Coding for Beginners (FAQ by Student Request)
Lecture 1: How to Code by Yourself (part 1)
Lecture 2: How to Code by Yourself (part 2)
Lecture 3: Proof that using Jupyter Notebook is the same as not using it
Lecture 4: Python 2 vs Python 3
Lecture 5: Is Theano Dead?
Chapter 12: Effective Learning Strategies for Machine Learning (FAQ by Student Request)
Lecture 1: How to Succeed in this Course (Long Version)
Lecture 2: Is this for Beginners or Experts? Academic or Practical? Fast or slow-paced?
Lecture 3: Machine Learning and AI Prerequisite Roadmap (pt 1)
Lecture 4: Machine Learning and AI Prerequisite Roadmap (pt 2)
Chapter 13: Appendix / FAQ Finale
Lecture 1: What is the Appendix?
Lecture 2: BONUS
Instructors
- Lazy Programmer Team, Artificial Intelligence and Machine Learning Engineer
- Lazy Programmer Inc., Artificial Intelligence and Machine Learning Engineer
Rating Distribution
- 1 stars: 26 votes
- 2 stars: 31 votes
- 3 stars: 84 votes
- 4 stars: 789 votes
- 5 stars: 1420 votes
Frequently Asked Questions
How long do I have access to the course materials?
You can view and review the lecture materials indefinitely, like an on-demand channel.
Can I take my courses with me wherever I go?
Definitely! If you have an internet connection, courses on Udemy are available on any device at any time. If you don’t have an internet connection, some instructors also let their students download course lectures. That’s up to the instructor though, so make sure you get on their good side!