Keras Deep Learning & Generative Adversarial Networks (GAN)
Keras Deep Learning & Generative Adversarial Networks (GAN), available at $69.99, has an average rating of 4.44 based on 8 reviews, includes 135 lectures, and has 1128 subscribers.
You will learn to build Generative Adversarial Networks (GAN) in Python with Keras and to take deep learning from scratch to an expert level. This course is ideal for beginners who want to learn deep learning and Generative Adversarial Networks from the ground up.
Enroll now: Keras Deep Learning & Generative Adversarial Networks (GAN)
Summary
Title: Keras Deep Learning & Generative Adversarial Networks (GAN)
Price: $69.99
Average Rating: 4.44
Number of Lectures: 135
Number of Published Lectures: 135
Number of Curriculum Items: 135
Number of Published Curriculum Objects: 135
Original Price: $109.99
Quality Status: approved
Status: Live
What You Will Learn
- Generative Adversarial Networks (GAN) using Python with Keras
- Learn Deep Learning from scratch to an expert level
- Python and Keras Generative Adversarial Networks and Deep Learning
- Keras Deep Learning & Generative Adversarial Networks (GAN)
Who Should Attend
- Anyone who wants to take Deep Learning & Generative Adversarial Networks (GAN) from scratch to an expert level, and all beginners who want to learn about Deep Learning and Generative Adversarial Networks
Target Audiences
- Anyone who wants to take Deep Learning & Generative Adversarial Networks (GAN) from scratch to an expert level, and all beginners who want to learn about Deep Learning and Generative Adversarial Networks
Hi There!
Hello and welcome to my new course, Deep Learning with Generative Adversarial Networks (GAN). This course is divided into two halves. In the first half we will deal with deep learning and neural networks, and in the second half, building on that, we will continue with Generative Adversarial Networks, or GANs. So let's see what topics are included in each module, starting with the deep learning one.
As you already know, the artificial intelligence domain is broadly divided into machine learning and deep learning. In fact, deep learning is a form of machine learning, but with its deep neural networks and algorithms it learns high-level features from data without human intervention. That makes deep learning the foundation of future self-intelligent systems.
I start from the very basics: learning the programming language fundamentals and the supporting libraries first, and then proceeding to the core topics.
Let's see what topics are included in this course. First, we will have an introductory theory session about artificial intelligence, machine learning, and artificial-neuron-based deep learning and neural networks.
After that, we will prepare our computer for Python coding by downloading and installing the Anaconda package, and we will check that everything is installed correctly. We will be using the browser-based IDE Jupyter Notebook for our further coding exercises.
I know some of you may not come from a Python programming background. The next few sessions and examples will help you pick up the basic Python skills needed to follow the rest of the course. The topics include Python assignment, flow control, lists and tuples, dictionaries, and functions.
Then we will learn the basics of the NumPy library, which adds support for large multi-dimensional arrays and matrices along with a large collection of functions to operate on them. Next comes the Matplotlib library, a plotting library for Python that works with the numerical arrays produced by NumPy. Finally, we will cover the pandas library, a Python library for data manipulation and analysis.
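Just to give you a flavour of these three libraries before we use them, here is a tiny sketch; the array values and column names are made up for illustration and are not taken from the course datasets:

```python
# Tiny illustrative examples of NumPy, pandas and Matplotlib
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

arr = np.array([[1, 2, 3], [4, 5, 6]])        # a 2-D NumPy array
print(arr.shape, arr.mean())                   # (2, 3) 3.5

df = pd.DataFrame({"sqft": [1200, 1800, 2100],           # made-up columns
                   "price": [300000, 450000, 500000]})
print(df.describe())                           # quick statistical summary

plt.scatter(df["sqft"], df["price"])           # a simple Matplotlib scatter plot
plt.xlabel("sqft")
plt.ylabel("price")
plt.show()
```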
After the basics, we will install the deep learning libraries Theano and TensorFlow, along with Keras, the API used to work with them. We will write all our subsequent code in Keras.
Before we jump into deep learning, we will have a detailed theory session about the basic structure of an artificial neuron and how neurons are combined to form an artificial neural network. Then we will see what an activation function is, the most popular types of activation functions, and the scenarios in which each should be used.
After that, we will look at loss functions: the most popular types and the scenarios in which each should be used.
Like activation and loss functions, we have optimizers, which adjust the neural network based on feedback from training. We will look at the most popular optimizers and how to decide which one to use in a given scenario.
Finally, we will discuss the most popular deep learning network types, their basic structure, and their use cases.
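To make these ideas concrete, here is a minimal sketch of how an activation function, a loss function, and an optimizer are specified in a Keras model; the layer sizes and particular choices are arbitrary examples, not the exact configuration used in the lectures:

```python
# Illustrative only: specifying an activation, a loss and an optimizer in Keras
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Dense(32, activation="relu", input_shape=(10,)),  # ReLU activation in the hidden layer
    layers.Dense(1, activation="sigmoid"),                    # sigmoid output for a yes/no problem
])

model.compile(
    optimizer=keras.optimizers.Adam(learning_rate=0.001),     # one of the popular optimizers
    loss="binary_crossentropy",                                # loss matched to the sigmoid output
    metrics=["accuracy"],
)
model.summary()
```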
The deep learning portion is itself divided into two parts: the first is about creating multi-layer neural network models for text-based datasets, and the second is about creating convolutional neural networks for image-based datasets.
For the text-based, simple feed-forward multi-layer neural network models, we will start with a regression model to predict house prices in King County, USA. The first step will be to fetch the dataset from the Kaggle website and load it into our program.
As the second step, we will do an EDA, or exploratory data analysis, of the loaded data and prepare it for our deep learning model. Then we will define the Keras deep learning model. Once the model is defined, we will compile it, fit our dataset into the compiled model, and wait for the training to complete. After training, the training history and metrics such as accuracy and loss can be evaluated and visualized using Matplotlib. Finally, with the trained model in hand, we will predict King County real estate prices and evaluate the results.
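To give you a rough idea of the workflow in code, here is a condensed sketch of those steps; the file name, the feature handling (including scikit-learn's train_test_split for the split), and the layer sizes are assumptions for illustration rather than the exact lecture code:

```python
# Condensed sketch of the regression workflow (assumed file/column names)
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from tensorflow import keras
from tensorflow.keras import layers

df = pd.read_csv("kc_house_data.csv")                       # Step 1: fetch and load dataset
X = df.drop(columns=["price"]).select_dtypes("number").values
y = df["price"].values                                       # Steps 2-3: (very) minimal preparation
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

model = keras.Sequential([                                    # Step 4: define the Keras model
    layers.Dense(64, activation="relu", input_shape=(X.shape[1],)),
    layers.Dense(64, activation="relu"),
    layers.Dense(1),                                          # linear output unit for regression
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])  # Step 5: compile

history = model.fit(X_train, y_train, epochs=50,              # Step 6: fit
                    validation_split=0.2, verbose=0)

plt.plot(history.history["loss"], label="loss")               # Step 7: visualize training
plt.plot(history.history["val_loss"], label="val_loss")
plt.legend()
plt.show()

print(model.predict(X_test[:5]))                              # Step 8: predict on unseen houses
```

The key point for regression is the single linear output unit paired with a mean-squared-error loss.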
That was a text-based regression model. Now we will proceed with a text-based binary classification model, using a derived version of the Heart Disease Data Set from the UCI Machine Learning Repository. Our aim is to predict whether a person has heart disease based on what the model learns from this dataset. The same steps repeat here.
First we fetch and load the dataset into our program. Then we do an EDA of the loaded data and prepare it for the model, define the Keras deep learning model, compile it, and fit the dataset, visualizing the training history and metrics such as accuracy and loss with Matplotlib once training completes. Finally, we use the trained model to predict heart disease and evaluate the results.
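The code mirrors the regression sketch above; what changes is the output layer and the loss. A hedged sketch, where the 13-feature input shape is typical of this dataset but is an assumption here:

```python
# Same workflow; only the output layer and loss change for binary classification
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Dense(32, activation="relu", input_shape=(13,)),   # 13 input features (assumed)
    layers.Dense(16, activation="relu"),
    layers.Dense(1, activation="sigmoid"),                    # probability of heart disease
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",                     # binary classification loss
              metrics=["accuracy"])
```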
After the text-based binary classification model, we will proceed with a text-based multi-class classification model, using the Red Wine Quality Data Set from the Kaggle website. Our aim is to predict which of several quality categories a red wine sample belongs to based on what the model learns from this dataset. The same steps repeat here.
First we fetch and load the dataset into our program. Then we do an EDA of the loaded data and prepare it for the model, define the Keras deep learning model, compile it, and fit the dataset, visualizing the training history and metrics with Matplotlib once training completes. Finally, we use the trained model to predict the quality of a new set of wine samples and evaluate the categorical results.
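Again only the output side changes: one softmax unit per class and one-hot encoded labels. A minimal sketch, where the 11 features and 6 quality classes are assumptions based on the usual red wine dataset:

```python
# Multi-class version: softmax output and one-hot encoded labels
from tensorflow import keras
from tensorflow.keras import layers

num_classes = 6                       # assumed number of wine-quality categories
model = keras.Sequential([
    layers.Dense(64, activation="relu", input_shape=(11,)),   # 11 physico-chemical features (assumed)
    layers.Dense(32, activation="relu"),
    layers.Dense(num_classes, activation="softmax"),          # one probability per class
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",                # loss for one-hot labels
              metrics=["accuracy"])

# Quality scores (e.g. 3..8) would first be mapped to 0..5 and then one-hot
# encoded with keras.utils.to_categorical(mapped_labels, num_classes) before fitting.
```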
We may spend considerable time, resources, and effort training a deep learning model, so we will learn techniques to save an already trained model. This process is called serialization. We will first serialize a model, then load it in another program and make predictions without repeating the training.
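As an illustration of serialization, here is a minimal sketch using a trivial stand-in model; the file name and format are examples, and the lectures may instead use the older JSON-plus-HDF5 style (model.to_json() together with model.save_weights()):

```python
# Serialization: save a trained model and load it back later
from tensorflow import keras
from tensorflow.keras import layers

# a trivial stand-in for an already trained model
model = keras.Sequential([layers.Dense(1, input_shape=(3,))])
model.compile(optimizer="adam", loss="mse")

model.save("trained_model.keras")                            # architecture + weights + optimizer state
restored = keras.models.load_model("trained_model.keras")    # later, possibly in another program
restored.summary()                                            # ready to predict without re-training
```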
That was about text-based data; we will now proceed with image-based data. In a preliminary session we will have an introduction to digital image basics, covering the composition and structure of a digital image.
Then we will learn about basic image processing using Keras functions. The Keras API includes many classes and functions that help with preprocessing images, and we will go through the most popular and useful ones one by one.
Another important and useful image processing feature in Keras is image augmentation, in which slightly different versions of the images are generated automatically during training. We will learn about single-image augmentation, augmentation of images within a directory structure, and data-frame-based image augmentation.
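Here is a small sketch of how such augmentation is typically set up with Keras' ImageDataGenerator; the directory path, image size, and augmentation ranges are assumptions for illustration:

```python
# Setting up Keras image augmentation with ImageDataGenerator (assumed paths/sizes)
from tensorflow.keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(
    rescale=1.0 / 255,        # scale pixel values to [0, 1]
    rotation_range=30,        # random rotations up to 30 degrees
    width_shift_range=0.1,    # small random horizontal shifts
    height_shift_range=0.1,   # small random vertical shifts
    zoom_range=0.2,           # random zoom
    horizontal_flip=True,     # random left-right flips
)

# Directory-based augmentation: one sub-folder per class under "flowers/train"
train_gen = datagen.flow_from_directory(
    "flowers/train",          # hypothetical folder layout
    target_size=(150, 150),
    batch_size=32,
    class_mode="categorical",
)
```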
Then comes another theory session, on the basics of a convolutional neural network, or CNN. We will learn how the basic CNN layers work: the convolution layer, the pooling layer, and the fully connected layer.
There are also concepts such as stride, padding, and flattening in convolution for image processing; we will cover them one by one.
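To tie these pieces together, here is an illustrative CNN showing convolution, stride, padding, pooling, and flattening; the filter counts and image size are arbitrary choices for the sketch:

```python
# Illustrative CNN showing convolution, stride, padding, pooling and flattening
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Conv2D(32, (3, 3), strides=(1, 1), padding="same",      # "same" padding keeps the size
                  activation="relu", input_shape=(150, 150, 3)),
    layers.MaxPooling2D(pool_size=(2, 2)),                          # pooling layer halves the size
    layers.Conv2D(64, (3, 3), padding="valid", activation="relu"),  # "valid" = no padding
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),                                               # flatten feature maps into a vector
    layers.Dense(128, activation="relu"),                           # fully connected layer
    layers.Dense(5, activation="softmax"),                          # e.g. five flower classes
])
model.summary()
```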
Now we are ready to start with our CNN model. We will design a model that can classify five different types of flowers when given an image of a flower in any of these categories. We will first download the dataset from the Kaggle website, and the first step will be to fetch this dataset from our computer and load it into our program. As the second step, we have to split the dataset manually for training and testing, arranging the images into training and testing folders with each class labelled in a separate sub-folder. Then we will define the Keras deep learning model, compile it, fit our dataset into the compiled model, and wait for training to complete; afterwards the training history and metrics such as accuracy and loss can be evaluated and visualized using Matplotlib. Finally, with the trained model, we will predict the flower type for a new set of images and evaluate the categorical results.
There are many techniques we can use to improve the quality of a model, especially an image-based model. The most popular is dropout regularization. The next is optimizing and adjusting the padding and the filters in the convolution layers. And finally there is optimization using image augmentation, where we tweak the different augmentation options. Applying these optimization techniques manually one by one and comparing the results is tedious, so we will use a technique called hyperparameter tuning, in which the library itself switches between the different options we specify and reports and compares the results without us having to intervene.
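As a hedged illustration of automated tuning, here is a sketch using the separate keras-tuner package; this is one possible tool, not necessarily the exact approach used in the lectures, and the search space (filters, padding, dropout rate) is just an example:

```python
# Hyperparameter tuning sketch with keras-tuner (an assumption, not necessarily the lecture tooling)
import keras_tuner as kt
from tensorflow import keras
from tensorflow.keras import layers

def build_model(hp):
    model = keras.Sequential([
        layers.Conv2D(hp.Choice("filters", [32, 64]), (3, 3),
                      padding=hp.Choice("padding", ["same", "valid"]),
                      activation="relu", input_shape=(150, 150, 3)),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dropout(hp.Float("dropout", 0.2, 0.5, step=0.1)),   # dropout regularization
        layers.Dense(5, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

tuner = kt.RandomSearch(build_model, objective="val_accuracy",
                        max_trials=5, overwrite=True, directory="tuning_demo")
# tuner.search(train_generator, validation_data=val_generator, epochs=5)
# best_model = tuner.get_best_models(num_models=1)[0]
```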
Even though these techniques and building a model from scratch are fun, they are very time consuming and can take ages if you plan to design a large model. In this situation, a technique called transfer learning can help us.
We will take world-renowned, state-of-the-art, pre-trained deep learning models designed by experts and transfer their learning into our own model, so that we can reuse their architecture in the custom model we are building.
The popular state-of-the-art architectures we are going to use are VGG16 and VGG19, designed by deep learning researchers at the University of Oxford, and ResNet50, which uses residual connections to address the vanishing gradient problem and was created for the ImageNet challenge.
We will first download these models using Keras and try simple predictions with the pre-trained weights. Later we will train the network on our flower dataset using VGG16, making a few changes to the model to incorporate our dataset. Since the network architecture is not simple, training on our own computer would take a long time.
So instead of a CPU, we will use a GPU to take advantage of parallel processing. We will use a free cloud-based GPU service provided by Google called Google Colab. First we will try training with VGG16 in Google Colab: we will prepare, zip, and upload the dataset to Colab, extract it using Linux commands, and then run the training, which is almost ten times faster than on the local computer. Once we have the trained model, we will serialize it and make predictions.
The same procedure will then be repeated for VGG19 and ResNet50.
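Here is a minimal transfer-learning sketch with VGG16; the custom classification head and the five-class output are assumptions that match the flower example, not the exact lecture code:

```python
# Transfer learning with a frozen VGG16 base and a small custom head
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.applications import VGG16

base = VGG16(weights="imagenet", include_top=False, input_shape=(150, 150, 3))
base.trainable = False                        # freeze the pre-trained convolutional base

model = keras.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(5, activation="softmax"),    # our own flower classes on top of VGG16 features
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()
```

VGG19 and ResNet50 can be swapped in the same way via tensorflow.keras.applications.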
After gaining enough knowledge about deep learning, we will move on to Generative Adversarial Networks, or GANs.
First we will have an introduction to GANs. We will discuss the basic components inside a GAN: the two networks called the generator and the discriminator. Then we will try a simple transpose convolution on a grayscale image.
Transpose convolution is the opposite of the convolution operation we used in the deep learning section. In the next session we will have a thorough discussion of the generator and discriminator mechanisms inside a GAN, and after that we will implement a simple fully connected GAN using the MNIST dataset, taking a step-by-step approach to building it.
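Here is a tiny sketch of a transpose convolution applied to a single grayscale image; the 4x4 input and the layer settings are made up for the demo:

```python
# A transpose convolution doubling the spatial size of a tiny grayscale "image"
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

image = np.random.rand(1, 4, 4, 1).astype("float32")   # batch of one 4x4 single-channel image

upsample = keras.Sequential([
    layers.Conv2DTranspose(1, kernel_size=(3, 3), strides=(2, 2),
                           padding="same", input_shape=(4, 4, 1)),
])
output = upsample(image)
print(output.shape)   # (1, 8, 8, 1): the transpose convolution has upsampled 4x4 to 8x8
```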
First we will load the MNIST dataset. Then we will define the generator function, followed by the discriminator function, and combine the two into a composite model for our fully connected GAN. This model will then be compiled so that it can be used for training. During training we train the discriminator first and then the generator, and we define helper functions that save the training log at regular intervals, plot the progress graph, and save the images generated after each batch. Once training is complete, we will inspect the images generated by this fully connected GAN and save the model so we can generate images in the future without training it again.
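A compressed sketch of those pieces is shown below; the layer sizes follow common MNIST GAN examples and are assumptions rather than the exact lecture code, and the training loop itself is only summarized in the comments:

```python
# Fully connected GAN skeleton: generator, discriminator and composite model
from tensorflow import keras
from tensorflow.keras import layers

latent_dim = 100

# Generator: random noise vector -> fake 28x28 image
generator = keras.Sequential([
    layers.Dense(256, activation="relu", input_shape=(latent_dim,)),
    layers.Dense(512, activation="relu"),
    layers.Dense(28 * 28, activation="tanh"),   # pixel values scaled to [-1, 1]
    layers.Reshape((28, 28, 1)),
])

# Discriminator: image -> probability that it is a real MNIST digit
discriminator = keras.Sequential([
    layers.Flatten(input_shape=(28, 28, 1)),
    layers.Dense(512, activation="relu"),
    layers.Dense(256, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
discriminator.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Composite model for training the generator: the discriminator is frozen here
discriminator.trainable = False
gan = keras.Sequential([generator, discriminator])
gan.compile(optimizer="adam", loss="binary_crossentropy")

# Training alternates each batch: (1) train the discriminator on real images
# labelled 1 and generated images labelled 0, then (2) train the composite
# model on noise with labels set to 1 so the generator learns to fool it.
```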
Once the fully connected GAN is complete, we will move on to the more advanced deep convolutional GAN, or DCGAN. We will discuss what a DCGAN is and how it differs from a fully connected GAN, and then implement it: we will define the generator function, define the discriminator function, combine them into a composite model, compile it, and train it. Since a DCGAN is more complex than a fully connected GAN, training takes much longer, so we will move the training from our local computer to Google Colab and use its GPU. We will train the model and use it to generate MNIST handwritten digits.
We will use the same deep convolutional GAN with other datasets too, such as the Fashion-MNIST dataset: we will train the model on the GPU and generate images. Both the MNIST handwritten digits and Fashion-MNIST consist of simple grayscale images.
We will also try colour images, using the CIFAR-10 dataset. Because it is a colour dataset, the model needs adjusting, so we will define the generator and the discriminator again, train on CIFAR-10 using the Google Colab GPU, and then generate images with the trained model.
Then we will have a brief discussion about conditional Generative Adversarial Networks, or conditional GANs, comparing them with the normal vanilla GAN and looking at the differences. We will then implement a conditional GAN, which differs slightly from the normal GAN: we first define the basic generator function and add a label embedding to it, then define the basic discriminator function with its own label embedding, and finally combine and compile the two. After that, we will train the GAN model on our local computer and display the generated images, and then upload the same code to Google Colab and train it there. That covers the MNIST handwritten digits dataset; we will then train the conditional GAN on the Fashion-MNIST dataset, the same dataset we used for the fully connected and deep convolutional GANs, and generate images with it once training is complete. In the final session, we will discuss other popular types of GAN, and I will share a Git repository from a deep learning and machine learning expert so you can try those exercises yourself; I will also show you how to fork that repository into your personal Git account so you can experiment with the code and see how it performs.
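To show the label-embedding idea concretely, here is a hedged sketch of a conditional GAN generator; the sizes and the way the label is merged with the noise follow common MNIST examples and are assumptions here:

```python
# Conditional GAN generator: merging a label embedding with the noise vector
from tensorflow import keras
from tensorflow.keras import layers

latent_dim, num_classes = 100, 10

noise = keras.Input(shape=(latent_dim,))
label = keras.Input(shape=(1,), dtype="int32")

# Label embedding: map the class index to a vector the same size as the noise
label_vec = layers.Flatten()(layers.Embedding(num_classes, latent_dim)(label))
merged = layers.multiply([noise, label_vec])        # condition the noise on the label

x = layers.Dense(256, activation="relu")(merged)
x = layers.Dense(28 * 28, activation="tanh")(x)
image = layers.Reshape((28, 28, 1))(x)

cond_generator = keras.Model([noise, label], image)
cond_generator.summary()

# The discriminator is built the same way: its label embedding is merged with
# the flattened input image before the final real/fake decision.
```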
And that is all about the topics currently included in this course. The code, images, models, and weights used in the course have been uploaded and shared in a folder; I will include the download link in the last session or in the resource section of the course. You are free to use the code in your projects, no questions asked.
Also, after completing this course, you will receive a course completion certificate, which will add value to your portfolio.
So that's all for now; see you soon in the classroom. Happy learning and have a great time.
Course Curriculum
Chapter 1: Introduction
Lecture 1: Course Introduction and Table of Contents
Chapter 2: Introduction to AI and Machine Learning
Lecture 1: Introduction to AI and Machine Learning
Chapter 3: Introduction to Deep learning and Neural Networks
Lecture 1: Introduction to Deep learning and Neural Networks
Chapter 4: Setting up Computer – Installing Anaconda
Lecture 1: Setting up Computer – Installing Anaconda
Chapter 5: Python Basics – Flow Control
Lecture 1: Python Basics – Flow Control – Part 1
Lecture 2: Python Basics – Flow Control – Part 2
Chapter 6: Python Basics – List and Tuples
Lecture 1: Python Basics – List and Tuples
Chapter 7: Python Basics – Dictionary and Functions
Lecture 1: Python Basics – Dictionary and Functions – part 1
Lecture 2: Python Basics – Dictionary and Functions – part 2
Chapter 8: Numpy Basics
Lecture 1: Numpy Basics – Part 1
Lecture 2: Numpy Basics – Part 2
Chapter 9: Matplotlib Basics
Lecture 1: Matplotlib Basics – part 1
Lecture 2: Matplotlib Basics – part 2
Chapter 10: Pandas Basics
Lecture 1: Pandas Basics – Part 1
Lecture 2: Pandas Basics – Part 2
Chapter 11: Installing Deep Learning Libraries
Lecture 1: Installing Deep Learning Libraries
Chapter 12: Basic Structure of Artificial Neuron and Neural Network
Lecture 1: Basic Structure of Artificial Neuron and Neural Network
Chapter 13: Activation Functions Introduction
Lecture 1: Activation Functions Introduction
Chapter 14: Popular Types of Activation Functions
Lecture 1: Popular Types of Activation Functions
Chapter 15: Popular Types of Loss Functions
Lecture 1: Popular Types of Loss Functions
Chapter 16: Popular Optimizers
Lecture 1: Popular Optimizers
Chapter 17: Popular Neural Network Types
Lecture 1: Popular Neural Network Types
Chapter 18: King County House Sales Regression Model – Step 1 Fetch and Load Dataset
Lecture 1: King County House Sales Regression Model – Step 1 Fetch and Load Dataset
Chapter 19: Step 2 and 3 EDA and Data Preparation
Lecture 1: Step 2 and 3 EDA and Data Preparation – Part 1
Lecture 2: Step 2 and 3 EDA and Data Preparation – Part 2
Chapter 20: Step 4 Defining the Keras Model
Lecture 1: Step 4 Defining the Keras Model – Part 1
Lecture 2: Step 4 Defining the Keras Model – Part 2
Chapter 21: Step 5 and 6 Compile and Fit Model
Lecture 1: Step 5 and 6 Compile and Fit Model
Chapter 22: Step 7 Visualize Training and Metrics
Lecture 1: Step 7 Visualize Training and Metrics
Chapter 23: Step 8 Prediction Using the Model
Lecture 1: Step 8 Prediction Using the Model
Chapter 24: Heart Disease Binary Classification Model – Introduction
Lecture 1: Heart Disease Binary Classification Model – Introduction
Chapter 25: Step 1 – Fetch and Load Data
Lecture 1: Step 1 – Fetch and Load Data
Chapter 26: Step 2 and 3 – EDA and Data Preparation
Lecture 1: Step 2 and 3 – EDA and Data Preparation – Part 1
Lecture 2: Step 2 and 3 – EDA and Data Preparation – Part 2
Chapter 27: Step 4 – Defining the model
Lecture 1: Step 4 – Defining the model
Chapter 28: Step 5 – Compile Fit and Plot the Model
Lecture 1: Step 5 – Compile Fit and Plot the Model
Chapter 29: Step 5 – Predicting Heart Disease using Model
Lecture 1: Step 5 – Predicting Heart Disease using Model
Chapter 30: Step 6 – Testing and Evaluating Heart Disease Model
Lecture 1: Step 6 – Testing and Evaluating Heart Disease Model – Part 1
Lecture 2: Step 6 – Testing and Evaluating Heart Disease Model – Part 2
Chapter 31: Redwine Quality MultiClass Classification Model – Introduction
Lecture 1: Redwine Quality MultiClass Classification Model – Introduction
Chapter 32: Step 1 – Fetch and Load Data
Lecture 1: Step 1 – Fetch and Load Data
Chapter 33: Step 2 – EDA and Data Visualization
Lecture 1: Step 2 – EDA and Data Visualization
Chapter 34: Step 3 – Defining the Model
Lecture 1: Step 3 – Defining the Model
Chapter 35: Step 4 – Compile Fit and Plot the Model
Lecture 1: Step 4 – Compile Fit and Plot the Model
Chapter 36: Step 5 – Predicting Wine Quality using Model
Lecture 1: Step 5 – Predicting Wine Quality using Model
Chapter 37: Serialize and Save Trained Model for Later Usage
Lecture 1: Serialize and Save Trained Model for Later Usage
Chapter 38: Digital Image Basics
Lecture 1: Digital Image Basics
Chapter 39: Basic Image Processing using Keras Functions
Lecture 1: Basic Image Processing using Keras Functions – Part 1
Lecture 2: Basic Image Processing using Keras Functions – Part 2
Lecture 3: Basic Image Processing using Keras Functions – Part 3
Chapter 40: Keras Single Image Augmentation
Lecture 1: Keras Single Image Augmentation – Part 1
Lecture 2: Keras Single Image Augmentation – Part 2
Chapter 41: Keras Directory Image Augmentation
Lecture 1: Keras Directory Image Augmentation
Chapter 42: Keras Data Frame Augmentation
Lecture 1: Keras Data Frame Augmentation
Chapter 43: CNN Basics
Lecture 1: CNN Basics
Chapter 44: Stride Padding and Flattening Concepts of CNN
Lecture 1: Stride Padding and Flattening Concepts of CNN
Instructors
-
Abhilash Nelson
Computer Engineering Master & Senior Programmer at Dubai
Rating Distribution
- 1 stars: 0 votes
- 2 stars: 0 votes
- 3 stars: 2 votes
- 4 stars: 2 votes
- 5 stars: 4 votes
Frequently Asked Questions
How long do I have access to the course materials?
You can view and review the lecture materials indefinitely, like an on-demand channel.
Can I take my courses with me wherever I go?
Definitely! If you have an internet connection, courses on Udemy are available on any device at any time. If you don’t have an internet connection, some instructors also let their students download course lectures. That’s up to the instructor though, so make sure you get on their good side!