Intro to Deep Learning and Generative Models Course
4.0 (2 ratings) • 23 learners
This course includes
- 40.3 hours of video
- Certificate of completion
- Access on mobile and TV
Course content
1 module • 171 lessons • 40.3 hours of video
Intro to Deep Learning and Generative Models Course (171 lessons • 40.3 hours)
- L1.0 Intro to Deep Learning, Course Introduction • 04:27
- L1.1.1 Course Overview Part 1: Motivation and Topics • 16:27
- L1.1.2 Course Overview Part 2: Organization (Optional) • 17:35
- L1.2 What is Machine Learning? • 17:43
- L1.3.1 Broad Categories of ML Part 1: Supervised Learning • 10:56
- L1.3.2 Broad Categories of ML Part 2: Unsupervised Learning • 07:30
- L1.3.3 Broad Categories of ML Part 3: Reinforcement Learning • 03:49
- L1.3.4 Broad Categories of ML Part 4: Special Cases of Supervised Learning • 10:46
- L1.4 The Supervised Learning Workflow • 17:46
- L1.5 Necessary Machine Learning Notation and Jargon • 22:02
- L1.6 About the Practical Aspects and Tools Used in This Course • 11:26
- Deep Learning News #1, Jan 27 2021 • 15:28
- L2.0 A Brief History of Deep Learning -- Lecture Overview • 02:57
- L2.1 Artificial Neurons • 16:49
- L2.2 Multilayer Networks • 15:11
- L2.3 The Origins of Deep Learning • 20:12
- L2.4 The Deep Learning Hardware & Software Landscape • 07:21
- L2.5 Current Trends in Deep Learning • 08:22
- L3.0 Perceptron Lecture Overview • 05:02
- L3.1 About Brains and Neurons • 12:51
- L3.2 The Perceptron Learning Rule • 31:39
- L3.3 Vectorization in Python • 14:55
- L3.4 Perceptron in Python using NumPy and PyTorch • 28:42
- L3.5 The Geometric Intuition Behind the Perceptron • 18:43
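
As a taste of the perceptron learning rule covered in L3.2 and L3.4, here is a minimal NumPy sketch; the toy data, class structure, and variable names are illustrative and not taken from the course notebooks.

```python
import numpy as np

class Perceptron:
    """Classic perceptron with a unit-step activation."""
    def __init__(self, num_features):
        self.weights = np.zeros(num_features)
        self.bias = 0.0

    def forward(self, x):
        # Predict 1 if the net input w.x + b is positive, else 0
        return np.where(np.dot(x, self.weights) + self.bias > 0.0, 1, 0)

    def train(self, X, y, epochs=10):
        for _ in range(epochs):
            for xi, target in zip(X, y):
                # Update the weights only when an example is misclassified
                error = target - self.forward(xi)
                self.weights += error * xi
                self.bias += error

# Toy linearly separable data (illustrative)
X = np.array([[-2.0, -1.0], [-1.0, -2.0], [1.0, 2.0], [2.0, 1.0]])
y = np.array([0, 0, 1, 1])
model = Perceptron(num_features=2)
model.train(X, y)
print(model.forward(X))  # [0 0 1 1]
```

Because the data are linearly separable, the update rule is guaranteed to converge, which is the geometric point made in L3.5.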
- Deep Learning News #2, Feb 6 2021 • 25:01
- L4.0 Linear Algebra for Deep Learning -- Lecture Overview • 02:11
- L4.1 Tensors in Deep Learning • 13:03
- L4.2 Tensors in PyTorch • 28:34
- L4.3 Vectors, Matrices, and Broadcasting • 16:15
- L4.4 Notational Conventions for Neural Networks • 11:53
- L4.5 A Fully Connected (Linear) Layer in PyTorch • 12:42
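
The broadcasting and fully connected layer ideas from L4.3 and L4.5 fit in a few lines of PyTorch; the tensor shapes below are arbitrary examples.

```python
import torch

# Broadcasting (L4.3): a (4, 3) matrix plus a (3,) vector adds the vector to every row
X = torch.arange(12, dtype=torch.float32).reshape(4, 3)
v = torch.tensor([1.0, 10.0, 100.0])
print((X + v).shape)  # torch.Size([4, 3])

# A fully connected layer (L4.5) computes X @ W.T + b
fc = torch.nn.Linear(in_features=3, out_features=2)
out = fc(X)

# The same result computed by hand from the layer's own parameters
manual = X.matmul(fc.weight.T) + fc.bias
print(torch.allclose(out, manual))  # True
```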
- L5.0 Gradient Descent -- Lecture Overview • 06:28
- L5.1 Online, Batch, and Minibatch Mode • 21:04
- L5.2 Relation Between Perceptron and Linear Regression • 05:21
- L5.3 An Iterative Training Algorithm for Linear Regression • 11:11
- L5.4 (Optional) Calculus Refresher I: Derivatives • 17:37
- L5.5 (Optional) Calculus Refresher II: Gradients • 17:34
- L5.6 Understanding Gradient Descent • 26:34
- L5.7 Training an Adaptive Linear Neuron (Adaline) • 06:43
- L5.8 Adaline Code Example • 33:27
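
To make the Adaline training of L5.7 and L5.8 concrete, here is a sketch of full-batch gradient descent on the mean squared error of a linear neuron; the data, learning rate, and epoch count are made up for illustration.

```python
import numpy as np

def train_adaline(X, y, lr=0.1, epochs=500):
    """Full-batch gradient descent on the MSE loss of a linear neuron."""
    rng = np.random.default_rng(1)
    w = rng.normal(scale=0.01, size=X.shape[1])
    b = 0.0
    for _ in range(epochs):
        output = X.dot(w) + b        # linear activation, no threshold during training
        errors = y - output
        # Move against the gradient of the mean squared error
        w += lr * X.T.dot(errors) / X.shape[0]
        b += lr * errors.mean()
    return w, b

# Illustrative data: y = 2*x1 - x2 + 1 plus a little noise
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = 2 * X[:, 0] - X[:, 1] + 1 + 0.01 * rng.normal(size=100)
w, b = train_adaline(X, y)
print(w, b)  # close to [2, -1] and 1
```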
- Deep Learning News #3, Feb 13 2021 • 20:25
- L6.0 Automatic Differentiation in PyTorch -- Lecture Overview • 04:09
- L6.1 Learning More About PyTorch • 15:47
- L6.2 Understanding Automatic Differentiation via Computation Graphs • 22:48
- L6.3 Automatic Differentiation in PyTorch -- Code Example • 09:03
- L6.4 Training ADALINE with PyTorch -- Code Example • 23:30
- L6.5 A Closer Look at the PyTorch API • 25:03
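
The core idea of L6.2 and L6.3 fits in a tiny example: PyTorch records the forward computation as a graph and `backward()` fills in the gradients. The numbers here are arbitrary.

```python
import torch

x = torch.tensor(3.0)
w = torch.tensor(2.0, requires_grad=True)
b = torch.tensor(1.0, requires_grad=True)

loss = (w * x + b - 10.0) ** 2   # forward pass builds the computation graph
loss.backward()                  # reverse-mode automatic differentiation

# d(loss)/dw = 2 * (w*x + b - 10) * x = 2 * (-3) * 3 = -18
print(w.grad)  # tensor(-18.)
print(b.grad)  # tensor(-6.)
```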
- L7.0 GPU Resources & Google Colab • 19:17
- Deep Learning News #4, Feb 20 2021 • 28:09
- L8.0 Logistic Regression -- Lecture Overview • 06:28
- L8.1 Logistic Regression as a Single-Layer Neural Network • 09:15
- L8.2 Logistic Regression Loss Function • 12:57
- L8.3 Logistic Regression Loss Derivative and Training • 19:57
- L8.4 Logits and Cross Entropy • 06:48
- L8.5 Logistic Regression in PyTorch -- Code Example • 19:03
- L8.6 Multinomial Logistic Regression / Softmax Regression • 17:32
- L8.7.1 OneHot Encoding and Multi-category Cross Entropy • 15:35
- L8.7.2 OneHot Encoding and Multi-category Cross Entropy -- Code Example • 15:05
- L8.8 Softmax Regression Derivatives for Gradient Descent • 19:40
- L8.9 Softmax Regression -- Code Example Using PyTorch • 25:40
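
A minimal softmax regression loop in the spirit of L8.6 through L8.9, using random stand-in data rather than the course's datasets. Note the point of L8.4: `cross_entropy` applies log-softmax internally, so the model outputs raw logits.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(1)
num_features, num_classes = 4, 3
model = torch.nn.Linear(num_features, num_classes)  # one layer: logits = X @ W.T + b
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

X = torch.randn(32, num_features)                   # illustrative random data
y = torch.randint(0, num_classes, (32,))            # integer class labels, not one-hot

for _ in range(100):
    loss = F.cross_entropy(model(X), y)             # expects logits and integer labels
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

predictions = model(X).argmax(dim=1)
print((predictions == y).float().mean())            # training accuracy
```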
- Deep Learning News #5, Feb 27 2021 • 30:59
- L9.0 Multilayer Perceptrons -- Lecture Overview • 03:54
- L9.1 Multilayer Perceptron Architecture • 24:24
- L9.2 Nonlinear Activation Functions • 22:50
- L9.3.1 Multilayer Perceptron -- Code Example Part 1/3 (Slide Overview) • 10:00
- L9.3.2 Multilayer Perceptron in PyTorch -- Code Example Part 2/3 (Jupyter Notebook) • 08:31
- L9.3.3 Multilayer Perceptron in PyTorch -- Code Example Part 3/3 (Script Setup) • 13:37
- L9.4 Overfitting and Underfitting • 31:09
- L9.5.1 Cats & Dogs and Custom Data Loaders • 16:48
- L9.5.2 Custom DataLoaders in PyTorch -- Code Example • 29:29
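
Two sketches for this block: a multilayer perceptron with nonlinear activations (L9.1, L9.2) and a minimal custom `Dataset` of the kind L9.5 builds for the cats-and-dogs images. Layer sizes and the tensor data are illustrative.

```python
import torch

class MLP(torch.nn.Module):
    """Two-hidden-layer MLP with ReLU activations; outputs raw logits."""
    def __init__(self, num_features, num_hidden, num_classes):
        super().__init__()
        self.layers = torch.nn.Sequential(
            torch.nn.Linear(num_features, num_hidden),
            torch.nn.ReLU(),
            torch.nn.Linear(num_hidden, num_hidden),
            torch.nn.ReLU(),
            torch.nn.Linear(num_hidden, num_classes),
        )

    def forward(self, x):
        return self.layers(x)

class TensorPairs(torch.utils.data.Dataset):
    """Minimal custom Dataset (L9.5): implement __len__ and __getitem__."""
    def __init__(self, features, labels):
        self.features, self.labels = features, labels

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, idx):
        return self.features[idx], self.labels[idx]

model = MLP(num_features=784, num_hidden=128, num_classes=10)
dataset = TensorPairs(torch.randn(8, 784), torch.zeros(8).long())
loader = torch.utils.data.DataLoader(dataset, batch_size=4, shuffle=True)
for features, labels in loader:
    print(model(features).shape)  # torch.Size([4, 10])
```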
- Deep Learning News #6, Mar 7 2021 • 36:13
- L10.0 Regularization Methods for Neural Networks -- Lecture Overview • 11:09
- L10.1 Techniques for Reducing Overfitting • 12:17
- L10.2 Data Augmentation in PyTorch • 14:32
- L10.3 Early Stopping • 04:08
- L10.4 L2 Regularization for Neural Nets • 15:48
- L10.5.1 The Main Concept Behind Dropout • 11:08
- L10.5.2 Dropout Co-Adaptation Interpretation • 03:51
- L10.5.3 (Optional) Dropout Ensemble Interpretation • 09:11
- L10.5.4 Dropout in PyTorch • 12:04
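
The two regularizers implemented in this block, L2 weight decay (L10.4) and dropout (L10.5), each take one line in PyTorch; the architecture and hyperparameters below are illustrative.

```python
import torch

model = torch.nn.Sequential(
    torch.nn.Linear(784, 128),
    torch.nn.ReLU(),
    torch.nn.Dropout(p=0.5),   # L10.5: randomly zero half the activations during training
    torch.nn.Linear(128, 10),
)

# L10.4: L2 regularization is typically added via the optimizer's weight_decay argument
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, weight_decay=1e-4)

model.train()  # dropout active
model.eval()   # dropout disabled for evaluation and inference
```

Remembering to switch between `train()` and `eval()` matters: dropout behaves differently in the two modes.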
- L11.0 Input Normalization and Weight Initialization -- Lecture Overview • 02:53
- L11.1 Input Normalization • 08:03
- L11.2 How BatchNorm Works • 15:14
- L11.3 BatchNorm in PyTorch -- Code Example • 08:45
- L11.4 Why BatchNorm Works • 23:38
- L11.5 Weight Initialization -- Why Do We Care? • 06:01
- L11.6 Xavier Glorot and Kaiming He Initialization • 12:22
- L11.7 Weight Initialization in PyTorch -- Code Example • 07:37
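
A sketch combining the two themes of this block: a `BatchNorm1d` layer (L11.2, L11.3) and explicit Kaiming He initialization (L11.6, L11.7). The network itself is an arbitrary example.

```python
import torch

model = torch.nn.Sequential(
    torch.nn.Linear(784, 128),
    torch.nn.BatchNorm1d(128),  # L11.2: normalize pre-activations per minibatch
    torch.nn.ReLU(),
    torch.nn.Linear(128, 10),
)

# L11.6: Kaiming (He) initialization suits ReLU nets; Xavier (Glorot) suits tanh/sigmoid
for module in model.modules():
    if isinstance(module, torch.nn.Linear):
        torch.nn.init.kaiming_normal_(module.weight, nonlinearity="relu")
        torch.nn.init.zeros_(module.bias)
```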
- Deep Learning News #7, Mar 13 2021 • 23:34
- L12.0: Improving Gradient Descent-based Optimization -- Lecture Overview • 06:19
- L12.1 Learning Rate Decay • 17:07
- L12.2 Learning Rate Schedulers in PyTorch • 14:38
- L12.3 SGD with Momentum • 09:05
- L12.4 Adam: Combining Adaptive Learning Rates and Momentum • 15:33
- L12.5 Choosing Different Optimizers in PyTorch • 06:01
- L12.6 Additional Topics and Research on Optimization Algorithms • 12:05
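
How an optimizer and a learning-rate scheduler fit together (L12.2, L12.4), as a sketch; the model, loss, and schedule values are placeholders rather than course settings.

```python
import torch

model = torch.nn.Linear(10, 2)

# L12.4: Adam combines momentum-like moment estimates with per-parameter learning rates
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# L12.2: decay the learning rate by a factor of 10 every 30 epochs
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=30, gamma=0.1)

for epoch in range(90):
    loss = model(torch.randn(32, 10)).pow(2).mean()  # stand-in for the real loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()    # normally called once per minibatch
    scheduler.step()    # called once per epoch to advance the schedule
```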
- L13.0 Introduction to Convolutional Networks -- Lecture Overview • 05:25
- L13.1 Common Applications of CNNs • 09:35
- L13.2 Challenges of Image Classification • 07:45
- L13.3 Convolutional Neural Network Basics • 18:40
- L13.4 Convolutional Filters and Weight-Sharing • 20:20
- L13.5 Cross-correlation vs. Convolution (Old) • 10:17
- L13.5 What's The Difference Between Cross-Correlation And Convolution? • 10:38
- Deep Learning News #8, Mar 20 2021 • 18:03
- L13.6 CNNs & Backpropagation • 05:54
- L13.7 CNN Architectures & AlexNet • 20:17
- L13.8 What a CNN Can See • 13:43
- L13.9.1 LeNet-5 in PyTorch • 13:12
- L13.9.2 Saving and Loading Models in PyTorch • 05:45
- L13.9.3 AlexNet in PyTorch • 15:16
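
A LeNet-5-style network in the spirit of L13.9.1, plus the state-dict save/load pattern from L13.9.2. This is a sketch, not the course's exact code; the file name is hypothetical.

```python
import torch

class LeNet5(torch.nn.Module):
    """LeNet-5-style CNN for 32x32 grayscale images."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = torch.nn.Sequential(
            torch.nn.Conv2d(1, 6, kernel_size=5),    # 32x32 -> 28x28
            torch.nn.Tanh(),
            torch.nn.MaxPool2d(2),                   # -> 14x14
            torch.nn.Conv2d(6, 16, kernel_size=5),   # -> 10x10
            torch.nn.Tanh(),
            torch.nn.MaxPool2d(2),                   # -> 5x5
        )
        self.classifier = torch.nn.Sequential(
            torch.nn.Linear(16 * 5 * 5, 120),
            torch.nn.Tanh(),
            torch.nn.Linear(120, 84),
            torch.nn.Tanh(),
            torch.nn.Linear(84, num_classes),
        )

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(start_dim=1))

model = LeNet5()
print(model(torch.randn(4, 1, 32, 32)).shape)  # torch.Size([4, 10])

# L13.9.2: persist and restore the learned parameters via the state dict
torch.save(model.state_dict(), "lenet5.pt")
model.load_state_dict(torch.load("lenet5.pt"))
```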
- Deep Learning News #9, Mar 27 2021 • 28:10
- L14.0: Convolutional Neural Networks Architectures -- Lecture Overview • 06:18
- L14.1: Convolutions and Padding • 11:14
- L14.2: Spatial Dropout and BatchNorm • 06:46
- L14.3: Architecture Overview • 03:24
- L14.3.1.1 VGG16 Overview • 06:06
- L14.3.1.2 VGG16 in PyTorch -- Code Example • 15:52
- L14.3.2.1 ResNet Overview • 14:42
- L14.3.2.2 ResNet-34 in PyTorch -- Code Example • 18:48
- Deep Learning News #10, Apr 3 2021 • 20:55
- L14.4.1 Replacing Max-Pooling with Convolutional Layers • 08:19
- L14.4.2 All-Convolutional Network in PyTorch -- Code Example • 08:17
- L14.5 Convolutional Instead of Fully Connected Layers • 14:33
- L14.6.1 Transfer Learning • 07:39
- L14.6.2 Transfer Learning in PyTorch -- Code Example • 11:36
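
The transfer-learning recipe of L14.6 in outline, assuming a recent torchvision with the weights-enum API; the two-class head and learning rate are illustrative choices, not the course's settings.

```python
import torch
from torchvision import models

# Start from ImageNet weights, freeze the backbone, replace the classifier head
model = models.resnet34(weights=models.ResNet34_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False                       # freeze pretrained features

model.fc = torch.nn.Linear(model.fc.in_features, 2)   # new head, e.g. cats vs. dogs

# Only the new head's parameters are updated during training
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
```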
- L15.0: Introduction to Recurrent Neural Networks -- Lecture Overview • 03:59
- L15.1: Different Methods for Working With Text Data • 15:58
- L15.2 Sequence Modeling with RNNs • 13:40
- L15.3 Different Types of Sequence Modeling Tasks • 04:32
- L15.4 Backpropagation Through Time Overview • 09:34
- L15.5 Long Short-Term Memory • 16:58
- L15.6 RNNs for Classification: A Many-to-One Word RNN • 29:07
- L15.7 An RNN Sentiment Classifier in PyTorch • 40:00
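
A many-to-one LSTM classifier in the spirit of L15.6 and L15.7, as a sketch: embed the tokens, run the LSTM, and classify from the final hidden state. Vocabulary size, dimensions, and the random token batch are illustrative.

```python
import torch

class LSTMClassifier(torch.nn.Module):
    """Many-to-one sequence classifier (e.g., sentiment analysis)."""
    def __init__(self, vocab_size, embed_dim=64, hidden_dim=128, num_classes=2):
        super().__init__()
        self.embedding = torch.nn.Embedding(vocab_size, embed_dim)
        self.lstm = torch.nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.fc = torch.nn.Linear(hidden_dim, num_classes)

    def forward(self, token_ids):
        embedded = self.embedding(token_ids)     # (batch, seq_len, embed_dim)
        _, (hidden, _) = self.lstm(embedded)     # final hidden state summarizes the sequence
        return self.fc(hidden[-1])               # many-to-one: classify from the last state

model = LSTMClassifier(vocab_size=10_000)
tokens = torch.randint(0, 10_000, (8, 50))       # batch of 8 sequences, length 50
print(model(tokens).shape)                       # torch.Size([8, 2])
```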
- L16.0 Introduction to Autoencoders -- Lecture Overview • 04:45
- L16.1 Dimensionality Reduction • 09:40
- L16.2 A Fully-Connected Autoencoder • 16:35
- L16.3 Convolutional Autoencoders & Transposed Convolutions • 16:08
- L16.4 A Convolutional Autoencoder in PyTorch -- Code Example • 15:21
- L16.5 Other Types of Autoencoders • 05:34
- L17.0 Intro to Variational Autoencoders -- Lecture Overview • 03:16
- L17.1 Variational Autoencoder Overview • 05:24
- L17.2 Sampling from a Variational Autoencoder • 09:27
- L17.3 The Log-Var Trick • 07:35
- L17.4 Variational Autoencoder Loss Function • 12:16
- L17.5 A Variational Autoencoder for Handwritten Digits in PyTorch -- Code Example • 23:13
- L17.6 A Variational Autoencoder for Face Images in PyTorch -- Code Example • 10:06
- L17.7 VAE Latent Space Arithmetic in PyTorch -- Making People Smile (Code Example) • 11:54
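
The two VAE-specific pieces from L17.2 through L17.4, reparameterized sampling with the log-var trick and the KL term of the loss, as a sketch with arbitrary latent dimensions.

```python
import torch

def reparameterize(mu, log_var):
    """Sample z = mu + sigma * eps with sigma = exp(0.5 * log_var) (L17.2, L17.3).

    Predicting log(sigma^2) instead of sigma keeps the variance positive and
    the optimization numerically stable: the 'log-var trick'.
    """
    eps = torch.randn_like(mu)
    return mu + torch.exp(0.5 * log_var) * eps

def kl_divergence(mu, log_var):
    # L17.4: KL(q(z|x) || N(0, I)) term of the VAE loss, summed over latent dimensions
    return -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp(), dim=1)

mu = torch.zeros(4, 16)
log_var = torch.zeros(4, 16)
z = reparameterize(mu, log_var)
print(z.shape, kl_divergence(mu, log_var))  # KL is 0 for an exact standard normal
```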
- L18.0: Introduction to Generative Adversarial Networks -- Lecture Overview • 05:15
- L18.1: The Main Idea Behind GANs • 10:43
- L18.2: The GAN Objective • 26:26
- L18.3: Modifying the GAN Loss Function for Practical Use • 18:50
- L18.4: A GAN for Generating Handwritten Digits in PyTorch -- Code Example • 22:46
- L18.5: Tips and Tricks to Make GANs Work • 17:14
- L18.6: A DCGAN for Generating Face Images in PyTorch -- Code Example • 12:43
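
One alternating training step for a GAN in the spirit of L18.2 through L18.4, using random data in place of real images and the practical non-saturating generator loss (L18.3). Networks and hyperparameters are illustrative.

```python
import torch

latent_dim = 64
generator = torch.nn.Sequential(
    torch.nn.Linear(latent_dim, 256), torch.nn.ReLU(),
    torch.nn.Linear(256, 784), torch.nn.Tanh(),
)
discriminator = torch.nn.Sequential(
    torch.nn.Linear(784, 256), torch.nn.LeakyReLU(0.2),
    torch.nn.Linear(256, 1),   # single real/fake logit
)
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = torch.nn.BCEWithLogitsLoss()

real = torch.randn(32, 784)           # stand-in for a minibatch of real images
fake = generator(torch.randn(32, latent_dim))

# Discriminator step: push real toward label 1, fake toward label 0
d_loss = bce(discriminator(real), torch.ones(32, 1)) + \
         bce(discriminator(fake.detach()), torch.zeros(32, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step (L18.3): non-saturating loss labels fakes as real
g_loss = bce(discriminator(fake), torch.ones(32, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

Note the `detach()` in the discriminator step: it blocks gradients from flowing into the generator while the discriminator is being updated.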
- L19.0 RNNs & Transformers for Sequence-to-Sequence Modeling -- Lecture Overview • 03:05
- L19.1 Sequence Generation with Word and Character RNNs • 17:44
- L19.2.1 Implementing a Character RNN in PyTorch (Concepts) • 09:20
- L19.2.2 Implementing a Character RNN in PyTorch -- Code Example • 25:57
- L19.3 RNNs with an Attention Mechanism • 22:19
- L19.4.1 Using Attention Without the RNN -- A Basic Form of Self-Attention • 16:11
- L19.4.2 Self-Attention and Scaled Dot-Product Attention • 16:09
- L19.4.3 Multi-Head Attention • 07:37
- L19.5.1 The Transformer Architecture • 22:36
- L19.5.2.1 Some Popular Transformer Models: BERT, GPT, and BART -- Overview • 08:41
- L19.5.2.2 GPT-v1: Generative Pre-Trained Transformer • 09:54
- L19.5.2.3 BERT: Bidirectional Encoder Representations from Transformers • 18:31
- L19.5.2.4 GPT-v2: Language Models are Unsupervised Multitask Learners • 09:03
- L19.5.2.5 GPT-v3: Language Models are Few-Shot Learners • 06:41
- L19.5.2.6 BART: Combining Bidirectional and Auto-Regressive Transformers • 10:15
- L19.5.2.7: Closing Words -- The Recent Growth of Language Transformers • 06:10
- L19.6 DistilBert Movie Review Classifier in PyTorch -- Code Example • 17:58
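
The scaled dot-product attention of L19.4.2, softmax(QK^T / sqrt(d_k))V, is the building block behind everything from L19.5 onward. A minimal sketch with an arbitrary sequence length and embedding size:

```python
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(Q, K, V):
    """attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V (L19.4.2)."""
    d_k = Q.size(-1)
    scores = Q @ K.transpose(-2, -1) / d_k ** 0.5  # pairwise query-key similarity
    weights = F.softmax(scores, dim=-1)            # each row sums to 1
    return weights @ V

# One sequence of 5 tokens with 16-dimensional embeddings (illustrative)
x = torch.randn(1, 5, 16)
W_q, W_k, W_v = (torch.nn.Linear(16, 16, bias=False) for _ in range(3))
out = scaled_dot_product_attention(W_q(x), W_k(x), W_v(x))
print(out.shape)  # torch.Size([1, 5, 16])
```

Multi-head attention (L19.4.3) runs several such attention computations in parallel on learned subspaces and concatenates the results.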
