C2 - Improving Deep Neural Networks

Dive deeper into neural networks: learn to fine-tune, optimize, and use TensorFlow for advanced deep learning applications.

https://www.coursera.org/learn/deep-neural-network

In the second course of the Deep Learning Specialization, you will open the deep learning black box to understand the processes that drive performance and generate good results systematically.

By the end, you will know best practices for setting up train/dev/test sets and analyzing bias/variance when building deep learning applications; be able to use standard neural network techniques such as initialization, L2 and dropout regularization, hyperparameter tuning, batch normalization, and gradient checking; be able to implement and apply a variety of optimization algorithms such as mini-batch gradient descent, Momentum, RMSprop, and Adam, and check them for convergence; and be able to implement a neural network in TensorFlow.

SKILLS YOU WILL GAIN

  • TensorFlow
  • Deep Learning
  • Mathematical Optimization
  • Hyperparameter Tuning

Week 1 - Practical Aspects of Deep Learning

Discover and experiment with a variety of initialization methods, apply L2 regularization and dropout to avoid model overfitting, then apply gradient checking to identify errors in a fraud detection model.

Learning Objectives

  • Give examples of how different types of initializations can lead to different results
  • Examine the importance of initialization in complex neural networks
  • Explain the difference between train/dev/test sets
  • Diagnose the bias and variance issues in your model
  • Assess the right time and place for using regularization methods such as dropout or L2 regularization
  • Explain vanishing and exploding gradients and how to deal with them
  • Use gradient checking to verify the accuracy of your backpropagation implementation
  • Apply zeros initialization, random initialization, and He initialization (see the sketch after this list)
  • Apply regularization to a deep learning model

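As an illustration of the initialization and dropout objectives above, here is a minimal NumPy sketch (not the course's assignment code; `init_he` and `dropout_forward` are hypothetical helper names). He initialization scales each weight matrix by `sqrt(2 / fan_in)`, and inverted dropout rescales the surviving activations by `1 / keep_prob` so their expected value is unchanged:

```python
import numpy as np

def init_he(layer_dims, seed=0):
    """He initialization for ReLU networks: W[l] ~ N(0, 2 / n[l-1])."""
    rng = np.random.default_rng(seed)
    params = {}
    for l in range(1, len(layer_dims)):
        fan_in = layer_dims[l - 1]
        # Zeros init would keep every unit identical (symmetry never breaks);
        # large random weights make activations/gradients explode or vanish.
        params[f"W{l}"] = rng.standard_normal((layer_dims[l], fan_in)) * np.sqrt(2.0 / fan_in)
        params[f"b{l}"] = np.zeros((layer_dims[l], 1))
    return params

def dropout_forward(A, keep_prob=0.8, rng=None):
    """Inverted dropout: zero out units at random, rescale the survivors."""
    rng = rng or np.random.default_rng()
    mask = (rng.random(A.shape) < keep_prob).astype(A.dtype)
    return A * mask / keep_prob, mask  # keep the mask to reuse in backprop

params = init_he([4, 10, 1])                    # a small 4-10-1 network
A_drop, mask = dropout_forward(np.ones((10, 5)))
```

Gradient checking, the remaining objective, compares backprop's analytic gradients against the two-sided numerical estimate (J(θ+ε) − J(θ−ε)) / (2ε) for a small ε.
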
Week 2 - Optimization Algorithms

Develop your deep learning toolbox by adding more advanced optimizations, random minibatching, and learning rate decay scheduling to speed up your models.

Learning Objectives

  • Apply optimization methods such as (Stochastic) Gradient Descent, Momentum, RMSprop, and Adam (see the sketch after this list)
  • Use random minibatches to accelerate convergence and improve optimization
  • Describe the benefits of learning rate decay and apply it to your optimization

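A minimal NumPy sketch of these ideas (illustrative only; `random_minibatches`, `adam_step`, and `decayed_lr` are hypothetical helper names, and the hyperparameter defaults follow common conventions). Adam combines a Momentum-style first moment with an RMSprop-style second moment, both bias-corrected:

```python
import numpy as np

def random_minibatches(X, Y, batch_size=64, rng=None):
    """Shuffle examples (stored as columns) and split them into minibatches."""
    rng = rng or np.random.default_rng()
    perm = rng.permutation(X.shape[1])
    X, Y = X[:, perm], Y[:, perm]
    return [(X[:, k:k + batch_size], Y[:, k:k + batch_size])
            for k in range(0, X.shape[1], batch_size)]

def adam_step(param, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update for a single parameter array (t is the 1-based step count)."""
    m = beta1 * m + (1 - beta1) * grad          # Momentum-style first moment
    v = beta2 * v + (1 - beta2) * grad ** 2     # RMSprop-style second moment
    m_hat = m / (1 - beta1 ** t)                # bias correction for early steps
    v_hat = v / (1 - beta2 ** t)
    return param - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

def decayed_lr(lr0, epoch, decay_rate=1.0):
    """Learning-rate decay: lr0 / (1 + decay_rate * epoch)."""
    return lr0 / (1 + decay_rate * epoch)
```
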
Week 3 - Hyperparameter Tuning, Batch Normalization and Programming Frameworks

Explore TensorFlow, a deep learning framework that allows you to build neural networks quickly and easily, then train a neural network on a TensorFlow dataset.

Learning Objectives

  • Master the process of hyperparameter tuning
  • Describe softmax classification for multiple classes
  • Apply batch normalization to make your neural network more robust
  • Build a neural network in TensorFlow and train it on a TensorFlow dataset (see the sketch after this list)
  • Describe the purpose and operation of GradientTape
  • Use tf.Variable to modify the state of a variable
  • Apply TensorFlow decorators to speed up code
  • Explain the difference between a variable and a constant

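A minimal TensorFlow sketch of a single training step, tying several of these objectives together (illustrative only, not the course's assignment notebook; the layer sizes and the `train_step` name are made up). A `tf.Variable` holds trainable, mutable state while a `tf.constant` cannot be reassigned; `GradientTape` records the forward pass so gradients can be computed automatically, and the `@tf.function` decorator compiles the step into a graph for speed:

```python
import tensorflow as tf

# Parameters of a single dense layer (made-up sizes: 4 inputs, 10 classes).
W = tf.Variable(tf.random.normal((10, 4)), name="W")   # trainable, mutable state
b = tf.Variable(tf.zeros((10, 1)), name="b")
c = tf.constant(3.0)                                    # a constant cannot be reassigned
optimizer = tf.keras.optimizers.Adam(learning_rate=0.01)

@tf.function  # compiles the Python function into a TensorFlow graph
def train_step(x, y):
    with tf.GradientTape() as tape:                     # records ops for autodiff
        logits = tf.matmul(W, x) + b
        # Softmax cross-entropy over the class dimension; TensorFlow handles backprop.
        loss = tf.reduce_mean(
            tf.keras.losses.categorical_crossentropy(
                tf.transpose(y), tf.transpose(logits), from_logits=True))
    grads = tape.gradient(loss, [W, b])
    optimizer.apply_gradients(zip(grads, [W, b]))       # updates W and b in place
    return loss

x = tf.random.normal((4, 32))                           # 32 dummy examples
y = tf.one_hot(tf.random.uniform((32,), maxval=10, dtype=tf.int32), 10, axis=0)
print(train_step(x, y).numpy())
```

A `tf.Variable` can also be modified directly with `assign` or `assign_add`, which is what the optimizer does under the hood when it applies the gradients.
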
W1 - Practical Aspects of Deep Learning

Explore practical aspects of deep learning: initialization methods, regularization techniques like dropout, optimization, and gradient checking for neural networks.

W2 - Optimization Algorithms

Advance your deep learning skills with optimization algorithms like Gradient Descent with Momentum, RMSprop, and Adam. Learn techniques such as minibatching and learning rate decay, and understand the real issue with local optima in high-dimensional spaces.

W3 - Hyperparameter Tuning, Batch Normalization and Programming Frameworks

Explore TensorFlow for neural network development, hyperparameter tuning, and batch normalization. Master the tuning process, normalize activations, and understand softmax classification. Dive into deep learning frameworks and let TensorFlow handle backpropagation.
