C5 - Sequence Models

In the fifth course of the Deep Learning Specialization, you will become familiar with sequence models and their exciting applications such as speech recognition, music synthesis, chatbots, machine translation, natural language processing (NLP), and more.

By the end, you will be able to build and train Recurrent Neural Networks (RNNs) and commonly used variants such as GRUs and LSTMs; apply RNNs to character-level language modeling; gain experience with natural language processing and word embeddings; and use HuggingFace tokenizers and transformer models to solve different NLP tasks such as NER and Question Answering.

The Deep Learning Specialization is a foundational program that will help you understand the capabilities, challenges, and consequences of deep learning and prepare you to participate in the development of leading-edge AI technology. It provides a pathway for you to take the definitive step in the world of AI by helping you gain the knowledge and skills to level up your career.

SKILLS YOU WILL GAIN

  • Natural Language Processing
  • Long Short-Term Memory (LSTM)
  • Gated Recurrent Unit (GRU)
  • Recurrent Neural Network
  • Attention Models

Week 1 - Recurrent Neural Networks

Discover recurrent neural networks, a type of model that performs extremely well on temporal data, and several of their variants, including LSTMs, GRUs, and Bidirectional RNNs. A minimal sketch of a single RNN cell follows the learning objectives below.

Learning Objectives

  • Define notation for building sequence models
  • Describe the architecture of a basic RNN
  • Identify the main components of an LSTM
  • Implement backpropagation through time for a basic RNN and an LSTM
  • Give examples of several types of RNN
  • Build a character-level text generation model using an RNN
  • Store text data for processing using an RNN
  • Sample novel sequences in an RNN
  • Explain the vanishing/exploding gradient problem in RNNs
  • Apply gradient clipping as a solution for exploding gradients
  • Describe the architecture of a GRU
  • Use a bidirectional RNN to take information from two points of a sequence
  • Stack multiple RNNs on top of each other to create a deep RNN
  • Use the flexible Functional API to create complex models
  • Generate your own jazz music with deep learning
  • Apply an LSTM to a music generation task
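
Several of the objectives above revolve around the basic RNN cell and around keeping gradients under control during backpropagation through time. The following is a minimal NumPy sketch, not the assignment's exact code: the parameter names (Waa, Wax, Wya, ba, by), the toy shapes, and the element-wise clip_gradients helper are illustrative assumptions, and in practice the parameters would be learned by training.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=0, keepdims=True))  # shift for numerical stability
    return e / e.sum(axis=0, keepdims=True)

def rnn_cell_forward(xt, a_prev, Waa, Wax, Wya, ba, by):
    """One time step: a<t> = tanh(Waa a<t-1> + Wax x<t> + ba), y<t> = softmax(Wya a<t> + by)."""
    a_next = np.tanh(Waa @ a_prev + Wax @ xt + ba)  # hidden-state update
    yt_hat = softmax(Wya @ a_next + by)             # per-step prediction
    return a_next, yt_hat

def clip_gradients(grads, max_value=5.0):
    """Element-wise clipping, one common remedy for exploding gradients."""
    return {name: np.clip(g, -max_value, max_value) for name, g in grads.items()}

# Toy shapes: 3 input features, 5 hidden units, 2 output classes, 4 examples.
n_x, n_a, n_y, m = 3, 5, 2, 4
rng = np.random.default_rng(0)
xt = rng.standard_normal((n_x, m))
a_prev = rng.standard_normal((n_a, m))
Waa = rng.standard_normal((n_a, n_a))
Wax = rng.standard_normal((n_a, n_x))
Wya = rng.standard_normal((n_y, n_a))
ba, by = np.zeros((n_a, 1)), np.zeros((n_y, 1))
a_next, yt_hat = rnn_cell_forward(xt, a_prev, Waa, Wax, Wya, ba, by)
print(a_next.shape, yt_hat.shape)  # (5, 4) (2, 4)
```

Unrolling this cell over the time dimension gives the full forward pass; an LSTM or GRU cell replaces the single tanh update with gated updates but keeps the same step-by-step structure.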

Week 2 - Natural Language Processing & Word Embeddings

Natural language processing with deep learning is a powerful combination. Using word vector representations and embedding layers, train recurrent neural networks with outstanding performance across a wide variety of applications, including sentiment analysis, named entity recognition, and neural machine translation. A cosine-similarity and word-analogy sketch follows the learning objectives below.

Learning Objectives

  • Explain how word embeddings capture relationships between words
  • Load pre-trained word vectors
  • Measure similarity between word vectors using cosine similarity
  • Use word embeddings to solve word analogy problems such as Man is to Woman as King is to __.
  • Reduce bias in word embeddings
  • Create an embedding layer in Keras with pre-trained word vectors
  • Describe how negative sampling learns word vectors more efficiently than other methods
  • Explain the advantages and disadvantages of the GloVe algorithm
  • Build a sentiment classifier using word embeddings
  • Build and train a more sophisticated classifier using an LSTM
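
As a concrete illustration of the similarity and analogy objectives above, here is a minimal NumPy sketch. The word_to_vec dictionary is a made-up stand-in for real pre-trained vectors (for example GloVe embeddings loaded from disk); the words and values are illustrative only.

```python
import numpy as np

def cosine_similarity(u, v):
    """cos similarity = (u . v) / (||u|| * ||v||); values near 1 mean similar directions."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def complete_analogy(a, b, c, word_to_vec):
    """Answer 'a is to b as c is to ?' by maximizing cos(e_b - e_a, e_w - e_c)."""
    e_a, e_b, e_c = word_to_vec[a], word_to_vec[b], word_to_vec[c]
    best_word, best_sim = None, -np.inf
    for w, e_w in word_to_vec.items():
        if w in (a, b, c):
            continue
        sim = cosine_similarity(e_b - e_a, e_w - e_c)
        if sim > best_sim:
            best_sim, best_word = sim, w
    return best_word

# Tiny hand-made "embedding table" just to exercise the functions.
word_to_vec = {
    "man":   np.array([1.0, 0.0, 0.2]),
    "woman": np.array([1.0, 1.0, 0.2]),
    "king":  np.array([0.2, 0.0, 1.0]),
    "queen": np.array([0.2, 1.0, 1.0]),
}
print(cosine_similarity(word_to_vec["man"], word_to_vec["woman"]))
print(complete_analogy("man", "woman", "king", word_to_vec))  # "queen" for this toy table
```

With real pre-trained vectors, the same two functions support the "Man is to Woman as King is to __" analogy task; the embedding matrix would also be the natural weight initializer for a Keras Embedding layer.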

Week 3 - Sequence Models & Attention Mechanism

Augment your sequence models using an attention mechanism, an algorithm that helps your model decide where to focus its attention given a sequence of inputs. Then, explore speech recognition and how to deal with audio data. A minimal beam-search sketch follows the learning objectives below.

Learning Objectives

  • Describe a basic sequence-to-sequence model
  • Compare and contrast several different algorithms for language translation
  • Optimize beam search and analyze it for errors
  • Use beam search to identify likely translations
  • Apply BLEU score to machine-translated text
  • Implement an attention model
  • Train a trigger word detection model and make predictions
  • Synthesize and process audio recordings to create train/dev datasets
  • Structure a speech recognition project
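
Beam search, listed among the objectives above, is easier to analyze with a concrete sketch in hand. The following is a minimal, model-agnostic version under an assumed interface: log_prob_next is a placeholder that returns next-token log-probabilities for a partial sequence, where a real translation model would use its decoder; all names and the toy table are illustrative.

```python
import numpy as np

def beam_search(log_prob_next, beam_width=3, max_len=10, eos_id=0):
    """Keep the beam_width best partial sequences by cumulative log-probability."""
    beams = [([], 0.0)]  # (token list, summed log-probability)
    for _ in range(max_len):
        candidates = []
        for seq, score in beams:
            if seq and seq[-1] == eos_id:          # finished beams are carried over unchanged
                candidates.append((seq, score))
                continue
            log_p = log_prob_next(seq)              # log-probabilities over the vocabulary
            for tok in np.argsort(log_p)[-beam_width:]:
                candidates.append((seq + [int(tok)], score + float(log_p[tok])))
        # Prune to the beam_width highest-scoring candidates.
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_width]
    # Length-normalize so the search does not systematically prefer shorter outputs.
    best_seq, _ = max(beams, key=lambda c: c[1] / max(len(c[0]), 1))
    return best_seq

# Toy "model": a fixed table of next-token log-probabilities keyed by prefix length.
rng = np.random.default_rng(0)
table = np.log(rng.dirichlet(np.ones(6), size=8))   # vocabulary of 6 tokens
print(beam_search(lambda seq: table[len(seq) % 8], beam_width=2, max_len=5))
```

Error analysis then compares the score the model assigns to the beam-search output against the score of a human reference translation to decide whether the search or the model is at fault.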

Week 4 - Transformer Network

Learn how to create Transformer Networks, perform self-attention and multi-head attention, and implement NER and QA with pre-trained models. A scaled dot-product attention sketch follows the learning objectives below.

Learning Objectives

  • Create positional encodings to capture sequential relationships in data
  • Calculate scaled dot-product self-attention with word embeddings
  • Implement masked multi-head attention
  • Build and train a Transformer model
  • Fine-tune a pre-trained transformer model for Named Entity Recognition
  • Fine-tune a pre-trained transformer model for Question Answering
  • Implement a QA model in TensorFlow and PyTorch
  • Fine-tune a pre-trained transformer model to a custom dataset
  • Perform extractive Question Answering
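
To ground the attention-related objectives above, here is a minimal NumPy sketch of scaled dot-product attention, softmax(Q K^T / sqrt(d_k)) V with an optional additive mask. It mirrors the standard Transformer formulation rather than any particular assignment's code; the causal mask and the shapes in the demo are illustrative assumptions.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V, mask=None):
    """softmax(Q K^T / sqrt(d_k) + mask) V, where mask adds -1e9 at blocked positions."""
    d_k = Q.shape[-1]
    scores = Q @ K.swapaxes(-1, -2) / np.sqrt(d_k)          # (..., seq_q, seq_k)
    if mask is not None:
        scores = scores + mask                               # e.g. padding or look-ahead mask
    scores = scores - scores.max(axis=-1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights = weights / weights.sum(axis=-1, keepdims=True)  # softmax over the key axis
    return weights @ V, weights

# Toy example: 4 positions, depth 8, with a causal (look-ahead) mask for decoding.
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((4, 8)) for _ in range(3))
look_ahead_mask = np.triu(np.full((4, 4), -1e9), k=1)        # block attention to future positions
output, attention_weights = scaled_dot_product_attention(Q, K, V, mask=look_ahead_mask)
print(output.shape, attention_weights.shape)  # (4, 8) (4, 4)
```

Multi-head attention runs this computation in parallel over several learned projections of Q, K, and V and concatenates the results; positional encodings are added to the embeddings beforehand so the model can use word order.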
