Lecture 15, Feb. 25th, 2016: Optimization I

In this lecture, we will discuss gradient optimization methods for neural networks (and deep models in general).

Please study the following material in preparation for the class:


Lecture 14, Feb. 22nd, 2016: Regularization II, Ensemble Methods

In this lecture, we will continue our discussion of regularization methods, focusing in particular on ensemble methods and dropout.

Please study the following material in preparation for the class:

Slides from class


Lecture 12, Feb. 15th, 2016: Recurrent Neural Networks

In this lecture we will discuss general questions about RNNs and some of their applications.

Please study the following material in preparation for class:


Lecture 11, Feb. 11th, 2016: Recurrent Neural Networks

More on Recurrent Neural Networks.

Please study the following additional material in preparation for class:

  • Chapter 10 of the Deep Learning Textbook, the rest of the chapter (sections 4 to 7).
  • These slides and these slides from the class lecture

Other relevant material:


Sequence windows

As was briefly discussed in class and mentioned on the getting started page, you will need to split up your data into subsequences in order to train RNNs. To do so, I just added a new transformer to Fuel that does this for you (it should be merged soon, but you can check out the branch or copy-paste the code if you want to use it right now).

You can use it as follows:

from fuel.datasets.youtube_audio import YouTubeAudio
from fuel.transformers.sequences import Window

data = YouTubeAudio('XqaJ2Ol5cC4')
stream = data.get_example_stream()

sequence_size = 10
# Window(offset, source_window, target_window, overlapping, data_stream):
# source and target windows of length `sequence_size`, with the target
# window offset by one time step relative to the source, so each target
# is the source sequence shifted one step ahead (next-step prediction).
windows = Window(1, sequence_size, sequence_size, True, stream)
for source, target in windows.get_epoch_iterator():
    train(source, target)  # your training function
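If you don't want to depend on the unmerged branch, the windowing logic itself is easy to reproduce. Here is a minimal sketch in plain Python (the `window_pairs` helper is hypothetical, not part of Fuel) of the kind of (source, target) pairs the transformer yields for next-step prediction:

```python
def window_pairs(sequence, window_size, offset=1):
    """Yield (source, target) windows over `sequence`, where the target
    window starts `offset` steps after the source window. With offset=1
    each target is the source shifted one step ahead, which is the usual
    setup for training an RNN to predict the next element."""
    for start in range(len(sequence) - window_size - offset + 1):
        source = sequence[start:start + window_size]
        target = sequence[start + offset:start + offset + window_size]
        yield source, target

# Toy example on a short sequence of integers.
pairs = list(window_pairs(list(range(6)), window_size=3))
# pairs[0] is ([0, 1, 2], [1, 2, 3]): the target leads the source by one step.
```

A real audio stream would of course be a long array of samples rather than a list of integers, but the slicing pattern is the same.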