This exclusive course is part of the program: Creative Applications of Deep Learning with TensorFlow
Go at your own pace
4 Sessions / 15 hours of work per session
Included w/ premium membership ($20/month)
Skill Level
Video Transcripts
English, Japanese, Spanish (Castilian), Russian, Chinese, Portuguese
Generative audio, deep generative networks, generative adversarial networks, sketch to photo, neural doodle, style net
Open for Enrollment

Creative Applications of Deep Learning with TensorFlow II


Course Sponsor

Filmed with exclusive content featuring Google Magenta

TensorFlow logo and any related marks are trademarks of Google Inc.

Course Description

This course extends the material from the first course on Creative Applications of Deep Learning, providing an updated view of state-of-the-art techniques in recurrent neural networks. We begin by recapping what we've done so far and show how to extend our practice to the cloud, where we can make use of much better hardware, including state-of-the-art GPU clusters. We'll also see how the models we train can be deployed to production environments. The techniques learned here will give us a much stronger basis for developing even more advanced algorithms in the final course of the program. We then move on to some state-of-the-art developments in Deep Learning, including adding recurrent networks to a variational autoencoder in order to learn where to look and write. We also look at how to use neural networks to model parameterized distributions using a mixture density network. Finally, we look at a recent development in Generative Adversarial Networks capable of learning to translate unpaired image collections so that each collection looks like the other. Along the way, we develop a firm understanding, in theory and code, of the components in each of these architectures that make them possible.


This course is in adaptive mode and is open for enrollment.

Session 1: Cloud Computing, Deploying, TensorBoard (March 3, 2021)
This session recaps the techniques learned in Course 1 and then describes how to set up an environment for learning on the cloud. It then shows how to serve a pre-trained TensorFlow model through a simple RESTful API built as a Python Flask web application. Finally, we look at creating summary operations and monitoring them with TensorBoard, TensorFlow's web UI for monitoring training and the TensorFlow graph. We see how to use this for monitoring images, doing hyperparameter search, and monitoring the optimization of the graph.
13 lessons
1. Overview
2. Cloud Computing with Amazon Web Services
3. Cloud Computing with Google Cloud
4. Cloud Computing with Nimbix
5. Setting Up a Development Environment
6. Docker Setup
7. NVidia-Docker GPU setup
8. Developing in the Cloud
9. Deploying a Web Service with Flask
10. TensorBoard
11. TensorFlow Collections
12. Hyperparameter/Model Search
13. Homework
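The deployment pattern from Lesson 9 can be sketched as follows. This is a minimal, hypothetical example: `predict_fn` is a stand-in for a loaded TensorFlow model, and the route name and JSON schema are illustrative, not the course's actual code.

```python
import numpy as np
from flask import Flask, jsonify, request

app = Flask(__name__)

def predict_fn(x):
    # Stand-in for a pre-trained TensorFlow model: here we just
    # return the mean of the input vector as the "prediction".
    return float(np.mean(x))

@app.route("/predict", methods=["POST"])
def predict():
    # Expect a JSON body like {"input": [1.0, 2.0, 3.0]}.
    payload = request.get_json(force=True)
    x = np.asarray(payload["input"], dtype=np.float32)
    return jsonify({"prediction": predict_fn(x)})

# To serve locally: app.run(host="0.0.0.0", port=5000)
```

In a real deployment, the model would be restored from a checkpoint once at startup, so each request only pays for a single forward pass.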
Session 2: Mixture Density Networks (March 10, 2021)
This session covers a technique for predicting distributions of data called the mixture density network. We cover its importance and its use in the recurrent modeling of handwriting from x,y pen positions.
10 lessons
1. Overview
2. Mixture Density Network (MDN)
3. Gaussian Mixtures
4. MDN Model Code Implementation
5. A Note on Gaussian Covariance Types
6. A Note on Log Likelihoods
7. Training and Sampling the MDN Model
8. Training with Multiple Images
9. Training with Multiple Distributions
10. Homework
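The core of the mixture density network is turning raw network outputs into a valid Gaussian mixture and scoring targets under it. A simplified NumPy sketch (1-D targets, diagonal covariance; the course's actual implementation is in TensorFlow, and the function name here is illustrative):

```python
import numpy as np

def mdn_nll(logits, mu, log_sigma, y):
    """Negative log-likelihood of targets y under a 1-D Gaussian mixture.

    logits, mu, log_sigma: raw network outputs, shape (batch, k_components)
    y: targets, shape (batch, 1)
    """
    # Softmax over logits gives valid mixture weights (positive, sum to 1).
    w = np.exp(logits - logits.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    # Exponentiating keeps the standard deviations positive.
    sigma = np.exp(log_sigma)
    # Log-density of each target under each Gaussian component.
    log_comp = (-0.5 * np.log(2 * np.pi)
                - log_sigma
                - 0.5 * ((y - mu) / sigma) ** 2)
    # Log-sum-exp over components for numerical stability.
    log_mix = np.log(w) + log_comp
    m = log_mix.max(axis=1, keepdims=True)
    ll = m[:, 0] + np.log(np.exp(log_mix - m).sum(axis=1))
    return -ll.mean()
```

Training minimizes this quantity; sampling then draws a component index from the weights and a value from that component's Gaussian.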
Session 3: Modeling Attention with RNNs, DRAW (March 17, 2021)
This session shows how to model one of the most fundamental aspects of intelligence: attention. We'll see how we can teach an autoencoding neural network where to look and where to write. This greatly reduces the amount of information the network needs to learn at each step by conditioning on previous time steps, all while gaining an enormous amount of expressivity.
12 lessons
1. Overview
2. A Note on Visual Spatial Attention
3. Deep Recurrent Attentive Writer (DRAW) Overview
4. DRAW Details
5. DRAW Implementation - Encoder, Decoder, and Variational Layers
6. DRAW Implementation - Filter Banks
7. DRAW Implementation - Read Layer
8. DRAW Implementation - Write Layer
9. DRAW Implementation - Canvas and Recurrence
10. DRAW Implementation - Loss Functions
11. DRAW Training
12. Homework
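The attention mechanism in DRAW reads an N×N glimpse from an image through a pair of Gaussian filter banks (Lessons 6–7). A NumPy sketch of the idea, simplified from the paper and using illustrative variable names:

```python
import numpy as np

def filterbank(center, delta, sigma2, N, size):
    """Build an N x size bank of 1-D Gaussian filters along one image axis.

    center: where the N x N glimpse grid is centred (in pixels)
    delta:  stride between neighbouring grid points
    sigma2: variance of each Gaussian filter
    """
    # Mean pixel location of each of the N filters along this axis.
    mu = center + (np.arange(N) - N / 2 + 0.5) * delta        # (N,)
    a = np.arange(size)                                       # (size,)
    F = np.exp(-((a[None, :] - mu[:, None]) ** 2) / (2 * sigma2))
    F /= F.sum(axis=1, keepdims=True) + 1e-8                  # normalize rows
    return F

def read_glimpse(image, gx, gy, delta, sigma2, N):
    """Apply the x and y filter banks to extract an N x N attended patch."""
    rows, cols = image.shape
    Fx = filterbank(gx, delta, sigma2, N, cols)               # (N, cols)
    Fy = filterbank(gy, delta, sigma2, N, rows)               # (N, rows)
    return Fy @ image @ Fx.T                                  # (N, N)
```

The write operation is the transpose of this read: the decoder's N×N patch is projected back onto the full canvas via the same filter banks.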
Session 4: Image-to-Image Translation with GANs (March 24, 2021)
This session covers a recent development in Generative Adversarial Networks, the CycleGAN, which learns to translate between unpaired image collections so that each collection looks like the other. We'll implement each piece of the architecture, including the encoder, residual transformer, decoder, and PatchGAN discriminator, then connect the pieces, define the loss functions, and train the full model.
14 lessons
1. Overview
2. CycleGAN
3. CycleGAN Implementation - Encoder
4. CycleGAN Implementation - Residual Blocks and Transformer
5. CycleGAN Implementation - Decoder
6. CycleGAN Implementation - PatchGAN
7. A Note on Receptive Field Sizes
8. CycleGAN Implementation - Discriminator
9. CycleGAN Implementation - Connecting the Pieces
10. CycleGAN Implementation - Loss Functions
11. CycleGAN Training
12. CycleGAN Notes
13. Homework
14. Course 2 Wrap up and Course 3 Upcoming
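The key idea behind CycleGAN is that translating X→Y→X should reproduce the original image, expressed as an L1 cycle-consistency loss (Lesson 10). A NumPy sketch with stand-in generator functions; the real generators are the convolutional networks built in Lessons 3–5, and the full objective adds adversarial losses from the discriminators:

```python
import numpy as np

def cycle_loss(x, y, G, F, lam=10.0):
    """L1 cycle-consistency: F(G(x)) should match x, G(F(y)) should match y.

    G translates domain X -> Y; F translates domain Y -> X.
    lam weights the cycle term relative to the adversarial losses.
    """
    forward = np.mean(np.abs(F(G(x)) - x))   # x -> y_hat -> reconstructed x
    backward = np.mean(np.abs(G(F(y)) - y))  # y -> x_hat -> reconstructed y
    return lam * (forward + backward)
```

When G and F are perfect inverses of each other the loss is zero, which is what lets the two collections stay unpaired: no pixel-level correspondence between domains is ever required.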
Learning Outcomes

Below you will find an overview of the Learning Outcomes you will achieve as you complete this course.

Instructors And Guests
What You Need to Take This Course

A short guide to help with the installation of each of these components is provided here: https://github.com/pkmital/CADL

There is also an introductory session for those less familiar with Python: https://github.com/pkmital/CADL/blob/master/session-0/session-0.ipynb

Additional Information

Some knowledge of basic Python programming is assumed, including how to start a Python session, working with Jupyter (IPython) Notebook (for homework submissions), NumPy basics including how to manipulate arrays and images, how to draw images with Matplotlib, and how to work with files using the os package. You should also have completed the first course in the CADL program before taking this second course.
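As a quick check of those NumPy prerequisites, here is a sketch of the kind of array-as-image manipulation the course assumes, using a synthetic grayscale image (displaying it would be `plt.imshow(img, cmap='gray')` with Matplotlib):

```python
import numpy as np

# A synthetic 64x64 grayscale "image" with values in [0, 1].
img = np.linspace(0.0, 1.0, 64 * 64).reshape(64, 64)

# Crop a 32x32 patch from the centre using slicing.
patch = img[16:48, 16:48]

# Normalize the patch to zero mean and unit variance,
# a common preprocessing step before feeding a network.
normalized = (patch - patch.mean()) / patch.std()

# Flip left-right and stack into a small "batch" of 2 images,
# the (batch, height, width) layout networks expect.
batch = np.stack([normalized, normalized[:, ::-1]])
print(batch.shape)  # (2, 32, 32)
```

If slicing, broadcasting, and `np.stack` in this snippet are unfamiliar, the Session 0 notebook linked above is the place to start.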

Peer Assessment Code of Conduct: Part of what makes Kadenze a great place to learn is our community of students. While you are completing your Peer Assessments, we ask that you help us maintain the quality of our community. Please:

  • Be Polite. Show your fellow students courtesy. No one wants to feel attacked - ever. For this reason, insults, condescension, or abuse will not be tolerated.
  • Show Respect. Kadenze is a global community. Our students are from many different cultures and backgrounds. Please be patient, kind, and open-minded when discussing topics such as race, religion, gender, sexual orientation, or other potentially controversial subjects.
  • Post Appropriate Content. We believe that expression is a human right and we would never censor our students. With that in mind, please be sensitive of what you post in a Peer Assessment. Only post content where and when it is appropriate to do so.

Please understand that posts which violate this Code of Conduct harm our community and may be deleted or made invisible to other students by course moderators. Students who repeatedly break these rules may be removed from the course and/or may lose access to Kadenze.

Students with Disabilities: Students who have documented disabilities and who want to request accommodations should refer to the student help article via the Kadenze support center. Kadenze is committed to making sure that our site is accessible to everyone. Configure your accessibility settings in your Kadenze Account Settings.