Creative Applications of Deep Learning with TensorFlow - Enrollment Closed
Parag K. Mital (US) is an artist and interdisciplinary researcher obsessed with the nature of information, representation, and attention. Using film, eye-tracking, EEG, and fMRI recordings, he has worked on computational models of audiovisual perception from the perspective of both robots and humans, often revealing the disjunct between the two through generative film experiences, augmented-reality hallucinations, and expressive control of large audiovisual corpora. Through this process he balances his scientific and arts practice, with each reflecting on the other: the science driving the theories, and the artwork redefining the questions asked within the research. His work has been exhibited internationally, including at Prix Ars Electronica, ACM Multimedia, the Victoria & Albert Museum, London’s Science Museum, the Oberhausen Short Film Festival, and the British Film Institute, and has been featured in Fast Company, the BBC, The New York Times, CreativeApplications.Net, and Create Digital Motion.
Harmony Jiroudek is active in the fields of vocal performance, arts education technology, and instructional design.
Jiroudek, an accomplished mezzo-soprano, has participated in several American and world premiere performances, including Michael Gordon’s What to Wear, George Aperghis’ Sextuor: L’Origine des espèces, and David Rosenboom’s Attunement. Other noteworthy performances include Igor Stravinsky’s Les Noces and Mavra, Steve Reich’s Music for 18 Musicians, Bruno Maderna’s Satyricon, and J.S. Bach’s Cantata 170 with guest violinist Elizabeth Blumenstock.
She received a Bachelor of Fine Arts and a Master of Fine Arts in vocal performance from the California Institute of the Arts, where she also served on the voice faculty from 2012 to 2014.
This program is for anyone curious about how Deep Learning, AI, or Machine Learning can engage with their own ideas or practice. Whether you are a traditional computer scientist, psychologist, journalist, creative coder, or simply curious, no machine learning background is assumed. We cover the fundamentals all the way through state-of-the-art Deep Learning applications. Everything is built using Python and TensorFlow and applied through guided homework assignments. Unlike courses that focus solely on theory and offer little practical guidance for understanding Deep Learning, this course is entirely application-led and taught inside the Python console with real-world examples and code.
The background you gain in this program will allow you to apply what you learn to other frameworks, such as Keras, Caffe, or Theano, with greater ease, while building a strong foundation in the core components of Deep Learning. Our approach requires you to problem-solve, apply your work to a creative problem, and share the results of your work with your peers. We also build everything from scratch in TensorFlow and cover techniques for regression, classification, image preprocessing, audio signal processing, audio classification, image synthesis with generative networks, recurrent neural network modeling of text, MIDI, and audio, handwriting modeling and synthesis, and how to train and deploy models in the cloud on Linux systems.
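As a small taste of the "from scratch" approach described above, here is a minimal sketch of one of the fundamentals the early sessions build toward: fitting a linear regression by gradient descent. This is illustrative only, not actual course material, and it uses plain NumPy rather than TensorFlow so the idea stands on its own; the data and learning rate are made up for the example.

```python
import numpy as np

# Hypothetical data: points on the line y = 3x + 0.5 (no noise, for clarity).
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)
y_true = 3.0 * x + 0.5

# Start from an uninformed guess and descend the mean-squared-error surface.
w, b = 0.0, 0.0
lr = 0.1
for _ in range(500):
    y_pred = w * x + b
    err = y_pred - y_true
    # Gradients of mean squared error with respect to w and b.
    grad_w = 2 * np.mean(err * x)
    grad_b = 2 * np.mean(err)
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))  # converges toward 3.0 and 0.5
```

The same train-by-gradient-descent loop, written with TensorFlow's variables and optimizers instead of hand-derived gradients, is the pattern the course scales up to deep networks.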
- TensorFlow Modeling: Create, train, and deploy TensorFlow models
- Generative Modeling: Apply generative models of image, audio, handwriting, and text using various techniques, such as dilated convolutions, mixture density networks, generative adversarial networks, recurrent neural networks, and attention-based recurrent neural networks.
- Representation Learning: Learn, inspect, and creatively apply representations from deep layers of a pre-trained model to applications such as Deep Dream, Style Net, or Neural Doodle.
- Session 1: Introduction to TensorFlow
- Session 2: Training a Network with TensorFlow
- Session 3: Unsupervised and Supervised Learning
- Session 4: Visualizing and Hallucinating Representations
- Session 5: Generative Models
- Session 1: Cloud Computing, Deploying, TensorBoard
- Session 2: Mixture Density Networks
- Session 3: Modeling Attention with RNNs, DRAW
- Session 4: Image-to-Image Translation with GANs
- Session 1: Modeling Music and Art: Google Brain’s Magenta Lab
- Session 2: Modeling Language: Natural Language Processing
- Session 3: Autoregressive Image Modeling with PixelCNN
- Session 4: Modeling Audio with WaveNet and NSynth
- Create an animation of Deep Dream, Style Net, and a combination of the two using Inception, VGG, or I2V networks
- Generate an entirely fake publication using any combination of handwriting synthesis, image synthesis, and text generation
- Explore the use of reinforcement learning and RNNs to generate music using Google Brain's Magenta libraries
Prerequisites: Some programming experience with Python or a similar language, e.g. MATLAB, Octave, C/C++, Java, or Processing. macOS or Linux environments are preferred, but Windows users are still supported via a virtual machine or Docker, which emulate a Linux OS. Some background with terminal/command-line operations. A Python 3 environment (Python 2 users can easily install a new environment for Python 3).
- A verified Specialist Certificate that proves you completed the Program and mastered the subject.*
- A verified course Certificate for each individual course you complete in the program.*
* Each certificate earned is endorsed by Kadenze and the offering institution(s).
Price: $500 USD
Featuring content created in collaboration with Google Magenta and NVIDIA
TensorFlow logo and any related marks are trademarks of Google Inc.
Why join a Program?
Becoming a specialist in a subject requires a highly tuned learning experience connecting multiple related courses. Programs unlock exclusive content that helps you develop a deep understanding of your subject. From your first course to your final summative assessment, our thoughtfully curated curriculum enables you to demonstrate your newly acquired skills.