Chapter 12: Deep Learning Track
Deep learning is the machine learning technique behind the most exciting capabilities in diverse areas like robotics, natural language processing, image recognition, and artificial intelligence. In this unit, you’ll learn the mathematical foundations of deep learning and how to implement deep networks for some common problem domains. You’ll learn how to feed mostly unstructured, raw data into a system that automatically identifies relationships and patterns and uses them to make predictions. You’ll also gain hands-on, practical knowledge of how to use deep learning with Keras 2.0, a cutting-edge library for deep learning in Python. As you work through the Deep Learning Track, you’ll develop ideas for your second capstone project and settle on one proposal at the end of the unit.
Note: Keep in mind that the new techniques introduced in this unit may apply to your second capstone project.
Deliverables: three ideas for your second capstone project, and your Capstone Project 2 proposal.
Please email [email protected] to request your Lynda login information.
- Brainstorm three ideas for Capstone Project 2
- Gain an overview of deep learning, including:
- The principles of neural networks and deep networks, including algorithms such as backpropagation
- How to implement basic deep learning using Python libraries such as Keras and TensorFlow
- Common deep learning architectures, such as CNNs for images and video, and RNNs for sequential data
- How transfer learning can save significant time and computational power
- Neural Networks: A machine learning algorithm loosely based on the brain
- Deep Neural Networks: An evolution of neural networks with many layers and complex architectures
- Recurrent Neural Networks (RNNs): A deep neural network architecture designed for sequential data such as speech and time series
- Convolutional Neural Networks (CNNs): A deep neural network architecture designed for spatial data such as images and video
- Transfer Learning: Techniques that use a pre-trained neural network to jumpstart the training of another, reducing the time and resources required for training
- Backpropagation: A common algorithm for training a neural network
- Keep in mind the basics of supervised and unsupervised learning.
- Keep in mind lessons learned from the first capstone project; the second project is an opportunity to improve on all the previous skills and apply new ones.
A slightly more technical, but still gentle introduction to the principles behind neural networks, including backpropagation.
Most explanations of deep learning are tough to understand if you aren't fluent in math and computers, or they make it sound like magic. This is a description of deep neural networks with no fancy math and no computer jargon.
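The ideas above can be made concrete with a minimal sketch of backpropagation in plain NumPy. This is not any of the linked materials' code: the toy task (fitting sin(x)), the network size, and the learning rate are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task (illustrative only): learn y = sin(x) on [-pi, pi].
X = rng.uniform(-np.pi, np.pi, size=(64, 1))
y = np.sin(X)

# One hidden layer of 8 tanh units, one linear output unit.
W1 = rng.normal(0, 0.5, size=(1, 8))
b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, size=(8, 1))
b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)      # hidden activations
    return h, h @ W2 + b2         # prediction

_, pred0 = forward(X)
initial_loss = np.mean((pred0 - y) ** 2)

lr = 0.1
for _ in range(500):
    h, pred = forward(X)
    err = pred - y                         # gradient of the loss w.r.t. pred (up to a constant)
    # Backpropagation: apply the chain rule layer by layer, output to input.
    dW2 = h.T @ err / len(X)
    db2 = err.mean(axis=0)
    dh = err @ W2.T * (1 - h ** 2)         # tanh'(z) = 1 - tanh(z)^2
    dW1 = X.T @ dh / len(X)
    db1 = dh.mean(axis=0)
    # Gradient descent step.
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

_, pred = forward(X)
final_loss = np.mean((pred - y) ** 2)
```

After training, the mean squared error is lower than at initialization, which is all backpropagation plus gradient descent promises: each step moves the weights downhill on the loss surface.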
This section covers a set of Python tools and libraries that make building and tuning deep learning networks as easy and painless as possible.
- Interactive Exercises: Deep Learning in Python: Learn about the importance of deep learning in diverse areas of innovation, specifically how to use deep learning with Keras 2.0, the latest version of a cutting-edge library for deep learning in Python.
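As a rough sketch of what the Keras workflow looks like (this is not the exercises' own code, and the input shape, layer sizes, and loss are arbitrary placeholders for a binary classification problem):

```python
# Minimal Keras model definition: stack layers, then compile with a
# loss and an optimizer. Shapes and sizes here are illustrative only.
from tensorflow import keras

model = keras.Sequential([
    keras.Input(shape=(10,)),                        # 10 input features (assumed)
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),     # binary output
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
```

From here, training is a single call such as `model.fit(X_train, y_train, epochs=10)`; Keras handles backpropagation and weight updates internally.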
- Interactive Exercises: Building and Deploying Applications Using TensorFlow: TensorFlow is an open-source software library for numerical computation using data flow graphs. It was originally developed by researchers and engineers on the Google Brain team within Google’s Machine Intelligence research organization for machine learning and deep neural network research.
- Article: TensorFlow or Keras? - As a (relative) newbie, you might be unsure which library to use when implementing your deep learning ideas. This short article gives you insight into which one to choose and when.
- Article: Practical Advice for Implementing Deep Networks - Deep learning has many knobs to tune: activation functions, learning rates, architecture, optimizer, and output layer. Getting them all right by trial and error can take a while. This blog post presents practical tips for training deep neural networks based on the author’s experience (rooted mainly in TensorFlow).
As you may have gathered, deep networks can have an infinite number of potential architectures and parameters to tune. How do you know which ones work? Fortunately, experienced data scientists have identified a few practical architectures for the most common types of problems and data that you might encounter. In this section, we’ll cover some of them, with tutorials on how to implement and tune them in practice.
Convolutional neural networks (CNNs) underlie many applications in image recognition. This tutorial provides an introductory overview of how CNNs work and introduces the concept of convolution in deep learning.
This hands-on tutorial shows you how to identify digits in the MNIST dataset using a convolutional neural network. MNIST is the “Hello World” of deep learning, and CNNs are at the core of most image recognition tasks today.
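The core operation of a CNN layer can be sketched directly in NumPy. This is a simplified, single-channel version (not the tutorial's code); the tiny image and Sobel-style edge-detector kernel are illustrative:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2-D cross-correlation, the operation CNN layers
    actually compute (deep learning calls it "convolution")."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Dot product of the kernel with one image patch.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A 5x5 image that is dark on the left, bright on the right:
# a sharp vertical edge between columns 2 and 3.
image = np.zeros((5, 5))
image[:, 3:] = 1.0

# Sobel-style kernel that responds to vertical edges.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

response = conv2d(image, sobel_x)   # strong response where the edge is
```

Flat regions of the image produce zero response, while windows straddling the edge produce large values; in a trained CNN, kernels like this are learned from data rather than hand-designed.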
Recurrent neural networks (RNNs) are a neural network architecture designed for sequential data, including text, speech, and time series. They’re at the frontier of text and NLP technologies today. This tutorial provides a high-level overview of the methodology.
This hands-on tutorial will show you how to implement RNNs using TensorFlow to predict the next event in a time series dataset. This approach can be applied to any kind of sequential data.
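The recurrence at the heart of an RNN can be sketched in a few lines of NumPy. This is a minimal Elman-style cell, not the tutorial's TensorFlow code, and the weights are untrained random stand-ins; the point is only that a hidden state is carried forward across time steps, so the order of inputs matters:

```python
import numpy as np

rng = np.random.default_rng(1)

# Elman-style RNN cell: h_t = tanh(W_x * x_t + W_h @ h_{t-1} + b)
hidden = 4
W_x = rng.normal(0, 0.5, size=(1, hidden))       # input-to-hidden weights
W_h = rng.normal(0, 0.5, size=(hidden, hidden))  # hidden-to-hidden (recurrent) weights
b = np.zeros(hidden)
W_out = rng.normal(0, 0.5, size=(hidden, 1))     # hidden-to-output weights

def run(sequence):
    """Unroll the cell over a sequence of scalars and predict the next value."""
    h = np.zeros(hidden)                  # hidden state starts at zero
    for x in sequence:
        h = np.tanh(x * W_x[0] + h @ W_h + b)
    return (h @ W_out).item()             # read the prediction off the final state
```

Because the hidden state accumulates information step by step, feeding the same values in a different order produces a different prediction; training (by backpropagation through time) would tune these weights so the prediction matches the next event.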
Transfer learning comprises techniques that let the learning accomplished by one neural network model be reused in another, saving a great deal of time and computational power. These techniques allow data scientists to take complex, published architectures that have already been trained on large amounts of data using immense GPU-based computational power and apply them directly to their own datasets.
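The freeze-and-retrain idea behind transfer learning can be sketched in NumPy. In practice the frozen layer's weights would come from a network pre-trained on a large source dataset (and in Keras you would set `layer.trainable = False`); here they are random stand-ins, and the downstream task (fitting cos(x)) is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in for a pre-trained feature extractor: in real transfer
# learning these weights come from a model trained elsewhere.
W1 = rng.normal(0, 1.0, size=(1, 16))
b1 = rng.normal(0, 0.5, size=16)

def features(X):
    return np.tanh(X @ W1 + b1)   # frozen layer: never updated below

# New task and data: fit only a small "head" on top of the frozen features.
X = rng.uniform(-np.pi, np.pi, size=(64, 1))
y = np.cos(X)

W1_before = W1.copy()             # record frozen weights to verify they stay fixed
W2 = np.zeros((16, 1))            # the only trainable parameters
H = features(X)                   # features can be precomputed once
initial_loss = np.mean((H @ W2 - y) ** 2)

lr = 0.1
for _ in range(300):
    err = H @ W2 - y
    W2 -= lr * (H.T @ err) / len(X)   # gradient descent on the head only

final_loss = np.mean((H @ W2 - y) ** 2)
```

Only the small output layer is trained, so each step is cheap, yet the loss still drops; the expensive feature extractor is reused untouched, which is exactly the saving transfer learning offers.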
This amazing talk by Gene Kogan describes a new technology called Generative Adversarial Networks (GANs), which are composed of two networks working together competitively to produce realistic images, video or other media. For anyone who’s interested in synthesizing realistic media, this talk is a must-see.
A repository of tutorials on how to use TensorFlow for various applications. Covers everything from the basics, through specific implementations of various neural network architectures, all the way up to software development in TensorFlow.