
# Deep learning courses coursera #18


Status: Open. Wants to merge 61 commits into master.

## Commits (61)
- 6f5630f Update README.md (prateeshreddy, Sep 22, 2020)
- f20a8e5 Updated files version 2020 (prateeshreddy, Sep 22, 2020)
- 712d04e Delete Building your Deep Neural Network - Step by Step.ipynb (prateeshreddy, Sep 22, 2020)
- 2669430 Delete Deep Neural Network - Application.ipynb (prateeshreddy, Sep 22, 2020)
- 49fa656 Delete Logistic Regression with a Neural Network mindset.ipynb (prateeshreddy, Sep 22, 2020)
- f850bcf Delete Planar data classification with one hidden layer.ipynb (prateeshreddy, Sep 22, 2020)
- 9887145 Delete Gradient Checking.ipynb (prateeshreddy, Sep 23, 2020)
- 4935aa5 Delete Initialization.ipynb (prateeshreddy, Sep 23, 2020)
- 0ef00a6 Delete Optimization methods.ipynb (prateeshreddy, Sep 23, 2020)
- 1c64941 Delete Regularization.ipynb (prateeshreddy, Sep 23, 2020)
- 416abb4 Delete Tensorflow Tutorial.ipynb (prateeshreddy, Sep 23, 2020)
- 451eaed Add files via upload (prateeshreddy, Sep 23, 2020)
- b6d66da updated 2020 version Notebooks (prateeshreddy, Sep 23, 2020)
- 4296ef3 Delete Convolution model - Application - v1.ipynb (prateeshreddy, Sep 23, 2020)
- 2951720 Delete Convolution model - Step by Step - v2.ipynb (prateeshreddy, Sep 23, 2020)
- 2943ba6 Add files via upload (prateeshreddy, Sep 23, 2020)
- 84d362c Delete Residual Networks - v1.ipynb (prateeshreddy, Sep 23, 2020)
- 3c86a2a Add files via upload (prateeshreddy, Sep 23, 2020)
- 264607a Delete Autonomous driving application - Car detection - v1.ipynb (prateeshreddy, Sep 23, 2020)
- 29c8e40 Add files via upload (prateeshreddy, Sep 23, 2020)
- 8596b75 Delete Face Recognition for the Happy House - v2.ipynb (prateeshreddy, Sep 23, 2020)
- ce0b42b Add files via upload (prateeshreddy, Sep 23, 2020)
- feebf0b Delete Art Generation with Neural Style Transfer - v1.ipynb (prateeshreddy, Sep 23, 2020)
- 2fde618 Add files via upload (prateeshreddy, Sep 23, 2020)
- 52fe22f Delete Building a Recurrent Neural Network - Step by Step - v1.ipynb (prateeshreddy, Sep 23, 2020)
- 0d426a6 Add files via upload (prateeshreddy, Sep 23, 2020)
- dd0c945 Delete Dinosaurus Island -- Character level language model final - v3… (prateeshreddy, Sep 23, 2020)
- fa342f6 Add files via upload (prateeshreddy, Sep 23, 2020)
- eac7342 Delete Jazz improvisation with LSTM - v1.ipynb (prateeshreddy, Sep 23, 2020)
- 9299184 Add files via upload (prateeshreddy, Sep 23, 2020)
- 45aa012 Delete Emojify - v2.ipynb (prateeshreddy, Sep 23, 2020)
- a5f5034 Add files via upload (prateeshreddy, Sep 23, 2020)
- 6406ed7 Delete Operations on word vectors - v2.ipynb (prateeshreddy, Sep 23, 2020)
- 68fcc25 Add files via upload (prateeshreddy, Sep 23, 2020)
- bbbd86b Delete Neural machine translation with attention - v2.ipynb (prateeshreddy, Sep 23, 2020)
- b0572b4 Add files via upload (prateeshreddy, Sep 23, 2020)
- a01da31 Delete Trigger word detection - v1.ipynb (prateeshreddy, Sep 23, 2020)
- 5192102 updated 2020 version Notebooks (prateeshreddy, Sep 23, 2020)
- fbf2c06 Create Week 1 Quiz - Introduction to deep learning.md (prateeshreddy, Sep 23, 2020)
- 8ae976f Create Week 2 Quiz - Neural Network Basics.md (prateeshreddy, Sep 23, 2020)
- 4804189 Create Week 3 Quiz - Shallow Neural Networks.md (prateeshreddy, Sep 23, 2020)
- 15ea674 Create Week 4 Quiz - Key concepts on Deep Neural Networks.md (prateeshreddy, Sep 23, 2020)
- f52fcff Create Week 1 Quiz - Practical aspects of deep learning.md (prateeshreddy, Sep 23, 2020)
- 8162f01 Create Week 2 Quiz - Optimization algorithms.md (prateeshreddy, Sep 23, 2020)
- 3ee6521 Create Week 3 Quiz - Hyperparameter tuning, Batch Normalization, Prog… (prateeshreddy, Sep 23, 2020)
- 0d1274a Create Week 1 Quiz - The basics of ConvNets.md (prateeshreddy, Sep 23, 2020)
- c2b3499 Create Week 2 Quiz - Deep convolutional models.md (prateeshreddy, Sep 23, 2020)
- 3e39f19 Create Week 3 Quiz - Detection algorithms.md (prateeshreddy, Sep 23, 2020)
- e510a32 Create Week 4 Quiz - Face recognition & Neural style transfer.md (prateeshreddy, Sep 23, 2020)
- 35cbeba Create Week 1 Quiz - Recurrent Neural Networks.md (prateeshreddy, Sep 23, 2020)
- b870dba Create Week 3 Quiz - Sequence models & Attention mechanism.md (prateeshreddy, Sep 23, 2020)
- 0317121 Create Week 2 Quiz - Natural Language Processing & Word Embeddings.md (prateeshreddy, Sep 23, 2020)
- c50d4e0 Add files via upload (prateeshreddy, Sep 23, 2020)
- 1e85ec1 Add files via upload (prateeshreddy, Sep 23, 2020)
- f61f991 Add files via upload (prateeshreddy, Sep 23, 2020)
- 538c582 Update README.md (prateeshreddy, Sep 23, 2020)
- 6923245 Update README.md (prateeshreddy, Sep 23, 2020)
- 60436aa Update README.md (prateeshreddy, Sep 23, 2020)
- cd94479 Update README.md (prateeshreddy, Sep 23, 2020)
- b21483d Update README.md (prateeshreddy, Sep 23, 2020)
- c0e08ed Update README.md (prateeshreddy, Oct 6, 2020)

## Week 1 Quiz - The basics of ConvNets



1. What do you think applying this filter to a grayscale image will do?

- Detect horizontal edges

- > Detect vertical edges

- Detect 45 degree edges

- Detect image contrast

2. Suppose your input is a 300 by 300 color (RGB) image, and you are not using a convolutional network. If the first hidden layer has 100 neurons, each one fully connected to the input, how many parameters does this hidden layer have (including the bias parameters)?

- 9,000,001

- 9,000,100

- 27,000,001

- > 27,000,100

3. Suppose your input is a 300 by 300 color (RGB) image, and you use a convolutional layer with 100 filters that are each 5x5. How many parameters does this hidden layer have (including the bias parameters)?

- 2501

- 2600

- 7500

- > 7600
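
A quick way to sanity-check the parameter counts in questions 2 and 3 is to compute them directly; the snippet below is a small sketch using only the layer sizes stated in the questions:

```python
# Parameter counts for questions 2 and 3 (sanity check, not part of the quiz).

# Q2: fully connected layer on a flattened 300x300x3 image with 100 neurons.
n_in = 300 * 300 * 3          # 270,000 input features
fc_params = n_in * 100 + 100  # weights plus one bias per neuron
print(fc_params)              # 27,000,100

# Q3: convolutional layer with 100 filters of size 5x5 over 3 input channels.
# Each filter has 5*5*3 weights plus one bias term.
conv_params = (5 * 5 * 3 + 1) * 100
print(conv_params)            # 7,600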

4. You have an input volume that is 63x63x16, and convolve it with 32 filters that are each 7x7, using a stride of 2 and no padding. What is the output volume?

- 16x16x32

- 29x29x16

- > 29x29x32

- 16x16x16

5. You have an input volume that is 15x15x8, and pad it using “pad=2.” What is the dimension of the resulting volume (after padding)?

- 19x19x12

- 17x17x10

- > 19x19x8

- 17x17x8

6. You have an input volume that is 63x63x16, and convolve it with 32 filters that are each 7x7, and stride of 1. You want to use a “same” convolution. What is the padding?

- 1

- 2

- > 3

- 7

7. You have an input volume that is 32x32x16, and apply max pooling with a stride of 2 and a filter size of 2. What is the output volume?

- 15x15x16

- > 16x16x16

- 32x32x8

- 16x16x8
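
Questions 4 through 7 all follow from the same output-size formula, floor((n + 2p - f) / s) + 1. A minimal sketch applying it to each case:

```python
from math import floor

def conv_output_size(n, f, p, s):
    """Height/width after a convolution or pooling layer: floor((n + 2p - f)/s) + 1."""
    return floor((n + 2 * p - f) / s) + 1

# Q4: 63x63x16 input, 32 filters of 7x7, stride 2, no padding -> 29x29x32
print(conv_output_size(63, f=7, p=0, s=2))   # 29 (depth = number of filters = 32)

# Q5: padding a 15x15x8 volume with pad=2 only grows height/width -> 19x19x8
print(15 + 2 * 2)                            # 19 (channel count unchanged)

# Q6: a "same" convolution with f=7 and stride 1 needs p = (f - 1) / 2 -> 3
print((7 - 1) // 2)                          # 3

# Q7: max pooling with f=2, stride 2 on 32x32x16 -> 16x16x16
print(conv_output_size(32, f=2, p=0, s=2))   # 16 (pooling keeps the channel count)
```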

8. Because pooling layers do not have parameters, they do not affect the backpropagation (derivatives) calculation.

- True

- > False

9. In lecture we talked about “parameter sharing” as a benefit of using convolutional networks. Which of the following statements about parameter sharing in ConvNets are true? (Check all that apply.)

- It allows parameters learned for one task to be shared even for a different task (transfer learning).

- > It reduces the total number of parameters, thus reducing overfitting.

- It allows gradient descent to set many of the parameters to zero, thus making the connections sparse.

- > It allows a feature detector to be used in multiple locations throughout the whole input image/input volume.

10. In lecture we talked about “sparsity of connections” as a benefit of using convolutional layers. What does this mean?

- Each filter is connected to every channel in the previous layer.

- > Each activation in the next layer depends on only a small number of activations from the previous layer.

- Each layer in a convolutional network is connected only to two other layers.

- Regularization causes gradient descent to set many of the parameters to zero.
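
Questions 9 and 10 can be made concrete with a tiny convolution written by hand; the sketch below is illustrative only (the 5x5 input and 3x3 filter sizes are arbitrary):

```python
import numpy as np

# A single 3x3 filter slid over a 5x5 input: the same 9 weights are reused at
# every position (parameter sharing), and each output value depends on only
# the 9 input values under the filter (sparsity of connections).
x = np.random.randn(5, 5)
w = np.random.randn(3, 3)

out = np.zeros((3, 3))
for i in range(3):
    for j in range(3):
        out[i, j] = np.sum(x[i:i + 3, j:j + 3] * w)

print(w.size)     # 9 parameters total, versus 25 * 9 = 225 for a fully connected map
print(out.shape)  # (3, 3)
```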

## Week 2 Quiz - Deep convolutional models

1. Which of the following do you typically see as you move to deeper layers in a ConvNet?

- nH and nW increase, while nC decreases

- nH and nW decrease, while nC also decreases

- nH and nW increase, while nC also increases

- > nH and nW decrease, while nC increases

2. Which of the following do you typically see in a ConvNet? (Check all that apply.)

- > Multiple CONV layers followed by a POOL layer

- Multiple POOL layers followed by a CONV layer

- > FC layers in the last few layers

- FC layers in the first few layers

3. In order to be able to build very deep networks, we usually only use pooling layers to downsize the height/width of the activation volumes while convolutions are used with “valid” padding. Otherwise, we would downsize the input of the model too quickly.

- True

- > False

4. Training a deeper network (for example, adding additional layers to the network) allows the network to fit more complex functions and thus almost always results in lower training error. For this question, assume we’re referring to “plain” networks.

- True

- > False

5. The following equation captures the computation in a ResNet block. What goes into the two blanks?
```
a[l+2] = g( W[l+2] g( W[l+1] a[l] + b[l+1] ) + b[l+2] + _______ ) + _______
```
- > a[l] and 0, respectively

- 0 and z[l+1], respectively

- z[l] and a[l], respectively

- 0 and a[l], respectively
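
To see why the blanks are a[l] and 0, it can help to write the residual block's forward pass out in code. The following numpy sketch uses made-up layer sizes and ReLU for g:

```python
import numpy as np

def relu(z):
    return np.maximum(0, z)

# Made-up sizes for illustration only.
n = 4
a_l = np.random.randn(n, 1)
W1, b1 = np.random.randn(n, n), np.random.randn(n, 1)   # layer l+1
W2, b2 = np.random.randn(n, n), np.random.randn(n, 1)   # layer l+2

a_l1 = relu(W1 @ a_l + b1)   # a[l+1]
z_l2 = W2 @ a_l1 + b2        # z[l+2]
a_l2 = relu(z_l2 + a_l)      # the skip connection adds a[l] *inside* g(...)
# The second blank is 0: nothing is added outside the activation, so
# a[l+2] = g(z[l+2] + a[l]) + 0.
```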

6. Which of the following statements on Residual Networks are true? (Check all that apply.)

- > Using a skip-connection helps the gradient to backpropagate and thus helps you to train deeper networks

- A ResNet with L layers would have on the order of L^2 skip connections in total.

- The skip-connections compute a complex non-linear function of the input to pass to a deeper layer in the network.

- > The skip-connection makes it easy for the network to learn an identity mapping between the input and the output within the ResNet block.

7. Suppose you have an input volume of dimension 64x64x16. How many parameters would a single 1x1 convolutional filter have (including the bias)?

- 2

- 4097

- 1

- > 17

8. Suppose you have an input volume of dimension nH x nW x nC. Which of the following statements do you agree with? (Assume that “1x1 convolutional layer” below always uses a stride of 1 and no padding.)

- > You can use a 1x1 convolutional layer to reduce nC but not nH, nW.

- You can use a 1x1 convolutional layer to reduce nH, nW, and nC.

- > You can use a pooling layer to reduce nH, nW, but not nC.

- You can use a pooling layer to reduce nH, nW, and nC.
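
A quick arithmetic check for questions 7 and 8; this is a sketch with assumed example numbers (for instance, 8 filters for the 1x1 convolution and a 2x2 pooling window):

```python
# Q7: a single 1x1 filter over a 16-channel input has one weight per input
# channel plus one bias, regardless of the 64x64 spatial size.
n_c_prev = 16
params_1x1 = 1 * 1 * n_c_prev + 1
print(params_1x1)                 # 17

# Q8: a 1x1 convolution (stride 1, no padding) leaves height/width unchanged
# and sets the channel count to the number of filters, while pooling shrinks
# height/width but keeps the channel count.
n_h, n_w, n_c = 64, 64, 16
print((n_h, n_w, 8))              # after a 1x1 conv with 8 filters: 64x64x8
print((n_h // 2, n_w // 2, n_c))  # after 2x2 max pooling, stride 2: 32x32x16
```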

9. Which of the following statements on Inception Networks are true? (Check all that apply.)

- > A single inception block allows the network to use a combination of 1x1, 3x3, 5x5 convolutions and pooling.

- Making an inception network deeper (by stacking more inception blocks together) should not hurt training set performance.

- > Inception blocks usually use 1x1 convolutions to reduce the input data volume’s size before applying 3x3 and 5x5 convolutions.

- Inception networks incorporate a variety of network architectures (similar to dropout, which randomly chooses a network architecture on each step) and thus have a similar regularizing effect as dropout.

10. Which of the following are common reasons for using open-source implementations of ConvNets (both the model and/or weights)? Check all that apply.

- A model trained for one computer vision task can usually be used to perform data augmentation even for a different computer vision task.

- > It is a convenient way to get working an implementation of a complex ConvNet architecture.

- The same techniques for winning computer vision competitions, such as using multiple crops at test time, are widely used in practical deployments (or production system deployments) of ConvNets.

- > Parameters trained for one computer vision task are often useful as pretraining for other computer vision tasks.