Several suggestions on "Building a multilayer perceptron for classifying flowers in the Iris dataset" starting from page 395, Chapter 12 #118
baifanhorst started this conversation in General
(1) At the top of page 396, the dataset is split into a training set and a test set via the following code:
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=1./3, random_state=1)
I don't know why the argument 'stratify' is not specified. I suggest setting stratify=y, just as in the previous chapters. The modified code would be:
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=1./3, random_state=1, stratify=y)
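For concreteness, here is a minimal sketch of the stratified split, assuming X and y are loaded from sklearn's load_iris as in the chapter; the bincount check at the end just confirms that both splits keep the dataset's 50/50/50 class balance:

import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

iris = load_iris()
X, y = iris['data'], iris['target']

# stratify=y preserves the class proportions in both splits
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=1./3, random_state=1, stratify=y)

print('train class counts:', np.bincount(y_train))  # roughly equal per class
print('test class counts:', np.bincount(y_test))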
(2) On page 397, in the code for training the neural network, there is one line that computes the loss for a single batch:
loss = loss_fn(prediction, y_batch)
This line raises an error because the labels in y_batch are int32, while the loss function expects long (int64) targets. One needs to convert them, so the line should be changed as follows:
loss = loss_fn(prediction, y_batch.long())
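Here is a minimal, self-contained reproduction of the dtype problem (the tensor values are made up; only the dtypes matter). nn.CrossEntropyLoss expects class-index targets of dtype torch.int64 (long), so int32 labels trigger a RuntimeError:

import torch
import torch.nn as nn

loss_fn = nn.CrossEntropyLoss()
prediction = torch.randn(2, 3)  # logits for a batch of 2 samples, 3 classes
y_batch = torch.tensor([0, 2], dtype=torch.int32)

# loss = loss_fn(prediction, y_batch)       # RuntimeError: expected scalar type Long
loss = loss_fn(prediction, y_batch.long())  # works

An alternative is to convert the labels once when building the training tensors, e.g. torch.from_numpy(y_train).long(), so that every batch already has the right dtype.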
(3) Still in the training code on page 397, at the end of each epoch the accumulated loss and accuracy are divided by len(train_dl.dataset) as follows:
loss_hist[epoch] /= len(train_dl.dataset)
accuracy_hist[epoch] /= len(train_dl.dataset)
Note that len(train_dl.dataset) is the number of examples in the training set. The first line is correct, since the loss for each batch is multiplied by the batch size before being added to loss_hist[epoch]:
loss_hist[epoch] += loss.item() * y_batch.size(0)
However, the accuracy is accumulated as a per-batch average, so each batch contributes a single value between 0 and 1; it should therefore be divided by the number of batches, not the number of examples. The second line should be modified as
accuracy_hist[epoch] /= (len(train_dl.dataset) / batch_size)
Note that batch_size=2, so the original division is off by a factor of the batch size: without the modification, the reported accuracy approaches 0.5 rather than 1.
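To put the pieces together, here is a sketch of the training loop with both fixes applied. The loop structure and variable names follow the chapter, but the data and model below are hypothetical stand-ins just to make the snippet runnable, and I am assuming the chapter accumulates the per-batch mean accuracy:

import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

torch.manual_seed(1)
X_train_t = torch.randn(100, 4)  # stand-in: 100 samples, 4 features (Iris-like)
y_train_t = torch.randint(0, 3, (100,), dtype=torch.int32)
batch_size = 2
train_dl = DataLoader(TensorDataset(X_train_t, y_train_t),
                      batch_size=batch_size, shuffle=True)

model = nn.Sequential(nn.Linear(4, 16), nn.Sigmoid(), nn.Linear(16, 3))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)

num_epochs = 2
loss_hist = [0.0] * num_epochs
accuracy_hist = [0.0] * num_epochs

for epoch in range(num_epochs):
    for x_batch, y_batch in train_dl:
        prediction = model(x_batch)
        loss = loss_fn(prediction, y_batch.long())  # fix (2): long targets
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
        loss_hist[epoch] += loss.item() * y_batch.size(0)
        is_correct = (torch.argmax(prediction, dim=1) == y_batch).float()
        accuracy_hist[epoch] += is_correct.mean().item()  # per-batch mean
    loss_hist[epoch] /= len(train_dl.dataset)
    accuracy_hist[epoch] /= len(train_dl)  # fix (3): divide by the number of batches

Note that len(train_dl) is the number of batches, which equals len(train_dl.dataset)/batch_size when the batch size divides the dataset evenly (100 / 2 = 50 batches here).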