Commit 9607a66

Author: M.Notter
Commit message: Updates spelling
Parent: 6dbc8eb

File tree: 4 files changed, +87 / -9 lines


_posts/2023-10-23-02_tensorflow_simple.md (1 addition, 1 deletion)

````diff
@@ -179,7 +179,7 @@ Before we can train the model we need to provide a few additional information:
 set with `validation_data`.
 
 Finding the right parameters for any of that, as well as establishing the right model architecture, is the
-black arts of any deep learning practisioners. For this example, let's just go with some proven default
+black arts of any deep learning practitioners. For this example, let's just go with some proven default
 parameters.
 
 ```python
````

_posts/2023-10-23-03_scikit_advanced.md (5 additions, 5 deletions)

````diff
@@ -37,7 +37,7 @@ The California Housing dataset contains information about houses in California d
 - Features on different scales
 - Complex relationships between variables
 
-The dataset iself contains information about the houses, including features like total area, lot shape, neighborhood information, overall quality, year built, etc. And the target feature that we would like to predict is the `SalePrice`.
+The dataset itself contains information about the houses, including features like total area, lot shape, neighborhood information, overall quality, year built, etc. And the target feature that we would like to predict is the `SalePrice`.
 
 Let's load the data and take a look:
 
@@ -138,7 +138,7 @@ If we look closer at the feature matrix X, we can see that of those 79 features,
 are of type 'object' (i.e. categorical features), and that some entries are missing. Plus, the target feature
 `SalePrice` has a right skewed value distribution.
 
-Therefore, if possible, our pipeline should be able to handle all of this picularities. Even better, let's try
+Therefore, if possible, our pipeline should be able to handle all of this peculiarities. Even better, let's try
 to setup a pipeline that helps us to find the optimal way how to preprocess this dataset.
 
 ## 2. Feature Analysis
@@ -464,7 +464,7 @@ Prediction accuracy on test data: {score_te*100:.2f}%"
 Prediction accuracy on test data: 8.38%
 
 Great, the score seems reasonably good! But now that we know better which preprocessing routine seems to be the
-best (thanks to `RandomizedSearchCV`), let's go ahead and furhter fine-tune the ridge model.
+best (thanks to `RandomizedSearchCV`), let's go ahead and further fine-tune the ridge model.
 
 ## 8. Fine tune best preprocessing pipeline
 
@@ -476,7 +476,7 @@ To further fine tune the best preprocessing pipeline, we can just load the 'best
 # Select best estimator
 best_estimator = random_search.best_estimator_
 
-# Specify new parmeter grid to explore
+# Specify new parameter grid to explore
 param_grid = {'regressor__ridge__alpha': np.logspace(-5, 5, 51)}
 ```
 
@@ -562,7 +562,7 @@ final_estimator = grid_search.set_params(
 _ = final_estimator.fit(X_tr, y_tr)
 ```
 
-Now that the model is ready and trained, we can go ahead and performe the feature importance investigation via
+Now that the model is ready and trained, we can go ahead and perform the feature importance investigation via
 permutation testing. To showcase one additional feature, let's actually perform this routine twice, once while
 focusing on the `r2` of the model, and once while focusing on the `neg_mean_absolute_percentage_error`.
````
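The two hunks above come from the post's fine-tuning and permutation-importance steps. As a minimal, self-contained sketch of that pattern, the following uses a synthetic dataset and a plain scaler-plus-ridge pipeline in place of the post's full preprocessing pipeline; the data, pipeline steps, and search settings are illustrative assumptions, and only the `np.logspace(-5, 5, 51)` alpha grid and the two scoring names come from the diff:

```python
# Illustrative sketch: grid-search a ridge alpha over np.logspace(-5, 5, 51),
# then run permutation importance twice, once per scoring metric.
# Synthetic data stands in for the post's housing dataset.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.inspection import permutation_importance
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_regression(n_samples=300, n_features=8, noise=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

pipe = Pipeline([("scaler", StandardScaler()), ("ridge", Ridge())])

# Same alpha grid as in the diff: 51 values from 1e-5 to 1e5
param_grid = {"ridge__alpha": np.logspace(-5, 5, 51)}
grid_search = GridSearchCV(pipe, param_grid, cv=5).fit(X_tr, y_tr)
final_estimator = grid_search.best_estimator_

# Permutation importance, once per scoring metric (as in the post)
results = {}
for scoring in ["r2", "neg_mean_absolute_percentage_error"]:
    results[scoring] = permutation_importance(
        final_estimator, X_te, y_te, scoring=scoring,
        n_repeats=5, random_state=0,
    )
```

Each `permutation_importance` result exposes `importances_mean`, one value per feature, so the two scorings can be compared side by side.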

_posts/2023-10-23-04_tensorflow_advanced.md (3 additions, 3 deletions)

````diff
@@ -89,7 +89,7 @@ y_te = df_te['Rings']
 
 Size of training and test set: (3342, 11) | (835, 11)
 
-An important step for any machine learning project is appropraite features scaling. Now, we could use something
+An important step for any machine learning project is appropriate features scaling. Now, we could use something
 like `scipy` or `scikit-learn` to do this task. But let's see how this can also be done directly with
 TensorFlow.
 
@@ -443,7 +443,7 @@ kernel_init = [
     'normal',
 ]
 kernel_regularizer = [None, 'l1', 'l2', 'l1_l2']
-batche_sizes = [32, 128]
+batch_sizes = [32, 128]
 ```
 
 Now, let's put all of this into a parameter grid.
@@ -459,7 +459,7 @@ param_grid = dict(
     optimizers=optimizers,
     kernel_init=kernel_init,
     kernel_regularizer=kernel_regularizer,
-    batch_size=batche_sizes,
+    batch_size=batch_sizes,
 )
 
 # Go through the parameter grid
````
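The last two hunks fix the `batche_sizes` to `batch_sizes` typo in a hyperparameter grid that the post then iterates over. A minimal sketch of that "collect everything in a dict, then go through every combination" pattern, keeping only the two candidate lists that appear in the diff and leaving out the TensorFlow training step; the `product`-based expansion is an assumption, not the post's actual loop:

```python
# Illustrative sketch: collect candidate hyperparameters in a dict,
# then enumerate every combination of them.
from itertools import product

kernel_regularizer = [None, "l1", "l2", "l1_l2"]
batch_sizes = [32, 128]  # name as corrected by this commit

param_grid = dict(
    kernel_regularizer=kernel_regularizer,
    batch_size=batch_sizes,
)

# Go through the parameter grid: one settings dict per combination
combinations = [
    dict(zip(param_grid, values)) for values in product(*param_grid.values())
]
print(len(combinations))  # 4 regularizers x 2 batch sizes = 8
```

In the post itself, each such settings dict would then parameterize one model build-and-fit run.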
