
Commit 7a8f5fa

Author: Dean Wampler (committed)
Refinements to work better on the platform
1 parent 8d1efe4 commit 7a8f5fa

5 files changed (+25, -6 lines)

ray-tune/01-Understanding-Hyperparameter-Tuning.ipynb

Lines changed: 6 additions & 3 deletions
@@ -169,7 +169,9 @@
 "\n",
 "The next cell runs Tune for this purpose. The comments explain what each argument does. We'll do four tries, one for each combination of the two possible values for the two hidden layers.\n",
 "\n",
-"> **Note:** `tune.run` will handle Ray initialization for us, if it isn't already initialized. To force Tune to throw an error instead, pass the argument `ray_auto_init=False`."
+"> **Note:** `tune.run` will handle Ray initialization for us, if it isn't already initialized. To force Tune to throw an error instead, pass the argument `ray_auto_init=False`.\n",
+"\n",
+"The next cell will take 5-6 minutes to run."
 ]
 },
 {
@@ -187,7 +189,8 @@
 " config={\n",
 " \"env\": \"CartPole-v1\", # Tune can associate this string with the environment.\n",
 " \"num_gpus\": 0, # If you have GPUs, go for it!\n",
-" \"num_workers\": 6, # Number of Ray workers to use (arbitrary choice).\n",
+" \"num_workers\": 3, # Number of Ray workers to use; use one LESS than\n",
+" # the number of cores you want to use (or omit this argument)!\n",
 " \"model\": { # The NN model we'll optimize.\n",
 " 'fcnet_hiddens': [ # \"Fully-connected network with N hidden layers\".\n",
 " tune.grid_search([20, 40]), # Try these two values for layer one.\n",
@@ -277,7 +280,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"We see from this table that the `[20,20]` hyperparameter set took the *most* training iterations, which is understandable as it is the least powerful network configuration. The corresponding number of timesteps was the longest. In contrast, `[40,40]` was the fastest to train with almost the same `episode_reward_mean` value.\n",
+"We see from this table that the `[20,20]` hyperparameter set took the *most* training iterations, which is understandable as it is the least powerful network configuration. The corresponding number of timesteps was the longest. In contrast, `[40,20]` and `[40,40]` are the fastest to train with almost the same `episode_reward_mean` value.\n",
 "\n",
 "Since all four combinations perform equally well, perhaps it's best to choose the largest network as it trains the fastest. If we need to train the neural network frequently, then fast training times might be most important. This also suggests that we should be sure the trial sizes we used are really best. In a real-world application, you would want to spend more time on HPO, trying a larger set of possible values."
 ]
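
Taken together, these hunks adjust a Tune grid search over an RLlib CartPole trainable. Below is a minimal sketch of the full call they modify, assuming "PPO" as the trainable and a reward-based stop condition (neither is shown in this diff):

```python
from ray import tune

# A hedged reconstruction of the tune.run call these hunks modify.
# "PPO" and the episode_reward_mean threshold are assumptions; the
# config keys mirror the diff above.
analysis = tune.run(
    "PPO",                                  # RLlib trainer to tune (assumed).
    stop={"episode_reward_mean": 400},      # Assumed stopping criterion.
    config={
        "env": "CartPole-v1",               # Tune maps this string to the gym environment.
        "num_gpus": 0,                      # CPU-only.
        "num_workers": 3,                   # One less than the cores you want to use.
        "model": {
            "fcnet_hiddens": [              # 2 x 2 = the four trials discussed above.
                tune.grid_search([20, 40]), # Sizes to try for hidden layer one.
                tune.grid_search([20, 40]), # Sizes to try for hidden layer two.
            ],
        },
    },
)
print(analysis.dataframe())                 # Per-trial metrics, e.g. episode_reward_mean.
```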

ray-tune/02-Ray-Tune-with-MNIST.ipynb

Lines changed: 1 addition & 1 deletion
@@ -349,7 +349,7 @@
 "\n",
 "We'll try both, starting with the functional API.\n",
 "\n",
-"We add a stopping criterion, `stop={\"training_iteration\": 10}`, so this will go quickly. Consider removing this condition if you don't mind waiting."
+"We add a stopping criterion, `stop={\"training_iteration\": 20}`, so this will go reasonably quickly, while still producing good results. Consider removing this condition if you don't mind waiting longer and you want optimal results."
 ]
 },
 {
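
The stopping criterion this hunk raises from 10 to 20 iterations is passed directly to `tune.run`. A minimal sketch, with a stand-in trainable in place of the notebook's MNIST training function; `tune.report` assumes a Ray 1.x-style functional API:

```python
from ray import tune

def train_stub(config):
    # Stand-in for the notebook's MNIST training function.
    accuracy = 0.0
    for _ in range(100):                   # Tune cuts each trial off at 20 anyway.
        accuracy += config["lr"]           # Placeholder for a real training step.
        tune.report(mean_accuracy=accuracy)

analysis = tune.run(
    train_stub,
    stop={"training_iteration": 20},       # Each trial ends after 20 reported iterations.
    config={"lr": tune.grid_search([0.001, 0.01])},  # Illustrative search space.
)
```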

ray-tune/03-Search-Algos-and-Schedulers.ipynb

Lines changed: 17 additions & 0 deletions
@@ -35,6 +35,23 @@
 "A limitation of search algorithms used by themselves is that they can't affect or stop training processes, for example early stopping of trials that are performing poorly. The schedulers can do this, so it's common to use a compatible search algorithm with a scheduler, as we'll show in the first example."
 ]
 },
+{
+ "cell_type": "code",
+ "execution_count": 1,
+ "metadata": {},
+ "outputs": [
+  {
+   "name": "stdout",
+   "output_type": "stream",
+   "text": [
+    "Python 3.7.6\n"
+   ]
+  }
+ ],
+ "source": [
+  "!python --version"
+ ]
+},
 {
 "cell_type": "markdown",
 "metadata": {},

ray-tune/04-Ray-SGD.ipynb

Lines changed: 0 additions & 1 deletion
@@ -120,7 +120,6 @@
 " data_creator=data_creator,\n",
 " optimizer_creator=optimizer_creator,\n",
 " loss_creator=torch.nn.MSELoss,\n",
-" num_workers=2,\n",
 " use_gpu=False,\n",
 " config={\"batch_size\": 64})"
 ]
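
This hunk drops the explicit `num_workers=2` from a `TorchTrainer` call built from creator functions. A sketch of the surrounding call with stand-in linear-regression creators (the real ones aren't shown in this diff), assuming the `ray.util.sgd` API of this Ray era:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from ray.util.sgd import TorchTrainer

def model_creator(config):
    return torch.nn.Linear(1, 1)                # Toy stand-in model.

def data_creator(config):
    x = torch.randn(1000, 1)
    y = 2 * x + 1                               # Synthetic linear data.
    return DataLoader(TensorDataset(x, y), batch_size=config["batch_size"])

def optimizer_creator(model, config):
    return torch.optim.SGD(model.parameters(), lr=1e-2)

trainer = TorchTrainer(
    model_creator=model_creator,
    data_creator=data_creator,
    optimizer_creator=optimizer_creator,
    loss_creator=torch.nn.MSELoss,
    use_gpu=False,                              # num_workers omitted, per this commit.
    config={"batch_size": 64})
stats = trainer.train()                         # Runs one epoch; returns metrics.
```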

ray-tune/solutions/01-Understanding-Hyperparameter-Tuning-Solutions.ipynb

Lines changed: 1 addition & 1 deletion
@@ -88,7 +88,7 @@
 " config={\n",
 " \"env\": \"CartPole-v1\",\n",
 " \"num_gpus\": 0,\n",
-" \"num_workers\": 6,\n",
+" \"num_workers\": 3,\n",
 " \"model\": {\n",
 " 'fcnet_hiddens': [\n",
 " tune.grid_search(sizes),\n",
