Commit 1f614a8

Tutorials: Add parentheses to function name references in markdown and comments.
1 parent 5c86c75 commit 1f614a8

19 files changed (+103 −103 lines)

tutorials/accelerated-python/notebooks/fundamentals/01__numpy_intro__ndarray_basics.ipynb

Lines changed: 2 additions & 2 deletions
@@ -166,7 +166,7 @@
 "\n",
 "Most operations, like adding two arrays together, return a **Copy**, which requires allocating a new array, which can negatively impact performance.\n",
 "\n",
-"Some operations, like transposing or `reshape` often return a **View** instead of a **Copy**. A View only changes the metadata (`shape` and `strides`) without duplicating the physical data, making these operations nearly instantaneous.\n",
+"Some operations, like transposing or `reshape()`, often return a **View** instead of a **Copy**. A View only changes the metadata (`shape` and `strides`) without duplicating the physical data, making these operations nearly instantaneous.\n",
 "\n",
 "Most Copy operations accept an `out` parameter that takes an array; if it is provided, the result is written to that array instead of allocating a new one. For example, `A + B` or `np.add(A, B)` will return a new array with the result, but `np.add(A, B, out=A)` will place the result in `A` without an allocation.\n",
 "\n",
@@ -175,7 +175,7 @@
 "**Quick Docs**\n",
 "- `np.linspace(start, stop, num)`: Returns `num` evenly spaced samples, calculated over the interval $[\\text{start}, \\text{stop}]$.\n",
 "- `np.random.default_rng().random(size)`: Returns random floats in $[0.0, 1.0)$. `size` can be a tuple.\n",
-"- `arr.sort`: Sorts an array in-place (modifies the original data). Use `np.sort(arr)` to return a sorted copy.\n",
+"- `arr.sort()`: Sorts an array in-place (modifies the original data). Use `np.sort(arr)` to return a sorted copy.\n",
 "- `arr.reshape(new_shape)`: Returns a View with a new shape. One dimension can be -1, instructing NumPy to calculate the size automatically.\n",
 "- `np.resize(arr, new_shape)`: Returns a new array with the specified shape. If the new shape is larger, it fills the new elements by repeating the original array.\n"
 ]
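Aside: the View/Copy and in-place behavior documented in this notebook cell can be verified directly with plain NumPy. A minimal sketch (all variable names are local to the example):

```python
import numpy as np

a = np.arange(6)

# reshape() returns a View: same physical data, new metadata
v = a.reshape(2, 3)
assert np.shares_memory(a, v)
v[0, 0] = 99
assert a[0] == 99            # the change is visible through `a`

# `out=` writes the result into an existing array, skipping the allocation
b = np.ones(6, dtype=a.dtype)
np.add(a, b, out=a)          # like a = a + b, but reuses a's storage

# arr.sort() is in-place; np.sort(arr) returns a sorted copy
x = np.array([3, 1, 2])
s = np.sort(x)
assert x.tolist() == [3, 1, 2] and s.tolist() == [1, 2, 3]
x.sort()
assert x.tolist() == [1, 2, 3]
```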

tutorials/accelerated-python/notebooks/fundamentals/03__numpy_to_cupy__ndarray_basics.ipynb

Lines changed: 12 additions & 12 deletions
@@ -60,7 +60,7 @@
 "\n",
 "Let's compare the performance of creating a large 3D array (approx. 100 MB in size) on the CPU versus the GPU.\n",
 "\n",
-"We will use `np.ones` for the CPU and `cp.ones` for the GPU.\n"
+"We will use `np.ones()` for the CPU and `cp.ones()` for the GPU.\n"
 ]
 },
 {
@@ -92,11 +92,11 @@
 "source": [
 "We can see here that creating this array on the GPU is much faster than doing so on the CPU!\n",
 "\n",
-"**About `cupyx.profiler.benchmark`:**\n",
+"**About `cupyx.profiler.benchmark()`:**\n",
 "\n",
-"We use CuPy's built-in `benchmark` utility for timing GPU operations. This is important because GPU operations are **asynchronous** - when you call a CuPy function, the CPU places a task in the GPU's \"to-do list\" (stream) and immediately moves on without waiting.\n",
+"We use CuPy's built-in `benchmark()` utility for timing GPU operations. This is important because GPU operations are **asynchronous** - when you call a CuPy function, the CPU places a task in the GPU's \"to-do list\" (stream) and immediately moves on without waiting.\n",
 "\n",
-"The `benchmark` function handles all the complexity of proper GPU timing for us:\n",
+"The `benchmark()` function handles all the complexity of proper GPU timing for us:\n",
 "- It automatically synchronizes GPU streams to get accurate measurements.\n",
 "- It runs warm-up iterations to avoid cold-start overhead.\n",
 "- It reports both CPU wall-clock times (`cpu_times`) and GPU kernel times (`gpu_times`). We use `cpu_times` for all comparisons because it measures end-to-end wall-clock time, giving a fair apples-to-apples comparison between CPU and GPU code.\n",
@@ -286,13 +286,13 @@
 "\n",
 "A key feature of CuPy is that many **NumPy functions work on CuPy arrays without changing your code**.\n",
 "\n",
-"When you pass a CuPy GPU array (`x_gpu`) into a NumPy function that supports the `__array_function__` protocol (e.g., `np.linalg.svd`), NumPy detects the CuPy input and **delegates the operation to CuPy’s own implementation**, which runs on the GPU.\n",
+"When you pass a CuPy GPU array (`x_gpu`) into a NumPy function that supports the `__array_function__` protocol (e.g., `np.linalg.svd()`), NumPy detects the CuPy input and **delegates the operation to CuPy’s own implementation**, which runs on the GPU.\n",
 "\n",
 "This allows you to write code using standard `np.*` syntax and have it run on either CPU or GPU seamlessly - **as long as CuPy implements an override for that function.**\n",
 "\n",
 "One common source of hidden performance penalties is **implicit transfers between CPU and GPU**. In some cases, CuPy guards against this: for example, when NumPy tries to convert a `cupy.ndarray` into a `numpy.ndarray` via the `__array__` protocol (e.g. `np.asarray(gpu_array)`), CuPy raises a `TypeError` instead of silently copying data to the host. \n",
 "\n",
-"However, CuPy **does** perform implicit GPU → CPU transfers in other cases, such as printing a GPU array, converting to a Python scalar (e.g. `float`, `.item`), or evaluating a GPU scalar in a boolean context. We will explore these implicit transfers in a later notebook."
+"However, CuPy **does** perform implicit GPU → CPU transfers in other cases, such as printing a GPU array, converting to a Python scalar (e.g. `float`, `.item()`), or evaluating a GPU scalar in a boolean context. We will explore these implicit transfers in a later notebook."
 ]
 },
 {
@@ -369,7 +369,7 @@
 "2. Change the setup line to `xp = cp` (GPU Mode). Run it again.\n",
 "3. Observe how the exact same logic runs significantly faster on the GPU with CuPy while retaining the implementation properties of NumPy.\n",
 "\n",
-"Note: We use `cupyx.profiler.benchmark` for timing, which automatically handles GPU synchronization."
+"Note: We use `cupyx.profiler.benchmark()` for timing, which automatically handles GPU synchronization."
 ]
 },
 {
@@ -421,7 +421,7 @@
 "id": "077b7589",
 "metadata": {},
 "source": [
-"**TODO: When working with CuPy arrays, try changing `xp.testing.assert_allclose` to `np.testing.assert_allclose`. What happens and why?**"
+"**TODO: When working with CuPy arrays, try changing `xp.testing.assert_allclose()` to `np.testing.assert_allclose()`. What happens and why?**"
 ]
 },
 {
@@ -436,7 +436,7 @@
 "\n",
 "**TODO:** \n",
 "1) **Generate Data:** Create a NumPy array (`y_cpu`) and a CuPy array (`y_gpu`) representing $\\sin(x)$ from $0$ to $2\\pi$ with `50,000,000` points.\n",
-"2) **Benchmark CPU and GPU:** Use `benchmark` from `cupyx.profiler` to measure both `np.sort` and `cp.sort`."
+"2) **Benchmark CPU and GPU:** Use `benchmark()` from `cupyx.profiler` to measure both `np.sort()` and `cp.sort()`."
 ]
 },
 {
@@ -462,14 +462,14 @@
 "# Step 2.) Benchmark NumPy (CPU)\n",
 "print(\"Benchmarking NumPy Sort (this may take a few seconds)...\")\n",
 "# TODO: Use cpx.profiler.benchmark(function, (args,), n_repeat=5, n_warmup=1)\n",
-"# Hint: Pass the function `np.sort` and the argument `(y_cpu,)`\n",
+"# Hint: Pass the function `np.sort()` and the argument `(y_cpu,)`\n",
 "# Note: The comma in (y_cpu,) is required to make it a tuple!\n",
 "\n",
 "\n",
 "# Step 3.) Benchmark CuPy (GPU)\n",
 "print(\"Benchmarking CuPy Sort...\")\n",
 "# TODO: Use cpx.profiler.benchmark(function, (args,), n_repeat=5, n_warmup=1)\n",
-"# Hint: Pass the function `cp.sort` and the argument `(y_gpu,)`\n",
+"# Hint: Pass the function `cp.sort()` and the argument `(y_gpu,)`\n",
 "# Note: The comma in (y_gpu,) is required to make it a tuple!"
 ]
 },
@@ -480,7 +480,7 @@
 "id": "qnAvEk5QFAA8"
 },
 "source": [
-"**EXTRA CREDIT: Benchmark with different array sizes and find the size at which CuPy and NumPy take the same amount of time. Try to extract the timing data from `cupyx.profiler.benchmark`'s return value and customize how the output is displayed. You could even make a graph.**"
+"**EXTRA CREDIT: Benchmark with different array sizes and find the size at which CuPy and NumPy take the same amount of time. Try to extract the timing data from `cupyx.profiler.benchmark()`'s return value and customize how the output is displayed. You could even make a graph.**"
 ]
 },
 {
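Aside: the `xp = np` / `xp = cp` switch this notebook drills can be sketched as below. The sketch runs as-is on a CPU with NumPy and falls back gracefully when CuPy is absent; `normalized_sin` is an illustrative name, not a function from the notebook:

```python
import numpy as np

try:
    import cupy as cp        # only available on a CUDA-capable machine
    xp = cp                  # GPU Mode
except ImportError:
    xp = np                  # CPU Mode: identical code path

def normalized_sin(n):
    """Compute sin(x) on [0, 2*pi] and scale to unit peak, using `xp` only."""
    x = xp.linspace(0.0, 2.0 * xp.pi, n)
    y = xp.sin(x)
    return y / xp.abs(y).max()

y = normalized_sin(1_000)
# float() works on both numpy and cupy scalars (for CuPy, this is one of
# the implicit GPU -> CPU transfers the notebook warns about)
assert float(xp.abs(y).max()) == 1.0
```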

tutorials/accelerated-python/notebooks/fundamentals/04__numpy_to_cupy__svd_reconstruction.ipynb

Lines changed: 5 additions & 5 deletions
@@ -14,8 +14,8 @@
 "**TODO: Port this code to CuPy. Here's what you'll have to do:**\n",
 "\n",
 "- **Change `import numpy as xp` to `import cupy as xp`.**\n",
-"- **NumPy arrays are converted to CuPy arrays using `xp.asarray`. You'll see errors like `only supports cupy.ndarray` if you forget to do this.**\n",
-"- **CuPy arrays are converted back to NumPy arrays (for Matplotlib) using `xp.asnumpy`.**\n",
+"- **NumPy arrays are converted to CuPy arrays using `xp.asarray()`. You'll see errors like `only supports cupy.ndarray` if you forget to do this.**\n",
+"- **CuPy arrays are converted back to NumPy arrays (for Matplotlib) using `xp.asnumpy()`.**\n",
 "\n",
 "First, we need to import our modules:"
 ]
@@ -323,7 +323,7 @@
 "\n",
 "Imagine you're measuring how long it takes to ship a package to someone, but you only time how long it takes for you to drop it off at the post office, not how long it takes for them to receive it and send you a thank you.\n",
 "\n",
-"Common Pythonic benchmarking tools like `%timeit` are not GPU aware, so it's easy to measure incorrectly with them. We can only use them when we know the code we're benchmarking will perform the proper synchronization. It's better to use something like [`cupyx.profiler.benchmark`](https://docs.cupy.dev/en/stable/reference/generated/cupyx.profiler.benchmark.html#cupyx.profiler.benchmark).\n",
+"Common Pythonic benchmarking tools like `%timeit` are not GPU aware, so it's easy to measure incorrectly with them. We can only use them when we know the code we're benchmarking will perform the proper synchronization. It's better to use something like [`cupyx.profiler.benchmark()`](https://docs.cupy.dev/en/stable/reference/generated/cupyx.profiler.benchmark.html#cupyx.profiler.benchmark).\n",
 "\n",
 "First, we need a NumPy (CPU) and CuPy (GPU) copy of our image:"
 ]
@@ -380,7 +380,7 @@
 "id": "TE6qPht1xAkm"
 },
 "source": [
-"Depending on your hardware, the CPU and GPU might be close to the same speed, or the GPU might even be slower! This is because the image is not big enough to fully utilize the GPU. We can simulate a larger image by tiling the image using `np.tile`. This duplicates the image both along axis 0 and axis 1:"
+"Depending on your hardware, the CPU and GPU might be close to the same speed, or the GPU might even be slower! This is because the image is not big enough to fully utilize the GPU. We can simulate a larger image by tiling the image using `np.tile()`. This duplicates the image both along axis 0 and axis 1:"
 ]
 },
 {
@@ -435,7 +435,7 @@
 "id": "5nlgOqkBxAkw"
 },
 "source": [
-"**TODO: Experiment with different sizes of image by changing the `np.tile` arguments. When is the GPU faster?**"
+"**TODO: Experiment with different sizes of image by changing the `np.tile()` arguments. When is the GPU faster?**"
 ]
 }
],
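Aside: the `np.tile()` trick this notebook uses to simulate a larger image is easy to sanity-check on a small array, no GPU required (the `(2, 3)` tiling factors are just an example):

```python
import numpy as np

img = np.arange(12).reshape(3, 4)   # stand-in for a small grayscale image

# Repeat 2x along axis 0 and 3x along axis 1 -> a (6, 12) "larger image"
big = np.tile(img, (2, 3))
assert big.shape == (3 * 2, 4 * 3)

# Each block of the grid is a copy of the original image
assert np.array_equal(big[:3, :4], img)    # top-left block
assert np.array_equal(big[3:, 8:], img)    # bottom-right block
```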

tutorials/accelerated-python/notebooks/fundamentals/05__memory_spaces__power_iteration.ipynb

Lines changed: 9 additions & 9 deletions
@@ -37,7 +37,7 @@
 "\n",
 "CuPy silently transfers and synchronizes when you:\n",
 "1. **Print** a GPU array (`print(gpu_array)`).\n",
-"2. **Convert** to a Python scalar (`float(gpu_array)` or `.item`).\n",
+"2. **Convert** to a Python scalar (`float(gpu_array)` or `.item()`).\n",
 "3. **Evaluate** a GPU scalar in a boolean context (`if gpu_scalar > 0:`).\n",
 "\n",
 "#### The Task\n",
@@ -221,9 +221,9 @@
 "Now it's your turn! Your task is to convert the `estimate_host` function to run on the GPU using CuPy.\n",
 "\n",
 "**Remember the rules of Memory Spaces:**\n",
-"1. **Transfer:** Move `A_host` from CPU to GPU using `cp.asarray`.\n",
+"1. **Transfer:** Move `A_host` from CPU to GPU using `cp.asarray()`.\n",
 "2. **Compute:** Perform math using `cp` functions on the GPU.\n",
-"3. **Retrieve:** Move result back to CPU using `cp.asnumpy`.\n",
+"3. **Retrieve:** Move result back to CPU using `cp.asnumpy()`.\n",
 "\n",
 "**Hint:** CuPy tries to replicate the NumPy API. In many cases, you can simply change `np.` to `cp.`. However, CuPy operations *must* run on data present in Device Memory.\n",
 "\n",
@@ -303,10 +303,10 @@
 "Your task is to convert the `generate_host` function to generate the matrix directly on the GPU using CuPy's random functions.\n",
 "\n",
 "**Hints:**\n",
-"- Use `cp.random.seed` instead of `np.random.seed`\n",
-"- Use `cp.random.random` instead of `np.random.random`\n",
-"- Use `cp.random.permutation` instead of `np.random.permutation`\n",
-"- Use `cp.concatenate`, `cp.array`, `cp.diag`, and `cp.linalg.inv`\n",
+"- Use `cp.random.seed()` instead of `np.random.seed()`\n",
+"- Use `cp.random.random()` instead of `np.random.random()`\n",
+"- Use `cp.random.permutation()` instead of `np.random.permutation()`\n",
+"- Use `cp.concatenate()`, `cp.array()`, `cp.diag()`, and `cp.linalg.inv()`\n",
 "\n",
 "**The code below starts as a copy of the CPU implementation. Modify it to generate data directly on the GPU:**\n"
 ]
@@ -392,7 +392,7 @@
 "source": [
 "### 5. Verification and Benchmarking\n",
 "\n",
-"Finally, let's verify our accuracy against a reference implementation (`numpy.linalg.eigvals`) and benchmark the speedup.\n"
+"Finally, let's verify our accuracy against a reference implementation (`numpy.linalg.eigvals()`) and benchmark the speedup.\n"
 ]
 },
 {
@@ -433,7 +433,7 @@
 "id": "f092af24",
 "metadata": {},
 "source": [
-"#### Benchmarking with `cupyx.profiler.benchmark`\n",
+"#### Benchmarking with `cupyx.profiler.benchmark()`\n",
 "\n",
 "We use CuPy's built-in benchmarking utility for accurate GPU timing. This handles warmup and synchronization automatically.\n",
 "\n",
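Aside: the Transfer → Compute → Retrieve pattern this notebook teaches can be sketched with a NumPy fallback so it runs even without a GPU. The power-iteration body is a simplified illustration, not the notebook's exact solver, and `estimate_dominant_eigenvalue` is an illustrative name:

```python
import numpy as np

try:
    import cupy as cp            # GPU path, if available
except ImportError:
    cp = None

def estimate_dominant_eigenvalue(A_host, iters=100):
    xp = cp if cp is not None else np
    A = xp.asarray(A_host)       # 1. Transfer: host -> device (no-op for NumPy)
    v = xp.ones(A.shape[0])
    for _ in range(iters):       # 2. Compute: everything stays on the device
        v = A @ v
        v = v / xp.linalg.norm(v)
    lam = v @ A @ v              # Rayleigh quotient of the converged vector
    # 3. Retrieve: bring the scalar result back to the host
    return float(lam) if cp is None else float(cp.asnumpy(lam))

A = np.diag([3.0, 1.0, 0.5])
assert abs(estimate_dominant_eigenvalue(A) - 3.0) < 1e-6
```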

tutorials/accelerated-python/notebooks/fundamentals/06__asynchrony__power_iteration.ipynb

Lines changed: 7 additions & 7 deletions
@@ -222,7 +222,7 @@
 "\n",
 "There are two ways that we can filter and annotate what we see in Nsight Systems.\n",
 "\n",
-"The first is to limit when we start and stop profiling in the program. In Python, we can do this with `cupyx.profiler.profile`, which gives us a Python context manager. Any CUDA code used within the scope will be included in the profile.\n",
+"The first is to limit when we start and stop profiling in the program. In Python, we can do this with `cupyx.profiler.profile()`, which gives us a Python context manager. Any CUDA code used within the scope will be included in the profile.\n",
 "\n",
 "```\n",
 "not_in_the_profile()\n",
@@ -233,7 +233,7 @@
 "\n",
 "For this to work, we have to pass `--capture-range=cudaProfilerApi --capture-range-end=stop` as flags to `nsys`.\n",
 "\n",
-"We can also annotate specific regions of our code, which will show up in the profiler. We can even add categories, domains, and colors to these regions, and they can be nested. To add these annotations, we use `nvtx.annotate`, another Python context manager, this time from a library called NVTX.\n",
+"We can also annotate specific regions of our code, which will show up in the profiler. We can even add categories, domains, and colors to these regions, and they can be nested. To add these annotations, we use `nvtx.annotate()`, another Python context manager, this time from a library called NVTX.\n",
 "\n",
 "```\n",
 "with nvtx.annotate(\"Loop\"):\n",
@@ -244,8 +244,8 @@
 "\n",
 "**TODO:** Go back to the earlier cells and improve the profile results by adding:\n",
 "\n",
-"- `nvtx.annotate` regions. Remember, you can nest them.\n",
-"- A `cpx.profiler.profile` around the `start =`/`stop =` lines that run the solver.\n",
+"- `nvtx.annotate()` regions. Remember, you can nest them.\n",
+"- A `cpx.profiler.profile()` around the `start =`/`stop =` lines that run the solver.\n",
 "- `--capture-range=cudaProfilerApi --capture-range-end=stop` to the `nsys` flags.\n",
 "\n",
 "Then, capture another profile and see if you can identify how we can improve the code. Specifically, think about how we could add more asynchrony."
@@ -262,10 +262,10 @@
 "\n",
 "Remember what we've learned about streams and how to use them with CuPy:\n",
 "\n",
-"- By default, all CuPy operations within a single thread run on the same stream. You can access this stream with `cp.cuda.get_current_stream`.\n",
+"- By default, all CuPy operations within a single thread run on the same stream. You can access this stream with `cp.cuda.get_current_stream()`.\n",
 "- You can create a new stream with `cp.cuda.Stream(non_blocking=True)`. Use `with` statements to use the stream for all CuPy operations within a block.\n",
-"- You can record an event on a stream by calling `.record` on it.\n",
-"- You can synchronize on an event (or an entire stream) by calling `.synchronize` on it.\n",
+"- You can record an event on a stream by calling `.record()` on it.\n",
+"- You can synchronize on an event (or an entire stream) by calling `.synchronize()` on it.\n",
 "- Memory transfers will block by default. You can launch them asynchronously with `cp.asarray(..., blocking=False)` (for host to device transfers) and `cp.asnumpy(..., blocking=False)` (for device to host transfers).\n",
 "\n",
 "**TODO:** Copy your NVTX annotated code from before into the cell below (make sure not to overwrite the %%writefile), and modify the code to improve performance by adding asynchrony."
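Aside: the stream/event vocabulary in the bullets above can be sketched as follows. This is an illustrative sketch assuming a CUDA-capable machine with CuPy installed (the function and stream names are hypothetical, not the notebook's solution):

```python
import numpy as np

try:
    import cupy as cp        # requires a GPU; sketch degrades to an error otherwise
except ImportError:
    cp = None

def overlapped_transfer_and_compute(a_host, b_host):
    """Queue independent transfer+compute work on two streams, then sync."""
    if cp is None:
        raise RuntimeError("CuPy (and a GPU) is required for this sketch")
    s1 = cp.cuda.Stream(non_blocking=True)
    s2 = cp.cuda.Stream(non_blocking=True)
    with s1:
        a = cp.asarray(a_host, blocking=False)   # async H2D copy on s1
        a = cp.sin(a)                            # compute queued behind the copy
    with s2:
        b = cp.asarray(b_host, blocking=False)   # can overlap with s1's work
        b = cp.cos(b)
    done = s1.record()       # event marking the end of s1's queued work
    s2.synchronize()         # wait for everything queued on s2
    done.synchronize()       # wait for the event recorded on s1
    return cp.asnumpy(a + b) # blocking D2H copy of the combined result
```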

tutorials/accelerated-python/notebooks/fundamentals/07__cuda_core__devices_streams_and_memory.ipynb

Lines changed: 2 additions & 2 deletions
@@ -211,7 +211,7 @@
 "source": [
 "**What this does**:\n",
 "1. `Device(0)`: Creates a Device object representing the first GPU (GPU numbering starts at 0)\n",
-"1. `device.set_current`: Tells CUDA \"I want to use this GPU for my operations\"\n",
+"1. `device.set_current()`: Tells CUDA \"I want to use this GPU for my operations\"\n",
 "\n",
 "If you have multiple GPUs, CUDA needs to know which one you want to use, which is why we need `set_current`"
 ]
@@ -345,7 +345,7 @@
 "source": [
 "**What this does:**\n",
 "1. Calculate size: We figure out how many bytes we need (1000 floats × 4 bytes each)\n",
-"2. Allocate memory: `device.allocate` reserves space on the GPU\n",
+"2. Allocate memory: `device.allocate()` reserves space on the GPU\n",
 "3. Get a buffer: The returned device_buffer is like a \"handle\" to our GPU memory\n",
 "\n",
 "**Important**: Just like with regular Python programming, allocating memory doesn't put any meaningful data there yet. It's just reserved empty space."
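Aside: the byte-size arithmetic in step 1 (1000 floats × 4 bytes) can be made explicit with NumPy's dtype machinery instead of hard-coding `4`. A small illustrative helper, not part of the notebook:

```python
import numpy as np

def nbytes_for(count, dtype):
    """Bytes needed for `count` elements of `dtype` (the size you'd allocate)."""
    return count * np.dtype(dtype).itemsize

assert nbytes_for(1000, np.float32) == 4000   # 1000 floats x 4 bytes each
assert nbytes_for(1000, np.float64) == 8000   # doubles are 8 bytes each
```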
