
Commit 1461a16

dependabot[bot], justinchuby, and github-advanced-security[bot] authored
Bump ruff from 0.5.4 to 0.9.1 (microsoft#23328)
Bumps [ruff](https://github.com/astral-sh/ruff) from 0.5.4 to 0.9.1.

**Release notes** (sourced from [ruff's releases](https://github.com/astral-sh/ruff/releases))

**0.9.1**

*Preview features*

- [`pycodestyle`] Run `too-many-newlines-at-end-of-file` on each cell in notebooks (`W391`) ([#15308](https://redirect.github.com/astral-sh/ruff/pull/15308))
- [`ruff`] Omit diagnostic for shadowed private function parameters in `used-dummy-variable` (`RUF052`) ([#15376](https://redirect.github.com/astral-sh/ruff/pull/15376))

*Rule changes*

- [`flake8-bugbear`] Improve `assert-raises-exception` message (`B017`) ([#15389](https://redirect.github.com/astral-sh/ruff/pull/15389))

*Formatter*

- Preserve trailing end-of-line comments for the last string literal in implicitly concatenated strings ([#15378](https://redirect.github.com/astral-sh/ruff/pull/15378))

*Server*

- Fix a bug where the server and client notebooks were out of sync after reordering cells ([#15398](https://redirect.github.com/astral-sh/ruff/pull/15398))

*Bug fixes*

- [`flake8-pie`] Correctly remove wrapping parentheses (`PIE800`) ([#15394](https://redirect.github.com/astral-sh/ruff/pull/15394))
- [`pyupgrade`] Handle comments and multiline expressions correctly (`UP037`) ([#15337](https://redirect.github.com/astral-sh/ruff/pull/15337))

*Contributors*: [@AntoineD](https://github.com/AntoineD), [@InSyncWithFoo](https://github.com/InSyncWithFoo), [@MichaReiser](https://github.com/MichaReiser), [@calumy](https://github.com/calumy), [@dcreager](https://github.com/dcreager), [@dhruvmanila](https://github.com/dhruvmanila), [@dylwil3](https://github.com/dylwil3), [@sharkdp](https://github.com/sharkdp), [@tjkuson](https://github.com/tjkuson)

*Install ruff 0.9.1*

Prebuilt binaries via shell script:

```sh
curl --proto '=https' --tlsv1.2 -LsSf https://github.com/astral-sh/ruff/releases/download/0.9.1/ruff-installer.sh | sh
```

Prebuilt binaries via powershell script:

```sh
powershell -ExecutionPolicy ByPass -c "irm https://github.com/astral-sh/ruff/releases/download/0.9.1/ruff-installer.ps1 | iex"
```

… (truncated)

**Changelog** (sourced from [ruff's CHANGELOG.md](https://github.com/astral-sh/ruff/blob/main/CHANGELOG.md); the 0.9.1 entry matches the release notes above)

**0.9.0**

Check out the [blog post](https://astral.sh/blog/ruff-v0.9.0) for a migration guide and overview of the changes!

*Breaking changes*

Ruff now formats your code according to the 2025 style guide. As a result, your code might now get formatted differently. See the formatter section for a detailed list of changes.

This release doesn't remove or remap any existing stable rules.

*Stabilization*

The following rules have been stabilized and are no longer in preview:

- [`stdlib-module-shadowing`](https://docs.astral.sh/ruff/rules/stdlib-module-shadowing/) (`A005`). This rule has also been renamed: previously, it was called `builtin-module-shadowing`.
- [`builtin-lambda-argument-shadowing`](https://docs.astral.sh/ruff/rules/builtin-lambda-argument-shadowing/) (`A006`)
- [`slice-to-remove-prefix-or-suffix`](https://docs.astral.sh/ruff/rules/slice-to-remove-prefix-or-suffix/) (`FURB188`)
- [`boolean-chained-comparison`](https://docs.astral.sh/ruff/rules/boolean-chained-comparison/) (`PLR1716`)
- [`decimal-from-float-literal`](https://docs.astral.sh/ruff/rules/decimal-from-float-literal/) (`RUF032`)
- [`post-init-default`](https://docs.astral.sh/ruff/rules/post-init-default/) (`RUF033`)
- [`useless-if-else`](https://docs.astral.sh/ruff/rules/useless-if-else/) (`RUF034`)

The following behaviors have been stabilized:

- [`pytest-parametrize-names-wrong-type`](https://docs.astral.sh/ruff/rules/pytest-parametrize-names-wrong-type/) (`PT006`): Detect [`pytest.parametrize`](https://docs.pytest.org/en/7.1.x/how-to/parametrize.html#parametrize) calls outside decorators and calls with keyword arguments.

… (truncated)

**Commits**

- [`12f86f3`](https://github.com/astral-sh/ruff/commit/12f86f39a4691e44b62c11dd4bc376a16e358f43) Ruff 0.9.1 ([#15407](https://redirect.github.com/astral-sh/ruff/issues/15407))
- [`2b28d56`](https://github.com/astral-sh/ruff/commit/2b28d566a4a891339a43a35c818f5b155c0b9edd) Associate a trailing end-of-line comment in a parenthesized implicit concaten…
- [`adca7bd`](https://github.com/astral-sh/ruff/commit/adca7bd95cf315ca14e34ab3eac6deb73e154f1d) Remove pygments pin ([#15404](https://redirect.github.com/astral-sh/ruff/issues/15404))
- [`6b98a26`](https://github.com/astral-sh/ruff/commit/6b98a26452ec1bde8b445c82c097d03c78213c1d) [red-knot] Support `assert_type` ([#15194](https://redirect.github.com/astral-sh/ruff/issues/15194))
- [`c874638`](https://github.com/astral-sh/ruff/commit/c87463842a6e19976b6f3401137b6932e4a7bb71) [red-knot] Move tuple-containing-Never tests to Markdown ([#15402](https://redirect.github.com/astral-sh/ruff/issues/15402))
- [`c364b58`](https://github.com/astral-sh/ruff/commit/c364b586f9177a22f4556f86e434f21dfaf82c38) [`flake8-pie`] Correctly remove wrapping parentheses (`PIE800`) ([#15394](https://redirect.github.com/astral-sh/ruff/issues/15394))
- [`73d424e`](https://github.com/astral-sh/ruff/commit/73d424ee5e6963d577e196d71c3b19c82e84e612) Fix outdated doc for handling the default file types with the pre-commit hook…
- [`6e9ff44`](https://github.com/astral-sh/ruff/commit/6e9ff445fd8559972b423370de20563a9c2db8d4) Insert the cells from the `start` position ([#15398](https://redirect.github.com/astral-sh/ruff/issues/15398))
- [`f2c3ddc`](https://github.com/astral-sh/ruff/commit/f2c3ddc5eaa2ce107a200e134be82fc36afce06b) [red-knot] Move intersection type tests to Markdown ([#15396](https://redirect.github.com/astral-sh/ruff/issues/15396))
- [`b861551`](https://github.com/astral-sh/ruff/commit/b861551b6ac928c25136d76151162f6fefc9cf71) Remove unnecessary backticks ([#15393](https://redirect.github.com/astral-sh/ruff/issues/15393))
- Additional commits viewable in the [compare view](https://github.com/astral-sh/ruff/compare/0.5.4...0.9.1)

[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=ruff&package-manager=pip&previous-version=0.5.4&new-version=0.9.1)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.

---

**Dependabot commands and options**

You can trigger Dependabot actions by commenting on this PR:

- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)

---

Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Justin Chu <[email protected]>
Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
1 parent 6a7ea5c commit 1461a16

35 files changed: +74 −79 lines changed

docs/python/examples/plot_train_convert_predict.py (+2 −2)

```diff
@@ -212,9 +212,9 @@ def sess_predict_proba_rf(x):
     rf.fit(X_train, y_train)
     initial_type = [("float_input", FloatTensorType([1, 4]))]
     onx = convert_sklearn(rf, initial_types=initial_type)
-    with open("rf_iris_%d.onnx" % n_trees, "wb") as f:
+    with open(f"rf_iris_{n_trees}.onnx", "wb") as f:
         f.write(onx.SerializeToString())
-    sess = rt.InferenceSession("rf_iris_%d.onnx" % n_trees, providers=rt.get_available_providers())
+    sess = rt.InferenceSession(f"rf_iris_{n_trees}.onnx", providers=rt.get_available_providers())
 
     def sess_predict_proba_loop(x):
         return sess.run([prob_name], {input_name: x.astype(numpy.float32)})[0]  # noqa: B023
```
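Most of the string changes in this commit are the same modernization: pyupgrade's printf-string-formatting rule (`UP031`), which newer ruff flags and can usually fix automatically. A minimal before/after sketch, with a hypothetical `n_trees` value:

```python
n_trees = 10  # hypothetical value for illustration

old_name = "rf_iris_%d.onnx" % n_trees  # printf-style, flagged by UP031
new_name = f"rf_iris_{n_trees}.onnx"    # the f-string form the fix produces

assert old_name == new_name == "rf_iris_10.onnx"
```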

onnxruntime/python/tools/quantization/calibrate.py (+1 −1)

```diff
@@ -161,7 +161,7 @@ class CalibrationMethod(Enum):
 class CalibrationDataReader(metaclass=abc.ABCMeta):
     @classmethod
     def __subclasshook__(cls, subclass):
-        return hasattr(subclass, "get_next") and callable(subclass.get_next) or NotImplemented
+        return (hasattr(subclass, "get_next") and callable(subclass.get_next)) or NotImplemented
 
     @abc.abstractmethod
     def get_next(self) -> dict:
```
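For context on the added parentheses: `and` binds tighter than `or` in Python, so the new grouping is exactly how the old expression already parsed; the change only makes the intent explicit. A runnable sketch:

```python
class _Reader:  # minimal stand-in for a CalibrationDataReader duck type
    def get_next(self):
        return {}

# Both spellings evaluate as (A and B) or NotImplemented.
old = hasattr(_Reader, "get_next") and callable(_Reader.get_next) or NotImplemented
new = (hasattr(_Reader, "get_next") and callable(_Reader.get_next)) or NotImplemented
assert old is True and new is True
```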

onnxruntime/python/tools/quantization/quant_utils.py (+1 −5)

```diff
@@ -907,11 +907,7 @@ def smooth_distribution(p, eps=0.0001):
         # raise ValueError('The discrete probability distribution is malformed. All entries are 0.')
         return None
     eps1 = eps * float(n_zeros) / float(n_nonzeros)
-    assert eps1 < 1.0, "n_zeros=%d, n_nonzeros=%d, eps1=%f" % (
-        n_zeros,
-        n_nonzeros,
-        eps1,
-    )
+    assert eps1 < 1.0, f"n_zeros={n_zeros}, n_nonzeros={n_nonzeros}, eps1={eps1}"
 
     hist = p.astype(numpy.float32)
     hist += eps * is_zeros + (-eps1) * is_nonzeros
```

onnxruntime/python/tools/tensorrt/perf/build/ort_build_latest.py (+2 −3)

```diff
@@ -44,9 +44,8 @@ def main():
     cmake_tar = "cmake-3.28.3-linux-x86_64.tar.gz"
     if not os.path.exists(cmake_tar):
         subprocess.run(["wget", "-c", "https://cmake.org/files/v3.28/" + cmake_tar], check=True)
-    tar = tarfile.open(cmake_tar)
-    tar.extractall()
-    tar.close()
+    with tarfile.open(cmake_tar) as tar:
+        tar.extractall()
 
     os.environ["PATH"] = os.path.join(os.path.abspath("cmake-3.28.3-linux-x86_64"), "bin") + ":" + os.environ["PATH"]
     os.environ["CUDACXX"] = os.path.join(args.cuda_home, "bin", "nvcc")
```
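The context-manager form here is more than style: the old open/extract/close sequence leaked the archive handle whenever `extractall()` raised. Conceptually, the `with` block expands to a try/finally; a sketch:

```python
import tarfile

# Roughly what `with tarfile.open(...) as tar:` guarantees:
tar = tarfile.open("cmake-3.28.3-linux-x86_64.tar.gz")
try:
    tar.extractall()
finally:
    tar.close()  # runs even when extractall() raises
```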

onnxruntime/python/tools/tensorrt/perf/setup_scripts/setup_onnx_zoo.py (+7 −5)

```diff
@@ -17,11 +17,13 @@ def create_model_folder(model):
 def extract_and_get_files(file_name):
     model_folder = file_name.replace(".tar.gz", "") + "/"
     create_model_folder(model_folder)
-    model_tar = tarfile.open(file_name)
-    model_tar.extractall(model_folder)
-    file_list = model_tar.getnames()
-    file_list.sort()
-    model_tar.close()
+    with tarfile.open(file_name) as model_tar:
+        for member in model_tar.getmembers():
+            if os.path.isabs(member.name) or ".." in member.name:
+                raise ValueError(f"Illegal tar archive entry: {member.name}")
+        model_tar.extractall(model_folder)
+        file_list = model_tar.getnames()
+        file_list.sort()
     return model_folder, file_list
```
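The member check added above guards against path-traversal entries before extraction. For reference, Python 3.12+ (and patched older releases) ships a built-in alternative; a sketch assuming a local archive name:

```python
import tarfile

# filter="data" rejects absolute paths, ".." components, links escaping the
# destination, and special files, so no manual member loop is needed.
with tarfile.open("model.tar.gz") as model_tar:  # hypothetical archive
    model_tar.extractall("model_folder/", filter="data")
```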

onnxruntime/test/python/onnxruntime_test_python.py (+1 −1)

```diff
@@ -85,7 +85,7 @@ def cuda_device_count(self, cuda_lib):
         if result != 0:
             error_str = ctypes.c_char_p()
             cuda_lib.cuGetErrorString(result, ctypes.byref(error_str))
-            print("cuDeviceGetCount failed with error code %d: %s" % (result, error_str.value.decode()))
+            print(f"cuDeviceGetCount failed with error code {result}: {error_str.value.decode()}")
             return -1
         return num_device.value
```

onnxruntime/test/python/onnxruntime_test_python_iobinding.py (+1 −2)

```diff
@@ -221,7 +221,7 @@ def test_bind_onnx_types_not_supported_by_numpy(self):
         )
 
         for inner_device, provider in devices:
-            for onnx_dtype in onnx_to_torch_type_map:
+            for onnx_dtype, torch_dtype in onnx_to_torch_type_map.items():
                 with self.subTest(onnx_dtype=onnx_dtype, inner_device=str(inner_device)):
 
                     # Create onnx graph with dynamic axes
@@ -239,7 +239,6 @@ def test_bind_onnx_types_not_supported_by_numpy(self):
 
                     sess = onnxrt.InferenceSession(model_def.SerializeToString(), providers=provider)
 
-                    torch_dtype = onnx_to_torch_type_map[onnx_dtype]
                     x = torch.arange(8).to(torch_dtype)
                     y = torch.empty(8, dtype=torch_dtype)
```
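This rewrite, repeated in several files below, replaces key-only loops that re-index the dict on every pass; `.items()` yields key and value together. A minimal sketch with a hypothetical mapping:

```python
onnx_to_torch_type_map = {"tensor(float)": "torch.float32"}  # hypothetical

# Before: the value needs a second lookup inside the loop body.
for onnx_dtype in onnx_to_torch_type_map:
    torch_dtype = onnx_to_torch_type_map[onnx_dtype]
    print(onnx_dtype, torch_dtype)

# After: key and value arrive together, with no repeated hashing.
for onnx_dtype, torch_dtype in onnx_to_torch_type_map.items():
    print(onnx_dtype, torch_dtype)
```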

onnxruntime/test/python/onnxruntime_test_scatternd.py (+2 −2)

```diff
@@ -88,8 +88,8 @@ def common_scatter(self, opset, providers, dtype, reduction, expected_names):
         self.assertEqual(expected_names, names)
 
         sonx = str(onx).replace(" ", "").replace("\n", "|")
-        sexp = 'op_type:"Cast"|attribute{|name:"to"|type:INT|i:%d|}' % itype
-        sexp2 = 'op_type:"Cast"|attribute{|name:"to"|i:%d|type:INT|}' % itype
+        sexp = 'op_type:"Cast"|attribute{|name:"to"|type:INT|i:%d|}' % itype  # noqa: UP031
+        sexp2 = 'op_type:"Cast"|attribute{|name:"to"|i:%d|type:INT|}' % itype  # noqa: UP031
         assert sexp in sonx or sexp2 in sonx, f"Unable to find a substring in {sonx!r}"
         if providers == ["CPUExecutionProvider"]:
             return
```
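These two lines keep %-formatting and suppress UP031 rather than converting, plausibly because the template is full of literal braces: an f-string version would have to double every one. A sketch of the trade-off:

```python
itype = 1  # hypothetical TensorProto element-type value

# %-formatting keeps the protobuf-dump template readable:
sexp = 'op_type:"Cast"|attribute{|name:"to"|type:INT|i:%d|}' % itype  # noqa: UP031

# the f-string equivalent must escape every literal brace by doubling it:
sexp_f = f'op_type:"Cast"|attribute{{|name:"to"|type:INT|i:{itype}|}}'

assert sexp == sexp_f
```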

onnxruntime/test/python/quantization/op_test_utils.py (+2 −2)

```diff
@@ -379,10 +379,10 @@ def check_op_type_count(testcase, model_path, **kwargs):
         if node.op_type in optype2count:
             optype2count[node.op_type] += 1
 
-    for op_type in kwargs:
+    for op_type, value in kwargs.items():
         try:
             testcase.assertEqual(
-                kwargs[op_type],
+                value,
                 optype2count[op_type],
                 f"op_type {op_type} count not same",
             )
```

onnxruntime/test/python/quantization/test_calibration.py (+4 −4)

```diff
@@ -361,8 +361,8 @@ def test_compute_data(self):
         min_max_pairs = list(zip(rmin, rmax))
         output_names = [infer_session.get_outputs()[i].name for i in range(len(infer_session.get_outputs()))]
         output_min_max_dict = dict(zip(output_names, min_max_pairs))
-        for output_name in output_min_max_dict:
-            self.assertEqual(output_min_max_dict[output_name], tensors_range[output_name].range_value)
+        for output_name, min_max in output_min_max_dict.items():
+            self.assertEqual(min_max, tensors_range[output_name].range_value)
 
     def test_histogram_calibrators_run(self):
         """
@@ -524,8 +524,8 @@ def test_compute_data_per_channel(self):
         min_max_pairs = list(zip(rmin, rmax))
         output_names = [infer_session.get_outputs()[i].name for i in range(len(infer_session.get_outputs()))]
         output_min_max_dict = dict(zip(output_names, min_max_pairs))
-        for output_name in output_min_max_dict:
-            np.testing.assert_equal(output_min_max_dict[output_name], tensors_range[output_name].range_value)
+        for output_name, min_max in output_min_max_dict.items():
+            np.testing.assert_equal(min_max, tensors_range[output_name].range_value)
 
 
 if __name__ == "__main__":
```

onnxruntime/test/python/transformers/test_data/bert_squad_tensorflow2.1_keras2onnx_opset11/generate_tiny_keras2onnx_bert_models.py (+2 −2)

```diff
@@ -291,9 +291,9 @@ def resize_model(self):
             reshapes[initializer.name] = new_shape
             print("initializer", initializer.name, tensor.shape, "=>", new_shape)
 
-        for initializer_name in reshapes:
+        for initializer_name, reshape_name in reshapes.items():
             self.replace_input_of_all_nodes(initializer_name, initializer_name + "_resize")
-            tensor = self.resize_weight(initializer_name, reshapes[initializer_name])
+            tensor = self.resize_weight(initializer_name, reshape_name)
             self.model.graph.initializer.extend([tensor])
 
         self.use_dynamic_axes()
```

onnxruntime/test/python/transformers/test_data/gpt2_pytorch1.5_opset11/generate_tiny_gpt2_model.py (+2 −2)

```diff
@@ -331,9 +331,9 @@ def resize_model(self):
            reshapes[initializer.name] = new_shape
            print("initializer", initializer.name, tensor.shape, "=>", new_shape)
 
-        for initializer_name in reshapes:
+        for initializer_name, reshape_name in reshapes.items():
             self.replace_input_of_all_nodes(initializer_name, initializer_name + "_resize")
-            tensor = self.resize_weight(initializer_name, reshapes[initializer_name])
+            tensor = self.resize_weight(initializer_name, reshape_name)
             self.model.graph.initializer.extend([tensor])
 
         # Add node name, replace split node attribute.
```

onnxruntime/test/testdata/test_data_generation/lr_scheduler/lr_scheduler_test_data_generator.py (+1 −1)

```diff
@@ -60,7 +60,7 @@ def main():
 
     import tempfile
 
-    fp = tempfile.NamedTemporaryFile()
+    fp = tempfile.NamedTemporaryFile()  # noqa: SIM115
 
     adamw_optimizer = torch.optim.AdamW(pt_model.parameters(), lr=1e-3)
     scheduler = WarmupLinearSchedule(adamw_optimizer, num_warmup_steps, num_training_steps)
```
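SIM115 (flake8-simplify) wants open()-style calls wrapped in a context manager so the handle is always closed; the `noqa` here suppresses it, presumably because the temporary file must stay open past this statement for the rest of the script. A sketch of the shape the rule would otherwise ask for:

```python
import tempfile

# The form SIM115 prefers: closed (and deleted) when the block exits.
with tempfile.NamedTemporaryFile() as fp:
    fp.write(b"scratch data")
    fp.flush()
# A handle needed beyond this point cannot use this form, hence the noqa.
```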

orttraining/orttraining/python/training/__init__.py (+1 −1)

```diff
@@ -15,9 +15,9 @@
 __all__ = [
     "PropagateCastOpsStrategy",
     "TrainingParameters",
-    "is_ortmodule_available",
     "amp",
     "artifacts",
+    "is_ortmodule_available",
     "optim",
 ]
```
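The `__all__` reshuffles throughout this commit line up with ruff's `unsorted-dunder-all` rule (`RUF022`), whose default ordering is isort-style: all-caps names first, then CamelCase, then lowercase, each group sorted case-sensitively (the `optim/__init__.py` change below shows the same pattern). A sketch with hypothetical entries:

```python
# Order RUF022's default sort appears to produce:
__all__ = [
    "SGD",              # all-caps first
    "AdamW",            # then CamelCase
    "ClipGradNorm",
    "base",             # lowercase last, alphabetically
    "save_checkpoint",
]
```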

orttraining/orttraining/python/training/_utils.py (+2 −2)

```diff
@@ -175,8 +175,8 @@ def static_vars(**kwargs):
     """
 
     def decorate(func):
-        for k in kwargs:
-            setattr(func, k, kwargs[k])
+        for k, v in kwargs.items():
+            setattr(func, k, v)
         return func
 
     return decorate
```

orttraining/orttraining/python/training/onnxblock/__init__.py (+5 −5)

```diff
@@ -12,15 +12,15 @@
 from onnxruntime.training.onnxblock.onnxblock import ForwardBlock, TrainingBlock
 
 __all__ = [
-    "blocks",
-    "loss",
-    "optim",
     "Block",
     "ForwardBlock",
     "TrainingBlock",
-    "load_checkpoint_to_model",
-    "save_checkpoint",
     "base",
+    "blocks",
     "custom_op_library",
     "empty_base",
+    "load_checkpoint_to_model",
+    "loss",
+    "optim",
+    "save_checkpoint",
 ]
```

orttraining/orttraining/python/training/onnxblock/optim/__init__.py (+1 −1)

```diff
@@ -3,4 +3,4 @@
 
 from onnxruntime.training.onnxblock.optim.optim import SGD, AdamW, ClipGradNorm
 
-__all__ = ["AdamW", "ClipGradNorm", "SGD"]
+__all__ = ["SGD", "AdamW", "ClipGradNorm"]
```

orttraining/orttraining/python/training/optim/_megatron_modifier.py (+2 −2)

```diff
@@ -18,8 +18,8 @@
 class LegacyMegatronLMModifier(FP16OptimizerModifier):
     def __init__(self, optimizer, **kwargs) -> None:
         super().__init__(optimizer)
-        self.get_horizontal_model_parallel_rank = kwargs.get("get_horizontal_model_parallel_rank", None)
-        self.get_horizontal_model_parallel_group = kwargs.get("get_horizontal_model_parallel_group", None)
+        self.get_horizontal_model_parallel_rank = kwargs.get("get_horizontal_model_parallel_rank")
+        self.get_horizontal_model_parallel_group = kwargs.get("get_horizontal_model_parallel_group")
 
     def can_be_modified(self):
         return self.check_requirements(
```
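Dropping the explicit `None` matches flake8-simplify's dict-get advice (SIM910): `dict.get` already returns `None` for a missing key, so the second argument was redundant. A one-liner sketch:

```python
kwargs = {}  # hypothetical: the callback was never passed in

# Equivalent lookups; the default for dict.get is already None.
assert kwargs.get("get_horizontal_model_parallel_rank") is None
assert kwargs.get("get_horizontal_model_parallel_rank", None) is None
```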

orttraining/orttraining/python/training/ortmodule/_runtime_inspector.py (+1 −1)

```diff
@@ -229,7 +229,7 @@ def find_memory_optimization_opportunity(self, execution_agent: TrainingAgent, r
 
         apply_config.append(",".join(recompute_configs))
 
-        self._json_file_for_layerwise_recompute = tempfile.NamedTemporaryFile(mode="w+")
+        self._json_file_for_layerwise_recompute = tempfile.NamedTemporaryFile(mode="w+")  # noqa: SIM115
         json.dump(apply_config, self._json_file_for_layerwise_recompute)
         self._json_file_for_layerwise_recompute.flush()
         runtime_options.memory_optimizer_config_file_path = self._json_file_for_layerwise_recompute.name
```

orttraining/orttraining/python/training/utils/__init__.py (+8 −8)

```diff
@@ -24,17 +24,17 @@
 )
 
 __all__ = [
-    "PrimitiveType",
-    "ORTModelInputOutputType",
     "ORTModelInputOutputSchemaType",
+    "ORTModelInputOutputType",
+    "PTable",
+    "PrimitiveType",
     "extract_data_and_schema",
-    "unflatten_data_using_schema",
-    "torch_nvtx_range_push",
-    "torch_nvtx_range_pop",
-    "nvtx_function_decorator",
     "log_memory_usage",
-    "pytorch_type_to_onnx_dtype",
+    "nvtx_function_decorator",
     "onnx_dtype_to_pytorch_dtype",
     "pytorch_scalar_type_to_pytorch_dtype",
-    "PTable",
+    "pytorch_type_to_onnx_dtype",
+    "torch_nvtx_range_pop",
+    "torch_nvtx_range_push",
+    "unflatten_data_using_schema",
 ]
```

orttraining/orttraining/python/training/utils/hooks/__init__.py (+2 −2)

```diff
@@ -7,11 +7,11 @@
 import torch
 
 __all__ = [
-    "StatisticsSubscriber",
     "GlobalSubscriberManager",
-    "inspect_activation",
+    "StatisticsSubscriber",
     "ZeROOffloadSubscriber",
     "configure_ort_compatible_zero_stage3",
+    "inspect_activation",
 ]
 
 from ._statistics_subscriber import StatisticsSubscriber, _InspectActivation
```

orttraining/orttraining/python/training/utils/torch_io_helper.py (+1 −1)

```diff
@@ -52,7 +52,7 @@ def get_primitive_dtype(value):
 class _TensorStub:
     """Tensor stub class used to represent model's input or output"""
 
-    __slots__ = ["tensor_idx", "name", "dtype", "shape", "shape_dims"]
+    __slots__ = ["dtype", "name", "shape", "shape_dims", "tensor_idx"]
 
     def __init__(
         self,
```

orttraining/orttraining/test/python/orttraining_test_model_transform.py (+1 −1)

```diff
@@ -3,7 +3,7 @@
 
 def add_name(model):
     for i, node in enumerate(model.graph.node):
-        node.name = "%s_%d" % (node.op_type, i)
+        node.name = f"{node.op_type}_{i}"
 
 
 def find_single_output_node(model, arg):
```

orttraining/orttraining/test/python/orttraining_test_ortmodule_bert_classifier.py (+1 −1)

```diff
@@ -376,7 +376,7 @@ def main():
     # Device (CPU vs CUDA)
     if torch.cuda.is_available() and not args.no_cuda:
         device = torch.device("cuda")
-        print("There are %d GPU(s) available." % torch.cuda.device_count())
+        print(f"There are {torch.cuda.device_count()} GPU(s) available.")
         print("We will use the GPU:", torch.cuda.get_device_name(0))
     else:
         print("No GPU available, using the CPU instead.")
```

orttraining/orttraining/test/python/orttraining_test_ortmodule_bert_classifier_autocast.py (+1 −1)

```diff
@@ -376,7 +376,7 @@ def main():
     # Device (CPU vs CUDA)
     if torch.cuda.is_available() and not args.no_cuda:
         device = torch.device("cuda")
-        print("There are %d GPU(s) available." % torch.cuda.device_count())
+        print(f"There are {torch.cuda.device_count()} GPU(s) available.")
         print("We will use the GPU:", torch.cuda.get_device_name(0))
     else:
         print("No GPU available, using the CPU instead.")
```

orttraining/orttraining/test/python/orttraining_test_ortmodule_pytorch_ddp.py (+1 −1)

```diff
@@ -112,7 +112,7 @@ def demo_checkpoint(rank, world_size, use_ort_module):
     # 0 saves it.
     dist.barrier()
     # configure map_location properly
-    map_location = {"cuda:%d" % 0: "cuda:%d" % rank}
+    map_location = {"cuda:0": f"cuda:{rank}"}
     ddp_model.load_state_dict(torch.load(CHECKPOINT_PATH, map_location=map_location))
 
     optimizer.zero_grad()
```

orttraining/tools/scripts/gpt2_model_transform.py (+4 −4)

```diff
@@ -18,7 +18,7 @@
 
 def add_name(model):
     for i, node in enumerate(model.graph.node):
-        node.name = "%s_%d" % (node.op_type, i)
+        node.name = f"{node.op_type}_{i}"
 
 
 def find_input_node(model, arg):
@@ -139,7 +139,7 @@ def process_concat(model):
     # insert new shape to reshape
     for index, reshape_node_index in enumerate(new_nodes):
         shape_tensor = numpy_helper.from_array(np.asarray(new_nodes[reshape_node_index], dtype=np.int64))
-        const_node = add_const(model, "concat_shape_node_%d" % index, "concat_shape_%d" % index, shape_tensor)
+        const_node = add_const(model, f"concat_shape_node_{index}", f"concat_shape_{index}", shape_tensor)
         reshape_node = model.graph.node[reshape_node_index]
         reshape_node.input[1] = const_node.output[0]
     # delete nodes
@@ -227,13 +227,13 @@ def process_dropout(model):
         if node.op_type == "Dropout":
             new_dropout = model.graph.node.add()
             new_dropout.op_type = "TrainableDropout"
-            new_dropout.name = "TrainableDropout_%d" % index
+            new_dropout.name = f"TrainableDropout_{index}"
             # make ratio node
             ratio = np.asarray([node.attribute[0].f], dtype=np.float32)
             print(ratio.shape)
             ratio_value = numpy_helper.from_array(ratio)
             ratio_node = add_const(
-                model, "dropout_node_ratio_%d" % index, "dropout_node_ratio_%d" % index, t_value=ratio_value
+                model, f"dropout_node_ratio_{index}", f"dropout_node_ratio_{index}", t_value=ratio_value
             )
             print(ratio_node)
             new_dropout.input.extend([node.input[0], ratio_node.output[0]])
```

orttraining/tools/scripts/model_transform.py (+4 −4)

```diff
@@ -18,7 +18,7 @@
 
 def add_name(model):
     for i, node in enumerate(model.graph.node):
-        node.name = "%s_%d" % (node.op_type, i)
+        node.name = f"{node.op_type}_{i}"
 
 
 def find_input_node(model, arg):
@@ -120,7 +120,7 @@ def process_concat(model):
     # insert new shape to reshape
     for index, reshape_node_index in enumerate(new_nodes):
         shape_tensor = numpy_helper.from_array(np.asarray(new_nodes[reshape_node_index], dtype=np.int64))
-        const_node = add_const(model, "concat_shape_node_%d" % index, "concat_shape_%d" % index, shape_tensor)
+        const_node = add_const(model, f"concat_shape_node_{index}", f"concat_shape_{index}", shape_tensor)
         reshape_node = model.graph.node[reshape_node_index]
         reshape_node.input[1] = const_node.output[0]
     # delete nodes
@@ -251,13 +251,13 @@ def process_dropout(model):
         if node.op_type == "Dropout":
             new_dropout = model.graph.node.add()
             new_dropout.op_type = "TrainableDropout"
-            new_dropout.name = "TrainableDropout_%d" % index
+            new_dropout.name = f"TrainableDropout_{index}"
             # make ratio node
             ratio = np.asarray([node.attribute[0].f], dtype=np.float32)
             print(ratio.shape)
             ratio_value = numpy_helper.from_array(ratio)
             ratio_node = add_const(
-                model, "dropout_node_ratio_%d" % index, "dropout_node_ratio_%d" % index, t_value=ratio_value
+                model, f"dropout_node_ratio_{index}", f"dropout_node_ratio_{index}", t_value=ratio_value
             )
             print(ratio_node)
             new_dropout.input.extend([node.input[0], ratio_node.output[0]])
```
