fix tqdm progress bar using refresh() instead of update() #21539

Open

Mr-Neutr0n wants to merge 2 commits into Lightning-AI:master from Mr-Neutr0n:fix/tqdm-update-instead-of-refresh

Conversation


@Mr-Neutr0n Mr-Neutr0n commented Feb 15, 2026

The _update_n helper was directly setting bar.n and calling bar.refresh(), which bypasses tqdm's update() logic entirely. This means EMA-based rate estimation (_ema_dn, _ema_dt) and dynamic miniters adjustment never run, so things like custom smoothing values passed to tqdm don't actually work.

The fix swaps the manual bar.n = value; bar.refresh() for bar.update(value - bar.n), which computes the correct delta and lets tqdm's internal bookkeeping run properly. Since update() calls refresh() internally, the visual behavior stays the same.
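For reference, a minimal sketch of the change (the exact `_update_n` helper in `tqdm_progress.py` may differ in detail; this only illustrates the pattern described above):

```python
from tqdm import tqdm

def _update_n(bar: tqdm, value: int) -> None:
    """Advance the bar to an absolute position `value`.

    Sketch based on the PR description, not a verbatim copy of the helper.
    """
    if not bar.disable:
        # Before: setting the counter directly skips tqdm's bookkeeping
        # (EMA rate estimation, dynamic miniters):
        #   bar.n = value
        #   bar.refresh()
        # After: pass the delta so tqdm updates its own statistics and
        # refreshes the display itself.
        bar.update(value - bar.n)
```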

Fixes #21320


📚 Documentation preview 📚: https://pytorch-lightning--21539.org.readthedocs.build/en/21539/

The _update_n helper was directly setting bar.n and calling
bar.refresh(), which bypasses tqdm's update() logic. This meant
EMA-based rate estimation and dynamic miniters were never updated,
breaking things like custom smoothing values.

Switched to bar.update(value - bar.n) so that tqdm's internal
bookkeeping runs properly.

Fixes Lightning-AI#21320
@github-actions github-actions bot added the pl label (Generic label for PyTorch Lightning package) on Feb 15, 2026

codecov bot commented Feb 20, 2026

❌ 2 Tests Failed:

Tests completed: 3229 | Failed: 2 | Passed: 3227 | Skipped: 582
View the full list of 2 ❄️ flaky test(s)
tests/tests_pytorch/utilities/test_compile.py::test_trainer_compiled_model_test

Flake rate in main: 25.00% (Passed 27 times, Failed 9 times)

Stack Traces | 0.12s run time
tmp_path = PosixPath('.../pytest-of-runner/pytest-0/test_trainer_compiled_model_te0')

    @pytest.mark.skipif(sys.platform == "darwin", reason="fatal error: 'omp.h' file not found")
    @pytest.mark.skipif(not _PYTHON_GREATER_EQUAL_3_9_0, reason="AssertionError: failed to reach fixed point")
    @pytest.mark.xfail(
        sys.platform == "win32" and _TORCH_GREATER_EQUAL_2_2, strict=False, reason="RuntimeError: Failed to import"
    )
    @RunIf(dynamo=True)
    @mock.patch.dict(os.environ, {})
    def test_trainer_compiled_model_test(tmp_path):
        model = BoringModel()
        compiled_model = torch.compile(model)
    
        trainer = Trainer(
            default_root_dir=tmp_path,
            fast_dev_run=True,
            enable_checkpointing=False,
            enable_model_summary=False,
            enable_progress_bar=False,
            accelerator="cpu",
        )
>       trainer.test(compiled_model)

utilities/test_compile.py:209: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
../../.venv/lib/python3.10.../pytorch/trainer/trainer.py:821: in test
    return call._call_and_handle_interrupt(
../../.venv/lib/python3.10.../pytorch/trainer/call.py:49: in _call_and_handle_interrupt
    return trainer_fn(*args, **kwargs)
../../.venv/lib/python3.10.../pytorch/trainer/trainer.py:864: in _test_impl
    results = self._run(model, ckpt_path=ckpt_path, weights_only=weights_only)
../../.venv/lib/python3.10.../pytorch/trainer/trainer.py:1079: in _run
    results = self._run_stage()
../../.venv/lib/python3.10.../pytorch/trainer/trainer.py:1116: in _run_stage
    return self._evaluation_loop.run()
../../.venv/lib/python3.10.../pytorch/loops/utilities.py:179: in _decorator
    return loop_run(self, *args, **kwargs)
../../.venv/lib/python3.10.../pytorch/loops/evaluation_loop.py:146: in run
    self._evaluation_step(batch, batch_idx, dataloader_idx, dataloader_iter)
../../.venv/lib/python3.10.../pytorch/loops/evaluation_loop.py:441: in _evaluation_step
    output = call._call_strategy_hook(trainer, hook_name, *step_args)
../../.venv/lib/python3.10.../pytorch/trainer/call.py:329: in _call_strategy_hook
    output = fn(*args, **kwargs)
../../.venv/lib/python3.10.../pytorch/strategies/strategy.py:425: in test_step
    return self.lightning_module.test_step(*args, **kwargs)
../../.venv/lib/python3.10.../torch/_dynamo/eval_frame.py:328: in _fn
    return fn(*args, **kwargs)
../../.venv/lib/python3.10.../torch/_dynamo/eval_frame.py:490: in catch_errors
    return callback(frame, cache_entry, hooks, frame_state)
../../.venv/lib/python3.10.../torch/_dynamo/convert_frame.py:641: in _convert_frame
    result = inner_convert(frame, cache_size, hooks, frame_state)
../../.venv/lib/python3.10.../torch/_dynamo/convert_frame.py:133: in _fn
    return fn(*args, **kwargs)
../../.venv/lib/python3.10.../torch/_dynamo/convert_frame.py:389: in _convert_frame_assert
    return _compile(
../../.venv/lib/python3.10.../torch/_dynamo/convert_frame.py:569: in _compile
    guarded_code = compile_inner(code, one_graph, hooks, transform)
../../.venv/lib/python3.10.../torch/_dynamo/utils.py:189: in time_wrapper
    r = func(*args, **kwargs)
../../.venv/lib/python3.10.../torch/_dynamo/convert_frame.py:491: in compile_inner
    out_code = transform_code_object(code, transform)
../../.venv/lib/python3.10.../torch/_dynamo/bytecode_transformation.py:1028: in transform_code_object
    transformations(instructions, code_options)
../../.venv/lib/python3.10.../torch/_dynamo/convert_frame.py:458: in transform
    tracer.run()
../../.venv/lib/python3.10.../torch/_dynamo/symbolic_convert.py:2074: in run
    super().run()
../../.venv/lib/python3.10.../torch/_dynamo/symbolic_convert.py:724: in run
    and self.step()
../../.venv/lib/python3.10.../torch/_dynamo/symbolic_convert.py:688: in step
    getattr(self, inst.opname)(inst)
../../.venv/lib/python3.10.../torch/_dynamo/symbolic_convert.py:2162: in RETURN_VALUE
    self.output.compile_subgraph(
../../.venv/lib/python3.10.../torch/_dynamo/output_graph.py:857: in compile_subgraph
    self.compile_and_call_fx_graph(tx, pass2.graph_output_vars(), root)
....../usr/lib/python3.10/contextlib.py:79: in inner
    return func(*args, **kwds)
../../.venv/lib/python3.10.../torch/_dynamo/output_graph.py:957: in compile_and_call_fx_graph
    compiled_fn = self.call_user_compiler(gm)
../../.venv/lib/python3.10.../torch/_dynamo/utils.py:189: in time_wrapper
    r = func(*args, **kwargs)
../../.venv/lib/python3.10.../torch/_dynamo/output_graph.py:1024: in call_user_compiler
    raise BackendCompilerFailed(self.compiler_fn, e).with_traceback(
../../.venv/lib/python3.10.../torch/_dynamo/output_graph.py:1009: in call_user_compiler
    compiled_fn = compiler_fn(gm, self.example_inputs())
../../.venv/lib/python3.10.../_dynamo/repro/after_dynamo.py:117: in debug_wrapper
    compiled_gm = compiler_fn(gm, example_inputs)
../../.venv/lib/python3.10.../site-packages/torch/__init__.py:1568: in __call__
    return compile_fx(model_, inputs_, config_patches=self.config)
../../.venv/lib/python3.10.../torch/_inductor/compile_fx.py:1150: in compile_fx
    return aot_autograd(
../../.venv/lib/python3.10.../_dynamo/backends/common.py:55: in compiler_fn
    cg = aot_module_simplified(gm, example_inputs, **kwargs)
../../.venv/lib/python3.10.../torch/_functorch/aot_autograd.py:3891: in aot_module_simplified
    compiled_fn = create_aot_dispatcher_function(
../../.venv/lib/python3.10.../torch/_dynamo/utils.py:189: in time_wrapper
    r = func(*args, **kwargs)
../../.venv/lib/python3.10.../torch/_functorch/aot_autograd.py:3429: in create_aot_dispatcher_function
    compiled_fn = compiler_fn(flat_fn, fake_flat_args, aot_config, fw_metadata=fw_metadata)
../../.venv/lib/python3.10.../torch/_functorch/aot_autograd.py:2212: in aot_wrapper_dedupe
    return compiler_fn(flat_fn, leaf_flat_args, aot_config, fw_metadata=fw_metadata)
../../.venv/lib/python3.10.../torch/_functorch/aot_autograd.py:2392: in aot_wrapper_synthetic_base
    return compiler_fn(flat_fn, flat_args, aot_config, fw_metadata=fw_metadata)
../../.venv/lib/python3.10.../torch/_functorch/aot_autograd.py:1573: in aot_dispatch_base
    compiled_fw = compiler(fw_module, flat_args)
../../.venv/lib/python3.10.../torch/_dynamo/utils.py:189: in time_wrapper
    r = func(*args, **kwargs)
../../.venv/lib/python3.10.../torch/_inductor/compile_fx.py:1092: in fw_compiler_base
    return inner_compile(
../../.venv/lib/python3.10.../_dynamo/repro/after_aot.py:80: in debug_wrapper
    inner_compiled_fn = compiler_fn(gm, example_inputs)
../../.venv/lib/python3.10.../torch/_inductor/debug.py:228: in inner
    return fn(*args, **kwargs)
....../usr/lib/python3.10/contextlib.py:79: in inner
    return func(*args, **kwds)
../../.venv/lib/python3.10.../torch/_inductor/compile_fx.py:54: in newFunction
    return old_func(*args, **kwargs)
../../.venv/lib/python3.10.../torch/_inductor/compile_fx.py:341: in compile_fx_inner
    compiled_graph: CompiledFxGraph = fx_codegen_and_compile(
../../.venv/lib/python3.10.../torch/_inductor/compile_fx.py:565: in fx_codegen_and_compile
    compiled_fn = graph.compile_to_fn()
../../.venv/lib/python3.10.../torch/_inductor/graph.py:970: in compile_to_fn
    return self.compile_to_module().call
../../.venv/lib/python3.10.../torch/_dynamo/utils.py:189: in time_wrapper
    r = func(*args, **kwargs)
../../.venv/lib/python3.10.../torch/_inductor/graph.py:938: in compile_to_module
    code, linemap = self.codegen()
../../.venv/lib/python3.10.../torch/_inductor/graph.py:915: in codegen
    self.scheduler.codegen()
../../.venv/lib/python3.10.../torch/_dynamo/utils.py:189: in time_wrapper
    r = func(*args, **kwargs)
../../.venv/lib/python3.10.../torch/_inductor/scheduler.py:1690: in codegen
    self.get_backend(device).codegen_nodes(node.get_nodes())
../../.venv/lib/python3.10.../_inductor/codegen/cpp.py:2805: in codegen_nodes
    cpp_kernel_proxy = CppKernelProxy(kernel_group)
../../.venv/lib/python3.10.../_inductor/codegen/cpp.py:2352: in __init__
    self.picked_vec_isa: codecache.VecISA = codecache.pick_vec_isa()
../../.venv/lib/python3.10.../torch/_inductor/codecache.py:633: in pick_vec_isa
    _valid_vec_isa_list: List[VecISA] = valid_vec_isa_list()
../../.venv/lib/python3.10.../torch/_inductor/codecache.py:627: in valid_vec_isa_list
    if str(isa) in _cpu_info_content and isa:
../../.venv/lib/python3.10.../torch/_inductor/codecache.py:548: in __bool__
    cpp_compile_command(
../../.venv/lib/python3.10.../torch/_inductor/codecache.py:878: in cpp_compile_command
    ipaths, lpaths, libs, macros = get_include_and_linking_paths(
../../.venv/lib/python3.10.../torch/_inductor/codecache.py:747: in get_include_and_linking_paths
    from torch.utils import cpp_extension
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

    import copy
    import glob
    import importlib
    import importlib.abc
    import os
    import re
    import shlex
    import shutil
    import setuptools
    import subprocess
    import sys
    import sysconfig
    import warnings
    import collections
    from pathlib import Path
    import errno
    
    import torch
    import torch._appdirs
    from .file_baton import FileBaton
    from ._cpp_extension_versioner import ExtensionVersioner
    from .hipify import hipify_python
    from .hipify.hipify_python import GeneratedFileCleaner
    from typing import Dict, List, Optional, Union, Tuple
    from torch.torch_version import TorchVersion
    
    from setuptools.command.build_ext import build_ext
>   from pkg_resources import packaging  # type: ignore[attr-defined]
E   torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
E   ModuleNotFoundError: No module named 'pkg_resources'
E   
E   Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
E   
E   
E   You can suppress this exception and fall back to eager by setting:
E       import torch._dynamo
E       torch._dynamo.config.suppress_errors = True

../../.venv/lib/python3.10.../torch/utils/cpp_extension.py:28: BackendCompilerFailed
tests/tests_pytorch/utilities/test_compile.py::test_trainer_compiled_model_that_logs

Flake rate in main: 33.33% (Passed 2 times, Failed 1 time)

Stack Traces | 5.86s run time
tmp_path = PosixPath('.../pytest-of-runner/pytest-0/test_trainer_compiled_model_th0')

    @pytest.mark.skipif(sys.platform == "darwin", reason="fatal error: 'omp.h' file not found")
    @pytest.mark.skipif(not _PYTHON_GREATER_EQUAL_3_9_0, reason="AssertionError: failed to reach fixed point")
    @pytest.mark.xfail(
        sys.platform == "win32" and _TORCH_GREATER_EQUAL_2_2, strict=False, reason="RuntimeError: Failed to import"
    )
    @RunIf(dynamo=True)
    @mock.patch.dict(os.environ, {})
    def test_trainer_compiled_model_that_logs(tmp_path):
        class MyModel(BoringModel):
            def training_step(self, batch, batch_idx):
                loss = self.step(batch)
                self.log("loss", loss)
                return loss
    
        model = MyModel()
        compiled_model = torch.compile(model)
    
        trainer = Trainer(
            default_root_dir=tmp_path,
            fast_dev_run=True,
            enable_checkpointing=False,
            enable_model_summary=False,
            enable_progress_bar=False,
            accelerator="cpu",
        )
>       trainer.fit(compiled_model)

utilities/test_compile.py:184: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
../../.venv/lib/python3.10.../pytorch/trainer/trainer.py:584: in fit
    call._call_and_handle_interrupt(
../../.venv/lib/python3.10.../pytorch/trainer/call.py:49: in _call_and_handle_interrupt
    return trainer_fn(*args, **kwargs)
../../.venv/lib/python3.10.../pytorch/trainer/trainer.py:630: in _fit_impl
    self._run(model, ckpt_path=ckpt_path, weights_only=weights_only)
../../.venv/lib/python3.10.../pytorch/trainer/trainer.py:1079: in _run
    results = self._run_stage()
../../.venv/lib/python3.10.../pytorch/trainer/trainer.py:1123: in _run_stage
    self.fit_loop.run()
../../.venv/lib/python3.10.../pytorch/loops/fit_loop.py:217: in run
    self.advance()
../../.venv/lib/python3.10.../pytorch/loops/fit_loop.py:465: in advance
    self.epoch_loop.run(self._data_fetcher)
../../.venv/lib/python3.10.../pytorch/loops/training_epoch_loop.py:153: in run
    self.advance(data_fetcher)
../../.venv/lib/python3.10.../pytorch/loops/training_epoch_loop.py:352: in advance
    batch_output = self.automatic_optimization.run(trainer.optimizers[0], batch_idx, kwargs)
../../.venv/lib/python3.10.../loops/optimization/automatic.py:192: in run
    self._optimizer_step(batch_idx, closure)
../../.venv/lib/python3.10.../loops/optimization/automatic.py:270: in _optimizer_step
    call._call_lightning_module_hook(
../../.venv/lib/python3.10.../pytorch/trainer/call.py:177: in _call_lightning_module_hook
    output = fn(*args, **kwargs)
../../.venv/lib/python3.10.../pytorch/core/module.py:1368: in optimizer_step
    optimizer.step(closure=optimizer_closure)
../../.venv/lib/python3.10.../pytorch/core/optimizer.py:154: in step
    step_output = self._strategy.optimizer_step(self._optimizer, closure, **kwargs)
../../.venv/lib/python3.10.../pytorch/strategies/strategy.py:239: in optimizer_step
    return self.precision_plugin.optimizer_step(optimizer, model=model, closure=closure, **kwargs)
../../.venv/lib/python3.10.../plugins/precision/precision.py:123: in optimizer_step
    return optimizer.step(closure=closure, **kwargs)
../../.venv/lib/python3.10.../torch/optim/lr_scheduler.py:68: in wrapper
    return wrapped(*args, **kwargs)
../../.venv/lib/python3.10.../torch/optim/optimizer.py:76: in _use_grad
    ret = func(self, *args, **kwargs)
../../.venv/lib/python3.10.../torch/optim/sgd.py:66: in step
    loss = closure()
../../.venv/lib/python3.10.../plugins/precision/precision.py:109: in _wrap_closure
    closure_result = closure()
../../.venv/lib/python3.10.../loops/optimization/automatic.py:146: in __call__
    self._result = self.closure(*args, **kwargs)
../../.venv/lib/python3.10.../torch/utils/_contextlib.py:115: in decorate_context
    return func(*args, **kwargs)
../../.venv/lib/python3.10.../loops/optimization/automatic.py:131: in closure
    step_output = self._step_fn()
../../.venv/lib/python3.10.../loops/optimization/automatic.py:319: in _training_step
    training_step_output = call._call_strategy_hook(trainer, "training_step", *kwargs.values())
../../.venv/lib/python3.10.../pytorch/trainer/call.py:329: in _call_strategy_hook
    output = fn(*args, **kwargs)
../../.venv/lib/python3.10.../pytorch/strategies/strategy.py:391: in training_step
    return self.lightning_module.training_step(*args, **kwargs)
../../.venv/lib/python3.10.../torch/_dynamo/eval_frame.py:328: in _fn
    return fn(*args, **kwargs)
../../.venv/lib/python3.10.../torch/_dynamo/eval_frame.py:490: in catch_errors
    return callback(frame, cache_entry, hooks, frame_state)
../../.venv/lib/python3.10.../torch/_dynamo/convert_frame.py:641: in _convert_frame
    result = inner_convert(frame, cache_size, hooks, frame_state)
../../.venv/lib/python3.10.../torch/_dynamo/convert_frame.py:133: in _fn
    return fn(*args, **kwargs)
../../.venv/lib/python3.10.../torch/_dynamo/convert_frame.py:389: in _convert_frame_assert
    return _compile(
../../.venv/lib/python3.10.../torch/_dynamo/convert_frame.py:569: in _compile
    guarded_code = compile_inner(code, one_graph, hooks, transform)
../../.venv/lib/python3.10.../torch/_dynamo/utils.py:189: in time_wrapper
    r = func(*args, **kwargs)
../../.venv/lib/python3.10.../torch/_dynamo/convert_frame.py:491: in compile_inner
    out_code = transform_code_object(code, transform)
../../.venv/lib/python3.10.../torch/_dynamo/bytecode_transformation.py:1028: in transform_code_object
    transformations(instructions, code_options)
../../.venv/lib/python3.10.../torch/_dynamo/convert_frame.py:458: in transform
    tracer.run()
../../.venv/lib/python3.10.../torch/_dynamo/symbolic_convert.py:2074: in run
    super().run()
../../.venv/lib/python3.10.../torch/_dynamo/symbolic_convert.py:724: in run
    and self.step()
../../.venv/lib/python3.10.../torch/_dynamo/symbolic_convert.py:688: in step
    getattr(self, inst.opname)(inst)
../../.venv/lib/python3.10.../torch/_dynamo/symbolic_convert.py:439: in wrapper
    self.output.compile_subgraph(self, reason=reason)
../../.venv/lib/python3.10.../torch/_dynamo/output_graph.py:857: in compile_subgraph
    self.compile_and_call_fx_graph(tx, pass2.graph_output_vars(), root)
....../usr/lib/python3.10/contextlib.py:79: in inner
    return func(*args, **kwds)
../../.venv/lib/python3.10.../torch/_dynamo/output_graph.py:957: in compile_and_call_fx_graph
    compiled_fn = self.call_user_compiler(gm)
../../.venv/lib/python3.10.../torch/_dynamo/utils.py:189: in time_wrapper
    r = func(*args, **kwargs)
../../.venv/lib/python3.10.../torch/_dynamo/output_graph.py:1024: in call_user_compiler
    raise BackendCompilerFailed(self.compiler_fn, e).with_traceback(
../../.venv/lib/python3.10.../torch/_dynamo/output_graph.py:1009: in call_user_compiler
    compiled_fn = compiler_fn(gm, self.example_inputs())
../../.venv/lib/python3.10.../_dynamo/repro/after_dynamo.py:117: in debug_wrapper
    compiled_gm = compiler_fn(gm, example_inputs)
../../.venv/lib/python3.10.../site-packages/torch/__init__.py:1568: in __call__
    return compile_fx(model_, inputs_, config_patches=self.config)
../../.venv/lib/python3.10.../torch/_inductor/compile_fx.py:1150: in compile_fx
    return aot_autograd(
../../.venv/lib/python3.10.../_dynamo/backends/common.py:55: in compiler_fn
    cg = aot_module_simplified(gm, example_inputs, **kwargs)
../../.venv/lib/python3.10.../torch/_functorch/aot_autograd.py:3891: in aot_module_simplified
    compiled_fn = create_aot_dispatcher_function(
../../.venv/lib/python3.10.../torch/_dynamo/utils.py:189: in time_wrapper
    r = func(*args, **kwargs)
../../.venv/lib/python3.10.../torch/_functorch/aot_autograd.py:3429: in create_aot_dispatcher_function
    compiled_fn = compiler_fn(flat_fn, fake_flat_args, aot_config, fw_metadata=fw_metadata)
../../.venv/lib/python3.10.../torch/_functorch/aot_autograd.py:2212: in aot_wrapper_dedupe
    return compiler_fn(flat_fn, leaf_flat_args, aot_config, fw_metadata=fw_metadata)
../../.venv/lib/python3.10.../torch/_functorch/aot_autograd.py:2392: in aot_wrapper_synthetic_base
    return compiler_fn(flat_fn, flat_args, aot_config, fw_metadata=fw_metadata)
../../.venv/lib/python3.10.../torch/_functorch/aot_autograd.py:2917: in aot_dispatch_autograd
    compiled_fw_func = aot_config.fw_compiler(
../../.venv/lib/python3.10.../torch/_dynamo/utils.py:189: in time_wrapper
    r = func(*args, **kwargs)
../../.venv/lib/python3.10.../torch/_inductor/compile_fx.py:1092: in fw_compiler_base
    return inner_compile(
../../.venv/lib/python3.10.../_dynamo/repro/after_aot.py:80: in debug_wrapper
    inner_compiled_fn = compiler_fn(gm, example_inputs)
../../.venv/lib/python3.10.../torch/_inductor/debug.py:228: in inner
    return fn(*args, **kwargs)
....../usr/lib/python3.10/contextlib.py:79: in inner
    return func(*args, **kwds)
../../.venv/lib/python3.10.../torch/_inductor/compile_fx.py:54: in newFunction
    return old_func(*args, **kwargs)
../../.venv/lib/python3.10.../torch/_inductor/compile_fx.py:341: in compile_fx_inner
    compiled_graph: CompiledFxGraph = fx_codegen_and_compile(
../../.venv/lib/python3.10.../torch/_inductor/compile_fx.py:565: in fx_codegen_and_compile
    compiled_fn = graph.compile_to_fn()
../../.venv/lib/python3.10.../torch/_inductor/graph.py:970: in compile_to_fn
    return self.compile_to_module().call
../../.venv/lib/python3.10.../torch/_dynamo/utils.py:189: in time_wrapper
    r = func(*args, **kwargs)
../../.venv/lib/python3.10.../torch/_inductor/graph.py:938: in compile_to_module
    code, linemap = self.codegen()
../../.venv/lib/python3.10.../torch/_inductor/graph.py:915: in codegen
    self.scheduler.codegen()
../../.venv/lib/python3.10.../torch/_dynamo/utils.py:189: in time_wrapper
    r = func(*args, **kwargs)
../../.venv/lib/python3.10.../torch/_inductor/scheduler.py:1690: in codegen
    self.get_backend(device).codegen_nodes(node.get_nodes())
../../.venv/lib/python3.10.../_inductor/codegen/cpp.py:2805: in codegen_nodes
    cpp_kernel_proxy = CppKernelProxy(kernel_group)
../../.venv/lib/python3.10.../_inductor/codegen/cpp.py:2352: in __init__
    self.picked_vec_isa: codecache.VecISA = codecache.pick_vec_isa()
../../.venv/lib/python3.10.../torch/_inductor/codecache.py:633: in pick_vec_isa
    _valid_vec_isa_list: List[VecISA] = valid_vec_isa_list()
../../.venv/lib/python3.10.../torch/_inductor/codecache.py:627: in valid_vec_isa_list
    if str(isa) in _cpu_info_content and isa:
../../.venv/lib/python3.10.../torch/_inductor/codecache.py:548: in __bool__
    cpp_compile_command(
../../.venv/lib/python3.10.../torch/_inductor/codecache.py:878: in cpp_compile_command
    ipaths, lpaths, libs, macros = get_include_and_linking_paths(
../../.venv/lib/python3.10.../torch/_inductor/codecache.py:747: in get_include_and_linking_paths
    from torch.utils import cpp_extension
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

    import copy
    import glob
    import importlib
    import importlib.abc
    import os
    import re
    import shlex
    import shutil
    import setuptools
    import subprocess
    import sys
    import sysconfig
    import warnings
    import collections
    from pathlib import Path
    import errno
    
    import torch
    import torch._appdirs
    from .file_baton import FileBaton
    from ._cpp_extension_versioner import ExtensionVersioner
    from .hipify import hipify_python
    from .hipify.hipify_python import GeneratedFileCleaner
    from typing import Dict, List, Optional, Union, Tuple
    from torch.torch_version import TorchVersion
    
    from setuptools.command.build_ext import build_ext
>   from pkg_resources import packaging  # type: ignore[attr-defined]
E   torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
E   ModuleNotFoundError: No module named 'pkg_resources'
E   
E   Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
E   
E   
E   You can suppress this exception and fall back to eager by setting:
E       import torch._dynamo
E       torch._dynamo.config.suppress_errors = True

../../.venv/lib/python3.10.../torch/utils/cpp_extension.py:28: BackendCompilerFailed

To view more test analytics, go to the Test Analytics Dashboard


Development

Successfully merging this pull request may close these issues.

TQDMProgressBar calls tqdm.refresh instead of tqdm.update.