Commit 066dcee

xinhe3 authored and XuehaoSun committed
[GAUDISW-245272] disable layer-wise test
Signed-off-by: xinhe3 <[email protected]>
1 parent e97aa96 commit 066dcee

File tree: 1 file changed (+2, -0 lines)

test/3x/torch/quantization/fp8_quant/test_layer_wise.py

Lines changed: 2 additions & 0 deletions
```diff
@@ -1,11 +1,13 @@
 import torch
+import pytest
 import habana_frameworks.torch.core as htcore

 from transformers import AutoModelForCausalLM, AutoTokenizer, AutoConfig
 from neural_compressor.torch.quantization import FP8Config, convert, prepare, finalize_calibration
 from neural_compressor.torch.utils import get_used_cpu_mem_MB


+@pytest.mark.skip(reason="https://github.com/huggingface/transformers/issues/43159")
 def test_two_step_layer_wise():
     # layer-wise is based on memory mapping technique and https://github.com/huggingface/transformers/pull/31771
     # Workaround of [SW-208658]: torch.use_deterministic_algorithms(True) will break memory mapping
```
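For context, the change disables the test by attaching pytest's `skip` marker, which removes the test at collection time rather than deleting it, so it can be re-enabled once the linked transformers issue is fixed. A minimal sketch of that pattern (the test name and reason string below are illustrative, not the commit's):

```python
import pytest

# Sketch of the pattern used in this commit: pytest.mark.skip disables a
# test at collection time. The reason string here is illustrative; the
# actual commit records the upstream tracking-issue URL as the reason.
@pytest.mark.skip(reason="disabled pending upstream fix")
def test_layer_wise_example():
    # Never executed: pytest reports the test as skipped, with the reason.
    raise AssertionError("this body does not run")
```

Recording the issue URL in `reason=` keeps the skip self-documenting in test reports (`pytest -rs` prints it), which is why a skip marker is usually preferred over commenting the test out.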

0 comments