*Wavelet-aware adversarial perturbations. (a) A PGD-based attack with small-magnitude noise: visually clean yet disruptive after compression. (b) The proposed wavelet-aware attack is equally imperceptible but stealthier. (c) Wavelet coefficients of (a) reveal widespread noise in flat regions. (d) Coefficients of (b) closely resemble the clean input, indicating reduced detectability.*
Official implementation of "T-MLA: A targeted multiscale log–exponential attack framework for neural image compression" (Information Sciences, Q1).
Neural image compression (NIC) has become the state of the art in rate-distortion performance, yet its security remains far less understood than that of classifiers. Existing adversarial attacks on NICs are mostly naive adaptations of pixel-space methods that ignore the structured nature of the compression pipeline. We expose a more advanced class of vulnerabilities by introducing T-MLA, the first targeted multiscale log–exponential attack framework. T-MLA crafts adversarial perturbations in the wavelet domain and concentrates them on less perceptually salient coefficients, which improves the stealth of the attack. Across multiple state-of-the-art NIC architectures on standard image compression benchmarks, T-MLA causes a large, targeted drop in reconstruction quality while its perturbations remain visually imperceptible: at comparable attack success, the perturbed inputs score higher PSNR/VIF than those of PGD-style baselines. Our findings reveal a critical security flaw at the core of generative and content delivery pipelines.
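To make the idea concrete, here is a minimal sketch (not T-MLA itself, which optimizes a targeted multiscale log–exponential objective): decompose the input with a 2D discrete wavelet transform, perturb only the detail coefficients, and reconstruct. The wavelet, decomposition level, and noise model below are illustrative assumptions.

```python
# Minimal sketch of a wavelet-domain perturbation (NOT the full T-MLA
# optimization): the attack budget is spent on detail coefficients only,
# leaving the low-frequency approximation band (flat regions) clean,
# which is why panel (d) above resembles the clean input. The wavelet,
# level, and noise model here are illustrative assumptions.
import numpy as np
import pywt  # PyWavelets

def perturb_detail_bands(img: np.ndarray, eps: float = 0.01, seed: int = 0) -> np.ndarray:
    """img: grayscale array in [0, 1]; returns a perturbed copy."""
    rng = np.random.default_rng(seed)
    coeffs = pywt.wavedec2(img, "db2", level=2)  # [cA2, (cH2, cV2, cD2), (cH1, cV1, cD1)]
    perturbed = [coeffs[0]]                      # approximation band stays untouched
    for band in coeffs[1:]:
        perturbed.append(tuple(c + eps * rng.standard_normal(c.shape) for c in band))
    recon = pywt.waverec2(perturbed, "db2")
    # waverec2 may pad odd-sized inputs by one row/column; crop and clamp
    return np.clip(recon[: img.shape[0], : img.shape[1]], 0.0, 1.0)
```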

Two ways to use this repo:

- Package only: use in your own code (`import tmla`).
- Full setup: run `scripts/` and `demo.ipynb`; requires LIC_TCM, weights, and data via `init_project.py`.
PyPI:

```bash
pip install tmla
```

GitHub (dev):

```bash
pip install git+https://github.com/nkalmykovsk/tmla.git
```

From source: clone, create a venv, install, then run `init_project.py` (fetches the model and data):
```bash
git clone https://github.com/nkalmykovsk/tmla.git
cd tmla
python3 -m venv .venv && source .venv/bin/activate
pip install -r requirements.txt
pip install -e .
python3 init_project.py  # LIC_TCM + data + weights
```

If you already installed the package (PyPI/GitHub) and only need `scripts/`, the demo, and data: clone the repo, then run in the same env (or install the dependencies there):
```bash
git clone https://github.com/nkalmykovsk/tmla.git
cd tmla
python3 init_project.py
```

Docker: the image ships the dependencies; you still run `init_project.py` inside the container once:
```bash
docker build -t tmla-dev:latest .
docker run -d --gpus all --name tmla-dev -v "$(pwd)":/app -w /app tmla-dev:latest tail -f /dev/null
docker exec -it tmla-dev python3 init_project.py
```

Repository layout:

- `init_project.py`: setup script (clones LIC_TCM, downloads data and weights).
- `LIC_TCM/`: created by `init_project.py`; the TCM model is loaded from here at runtime.
- `tmla/`: Python package: `config`, `attacks` (multiscale, decomposition, reconstruction, metrics), `tcm` (LIC_TCM loader), `utils`; see the import sketch after this list.
- `scripts/`: CLI: `run_attack.py`, `run_batch_parallel.py`, `compute_entropy.py`, `collect_metrics.py`, `build_chart.py`.
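For package-only use, a minimal sketch of the imports: the submodule names come from the layout above, while their public functions are deliberately left out here because this README does not document them (see `scripts/run_attack.py` for a working example).

```python
# Package-only use: submodule names follow the repository layout above.
# Their exported functions are not documented in this README, so this
# sketch stops at imports; scripts/run_attack.py shows real usage.
import tmla
from tmla import config, attacks, tcm, utils  # config, attack routines, LIC_TCM loader, helpers
```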
Single image:
```bash
python3 scripts/run_attack.py --image path/to/image.png --model model_name
```

Batch (dataset × model list):
```bash
python3 scripts/run_batch_parallel.py
```

`compute_entropy.py` computes the normalized local Shannon entropy map and the global complexity score (as in the paper):
```bash
python3 scripts/compute_entropy.py path/to/image.png --save entropy_map.png --show
```
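As a reference for what the script computes, here is a minimal sketch of a normalized local Shannon entropy map; the window size, bin count, and normalization below are assumptions, and `scripts/compute_entropy.py` remains the authoritative implementation.

```python
# Sketch of a normalized local Shannon entropy map; window size, bin
# count, and the log2(bins) normalization are assumptions -- see
# scripts/compute_entropy.py for the paper's exact definition.
import numpy as np

def local_entropy_map(img: np.ndarray, win: int = 8, bins: int = 256):
    """img: grayscale array with values in [0, 255]. Returns (map, score)."""
    h, w = img.shape[0] // win, img.shape[1] // win
    ent = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            patch = img[i * win:(i + 1) * win, j * win:(j + 1) * win]
            p = np.bincount(patch.astype(np.uint8).ravel(), minlength=bins) / patch.size
            p = p[p > 0]
            ent[i, j] = -(p * np.log2(p)).sum()  # Shannon entropy in bits
    ent /= np.log2(bins)                          # normalize to [0, 1]
    return ent, float(ent.mean())                 # global complexity score: mean of the map
```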
If you use this code or the paper, please cite:

```bibtex
@article{kalmykov2026tmla,
  title     = {T-MLA: A Targeted Multiscale Log--Exponential Attack Framework for Neural Image Compression},
  author    = {Kalmykov, N. I. and Dibo, R. and Shen, K. and Zhonghan, X. and Phan, A. H. and Liu, Y. and Oseledets, I.},
  journal   = {Information Sciences},
  volume    = {702},
  pages     = {123143},
  year      = {2026},
  publisher = {Elsevier},
  doi       = {10.1016/j.ins.2025.123143}
}
```