Large memory overhead when getting stress results #1664

Open
@janvonrickenbach

Description

Before submitting the issue

  • I have checked for Compatibility issues
  • I have searched among the existing issues
  • I am using a Python virtual environment

Description of the bug

Consider the snippet below, which measures the peak memory consumption of reading a relatively large rst file (1 million elements, 1 GB size). The file can be found here. The output is as follows:

Min used memory: 191.89 MB
Max used memory: 3267.09 MB
End data memory consumption: 137.6 MB

This means that the peak memory consumption for getting 137.6 MB of data is about 3.2 GB (roughly a 24x overhead), which seems much too high.

Some observations:

  • The original model I tested with actually has 10 time steps, and the memory consumption there is about 27 GB. The memory consumption appears to scale roughly with the requested data size (see the first sketch after the script below).
  • When requesting elemental or elemental_nodal data, the overhead is smaller (about 1.5 GB peak memory consumption; see the second sketch after the script below).
  • The memory consumption is independent of the requested stress type (von Mises, single component, all components).
import datetime
import logging
import time
from threading import Thread

import psutil

import ansys.dpf.core as dpf

log = logging.getLogger(__name__)


def to_mb(x):
    """Convert a byte count to megabytes, rounded to two decimals."""
    return round(x / 1024.0 / 1024.0, 2)


class MemoryMonitor:
    """Samples the resident set size (RSS) of the current process on a background thread."""

    def __init__(self, interval=0.005):
        self._stop = False
        self._thread = Thread(target=self._monitor, args=(interval,), daemon=True)
        self._memory_used = []
        self._start_time = datetime.datetime.now()
        self._process = psutil.Process()

    @property
    def memory_used(self):
        return self._memory_used

    def _monitor(self, interval):
        # Record (elapsed seconds, RSS in bytes) tuples until stopped.
        while not self._stop:
            mem = self._process.memory_info().rss
            elapsed_seconds = (datetime.datetime.now() - self._start_time).total_seconds()
            self._memory_used.append((elapsed_seconds, mem))
            time.sleep(interval)

    def __enter__(self):
        self._thread.start()
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        self._stop = True
        self._thread.join(0.5)
        print(f"Min used memory: {to_mb(min(x[1] for x in self.memory_used))} MB")
        print(f"Max used memory: {to_mb(max(x[1] for x in self.memory_used))} MB")


model = dpf.Model(r"file.rst")
with MemoryMonitor():
    stress_op = model.results.stress()
    stress_op.inputs.requested_location(dpf.locations.nodal)
    fields_container = stress_op.outputs.fields_container()

assert len(fields_container) == 1
# The field data are float64 values, i.e. 8 bytes per entry.
print(f"End data memory consumption: {to_mb(fields_container[0].data.size * 8)} MB")

Steps To Reproduce

Run the script above

Which Operating System causes the issue?

Windows

Which DPF/Ansys version are you using?

DPF Server 2024.2.pre1

Which Python version causes the issue?

3.10

Installed packages

annotated-types 0.7.0
ansys-dpf-composites 0.4.dev0
ansys-dpf-core 0.12.3.dev0
ansys-dpf-post 0.8.1.dev0
anyio 4.4.0
backoff 2.2.1
backports.tarfile 1.2.0
build 1.2.1
cachetools 5.3.3
certifi 2024.6.2
cfgv 3.4.0
charset-normalizer 3.3.2
click 8.1.7
colorama 0.4.6
contourpy 1.2.1
coverage 7.5.3
cx_Freeze 6.15.16
cx_Logging 3.2.0
cycler 0.12.1
distlib 0.3.8
dnspython 2.6.1
docutils 0.21.2
email_validator 2.1.1
exceptiongroup 1.2.1
fast_simplification 0.1.7
fastapi 0.111.0
fastapi-cli 0.0.4
filelock 3.15.1
fonttools 4.53.0
google-api-core 2.19.0
google-api-python-client 2.133.0
google-auth 2.30.0
google-auth-httplib2 0.2.0
googleapis-common-protos 1.63.1
grpcio 1.64.1
h11 0.14.0
httpcore 1.0.5
httplib2 0.22.0
httptools 0.6.1
httpx 0.27.0
humanize 4.9.0
identify 2.5.36
idna 3.7
importlib_metadata 7.1.0
iniconfig 2.0.0
jaraco.classes 3.4.0
jaraco.context 5.3.0
jaraco.functools 4.0.1
Jinja2 3.1.4
keyring 25.2.1
kiwisolver 1.4.5
lief 0.14.1
markdown-it-py 3.0.0
MarkupSafe 2.1.5
marshmallow 3.21.3
marshmallow-oneofschema 3.1.1
matplotlib 3.9.0
mdurl 0.1.2
more-itertools 10.3.0
nh3 0.2.17
nodeenv 1.9.1
numpy 1.26.4
orjson 3.10.5
packaging 24.1
pandas 2.2.2
pillow 10.3.0
pip 22.2.1
pkginfo 1.11.1
platformdirs 4.2.2
plotly 5.22.0
pluggy 1.5.0
pooch 1.8.2
portend 3.2.0
pre-commit 3.7.1
proto-plus 1.23.0
protobuf 4.25.3
psutil 5.9.8
pyasn1 0.6.0
pyasn1_modules 0.4.0
pydantic 2.7.4
pydantic_core 2.18.4
pydantic-settings 2.3.3
Pygments 2.18.0
PyJWT 2.8.0
pyparsing 3.1.2
pyproject_hooks 1.1.0
pytest 8.2.2
pytest-cov 5.0.0
pytest-rerunfailures 14.0
pytest-timeout 2.3.1
python-dateutil 2.9.0.post0
python-dotenv 1.0.1
python-multipart 0.0.9
pytz 2024.1
pyvista 0.43.9
pywin32-ctypes 0.2.2
PyYAML 6.0.1
readme_renderer 43.0
requests 2.32.3
requests-toolbelt 1.0.0
rfc3986 2.0.0
rich 13.7.1
rsa 4.9
scooby 0.10.0
setuptools 63.2.0
shellingham 1.5.4
six 1.16.0
sniffio 1.3.1
starlette 0.37.2
tempora 5.5.1
tenacity 8.3.0
tomli 2.0.1
tqdm 4.66.4
twine 5.1.0
typer 0.12.3
typing_extensions 4.12.2
tzdata 2024.1
ujson 5.10.0
uritemplate 4.1.1
urllib3 2.2.1
uvicorn 0.30.1
virtualenv 20.26.2
vtk 9.3.0
watchfiles 0.22.0
websockets 12.0
wheel 0.43.0
zipp 3.19.2

    Labels

    bug: Something isn't working
    server: issue on the server side
