Description
pynetbox version
v7.4.1
NetBox version
v4.1.7
Feature type
Change to existing functionality
Proposed functionality
Hi,
We've built a caching layer on top of pynetbox where responses are stored as JSON inside a Redis cache, and Record objects are reconstructed from there on a cache hit.
However, I've noticed that the __init__ of a Record object can still take a substantial amount of time due to the various processing steps it performs.
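For context, here is a minimal sketch of the caching pattern described above. The Redis host, key scheme, TTL, and the get_device helper are all illustrative placeholders, not our actual implementation:

```python
import json

import pynetbox
import redis
from pynetbox.core.response import Record

nb = pynetbox.api("https://netbox.example.com", token="...")  # placeholders
cache = redis.Redis(host="localhost", port=6379)

def get_device(device_id, ttl=300):
    key = f"pynetbox:dcim.devices:{device_id}"
    cached = cache.get(key)
    if cached is not None:
        # Cache hit: rebuild a Record from the stored JSON. Record.__init__
        # still does all of its parsing work here, which is the cost this
        # issue is about.
        return Record(json.loads(cached), nb, nb.dcim.devices)
    device = nb.dcim.devices.get(device_id)
    if device is not None:
        # dict(device) yields the nested, JSON-serializable representation.
        cache.setex(key, ttl, json.dumps(dict(device)))
    return device
```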
As a first step I have:
- simplified the _endpoint_from_url method so that it doesn't need urlsplit, leading to a 50% speed improvement of that method (sketched below)
- used marshal dump/load instead of deepcopy for cache initialization (also sketched below)
- refactored the _parse_values method to limit the number of instance type checks and the number of key/value pairs that need further processing
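To illustrate the first point, here is a rough sketch of deriving the app and endpoint name from a record URL with plain string operations instead of urllib.parse.urlsplit. The helper name and URLs are illustrative, not the actual pynetbox code:

```python
def app_and_name_from_url(url, base_url="https://netbox.example.com/api"):
    # e.g. "https://netbox.example.com/api/dcim/devices/123/" -> ("dcim", "devices")
    path = url[len(base_url):].strip("/")
    parts = path.split("/")
    return parts[0], parts[1]
```

And a sketch of the deepcopy-versus-marshal trade-off from the second point. The payload and iteration count are made up; marshal only handles built-in types, which is why it is suitable for plain JSON responses:

```python
import copy
import marshal
import timeit

payload = {
    "id": 1,
    "name": "device-01",
    "site": {"id": 7, "name": "dc1"},
    "tags": [{"id": 3, "name": "prod"}],
}

# Round-tripping through marshal is typically much faster than copy.deepcopy
# for this kind of data.
print("deepcopy:", timeit.timeit(lambda: copy.deepcopy(payload), number=100_000))
print("marshal: ", timeit.timeit(lambda: marshal.loads(marshal.dumps(payload)), number=100_000))
```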
All in all, according to a few benchmarks I ran, the whole Record __init__ shows a more than 50% speed improvement. Below is an example from loading 14,407 records.
Benchmark using cProfile with Pynetbox v7.4.1
ncalls tottime percall cumtime percall filename:lineno(function)
142153/14407 0.829 0.000 8.681 0.001 .venv/lib/python3.10/site-packages/pynetbox/core/response.py:278(__init__)
Benchmark using cProfile with our fork
ncalls tottime percall cumtime percall filename:lineno(function)
142153/14407 0.343 0.000 4.037 0.000 .venv/lib/python3.10/site-packages/pynetbox/core/response.py:263(__init__)
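For reproducibility, numbers like the ones above can be gathered along these lines. The NetBox URL, token, and endpoint are placeholders:

```python
import cProfile
import pstats

import pynetbox

nb = pynetbox.api("https://netbox.example.com", token="...")  # placeholders

with cProfile.Profile() as prof:
    records = list(nb.dcim.devices.all())  # forces Record.__init__ for every result

pstats.Stats(prof).sort_stats("cumulative").print_stats("response.py")
```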
Would you be interested in having any or all of these changes merged upstream?
Use case
General speed improvement of the library
External dependencies
None