
fix[cache]: replace double dict cache logic#532

Merged
peterdudfield merged 3 commits into main from fix/cache-ttl
Feb 21, 2026

Conversation


@braddf braddf commented Feb 20, 2026

Pull Request

Description

Simplify caching logic by using standard cache with TTL (time-to-live) functionality to handle expiry of cached route responses.
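As a rough sketch of the pattern being described (a minimal stdlib version; the PR itself likely uses a library type such as `cachetools.TTLCache`, and the names here are illustrative, not taken from the codebase):

```python
import functools
import time


def ttl_cache(ttl_seconds=120):
    """Cache a function's results, expiring entries after ttl_seconds.

    Minimal sketch of the TTL pattern: a single dict of
    key -> (timestamp, value), replacing double-dict bookkeeping.
    """
    def decorator(func):
        cache = {}  # key -> (monotonic timestamp, cached value)

        @functools.wraps(func)
        def wrapper(*args):
            now = time.monotonic()
            entry = cache.get(args)
            if entry is not None and now - entry[0] < ttl_seconds:
                return entry[1]  # fresh hit: return early with cached value
            value = func(*args)
            cache[args] = (now, value)
            return value

        wrapper.cache = cache  # exposed so tests can clear state between runs
        return wrapper
    return decorator
```

On a hit within the TTL the wrapper returns early with the cached value; expiry falls out of the timestamp comparison rather than any manual dict-swapping logic.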

Future notes (for Quartz API, if applicable):

  • This in-memory cache still doesn't serve our multi-instance deployment well in production; Redis or similar would be a better solution, but we might be able to scale back to 1 instance anyway in the short to medium term.
  • I'm not convinced about the "default" behaviour of this middleware being the cached data, intuitively I think we should return early if the cached data is found, and then fall back to requesting the data if it's not. Just semantics though.
  • Overall, I'm still of the mind that this should be using a well-structured caching library instead of rolling our own, unless we need to do special things that common OS libraries can't – convo for future!

How Has This Been Tested?

  • Locally against the dev DB, with manual API requests and automated runs using Postman Runner: 20 virtual users making continuous concurrent requests over a span of 5 minutes.

If your changes affect data processing, have you plotted any changes? i.e. have you done a quick sanity check?

  • Yes

Checklist:

  • My code follows OCF's coding style guidelines
  • I have performed a self-review of my own code
  • I have made corresponding changes to the documentation
  • I have added tests that prove my fix is effective or that my feature works
  • I have checked my code and corrected any misspellings


braddf commented Feb 20, 2026

Something flaky in the tests, I think due to the cache persisting between runs; we might want to add explicit logic to clear it between test runs. The monotonic timer that TTLCache uses is affected by the freezegun functions, which I think actually makes everything more consistent, but throws up things like this!
If we expose the cache and currently_running instances on the returned wrapper, we can clear them in a pytest.fixture to ensure this is clean between tests.
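That cleanup could look something like the following (names are hypothetical, not taken from the codebase; sketched as a plain helper that an autouse pytest fixture would call):

```python
def reset_cached_route(wrapper):
    """Clear the cache and in-flight request tracking exposed on a
    cached-route wrapper, so cached state (and TTL timestamps frozen
    by freezegun) cannot leak between test runs."""
    wrapper.cache.clear()
    wrapper.currently_running.clear()


# In conftest.py this could run automatically before each test, e.g.:
#
# @pytest.fixture(autouse=True)
# def clean_cache():
#     reset_cached_route(get_forecast)  # hypothetical cached route
#     yield
```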

@braddf braddf requested a review from peterdudfield February 20, 2026 18:12

@peterdudfield peterdudfield left a comment


Nice one! I tested it locally too and it does seem to keep the memory much lower.
(I don't really know how, but I think I'm ok with that)

@peterdudfield peterdudfield merged commit f710d5c into main Feb 21, 2026
2 checks passed
@peterdudfield peterdudfield deleted the fix/cache-ttl branch February 21, 2026 17:59