Reuse Configuration instance in Locales #3913
Conversation
lonvia left a comment
The constructor remains forgiving by allowing config=None for call sites where, for example, a configuration object is not available in context.
Looking at the line you cite here:
Nominatim/src/nominatim_api/v1/format.py
Line 135 in 5e965d5
locales = options.get('locales', Locales())
I notice it was badly written. Locales() will always be instantiated here, even when one is already available in the options dict. This should be: locales = options.get('locales') or Locales().
And I don't think the code path where Locales() is instantiated is actually reachable afterwards.
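For illustration, a minimal sketch of why the eager default matters; the Locales class below is a stand-in for the real nominatim_api class and the options dict is hypothetical:

```python
class Locales:
    def __init__(self):
        # Side effect only to make each construction visible in this sketch.
        print("Locales() instantiated")

options = {'locales': Locales()}             # prints once while building the dict

# dict.get evaluates its default argument eagerly, so a second, unused
# Locales() is constructed even though the key is present:
loc1 = options.get('locales', Locales())     # prints again

# The 'or' form only constructs the fallback when the lookup returns
# nothing (missing key or falsy value):
loc2 = options.get('locales') or Locales()   # no extra instantiation
```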
lonvia left a comment
Do you mind squashing your commits together into a single one? Then this would be ready to merge.
expect output names as argument and avoid redundant configuration initialization
a99953d to 4fd6162
Thank you!
Closes #3909
This PR rewires the Locales constructor to accept an existing Configuration instance via a config= parameter, instead of instantiating a new (empty) configuration on each call. The constructor remains forgiving by allowing config=None for call sites where, for example, a configuration object is not available in context. By reusing a single Configuration instance where possible, this avoids repeated configuration loading and reduces per-request overhead by 25-30%.
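A rough sketch of the constructor change described above, under stated assumptions: the Configuration stand-in, the get_str_list method, and the DEFAULT_LANGUAGE setting name are placeholders here, not necessarily the actual Nominatim code.

```python
from typing import List, Optional

class Configuration:
    """Stand-in for nominatim_api's Configuration; only the lookup used
    below is sketched, and the method name is an assumption."""
    def get_str_list(self, name: str) -> Optional[List[str]]:
        return None  # a real instance would read the loaded settings

class Locales:
    """Sketch of the reworked constructor described in the PR."""
    def __init__(self, langs: Optional[List[str]] = None,
                 config: Optional[Configuration] = None) -> None:
        # Previously a fresh, empty Configuration was created on every call;
        # now an existing instance can be handed in and reused, while
        # config=None keeps the old, forgiving behaviour.
        if langs is None and config is not None:
            # Setting name is illustrative, not the actual Nominatim key.
            langs = config.get_str_list('DEFAULT_LANGUAGE')
        self.languages = langs or []
```

A call site that already holds a loaded configuration can then pass it along (e.g. something like Locales(config=api.config), where api.config is assumed to exist) instead of letting the constructor build a fresh one per request.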