Conversation

@gagantrivedi (Member) commented Jun 5, 2025

Add TASK_PROCESSOR_MANUAL_MODE, which can be used in tests that depend on the task processor by manually calling run_tasks.

Usage example: https://github.com/Flagsmith/flagsmith-private/pull/29/files#diff-d390474f304862c0fc4fd05287524aa5702182dccec6db49ff58fab474996630R240


```python
ctx = ExitStack()
timer = metrics.flagsmith_task_processor_task_duration_seconds.time()
ctx.enter_context(timer)
```
@gagantrivedi (Member, Author) commented on the diff:

I had to do this because this is evaluated at import time; as a result, flagsmith_task_processor_task_duration_seconds is not part of the metrics.

Reply (Member):

I believe reload_metrics should solve this? See usage

@gagantrivedi (Member, Author) replied:

That still feels like a hack to me. I want this to be a first-class feature so that anyone updating or changing the task processor code is aware of this use case as well

@khvn26 (Member) commented Jun 5, 2025

There are currently two modes that affect the runtime behaviour of the task processor: TASK_PROCESSOR_MODE=True and TASK_PROCESSOR_MODE=False. Please note that, despite this being a Django setting, it cannot be set directly by the user as it is supposed to be set by the task processor entrypoint script.

While I understand the rationale behind adding a new TASK_PROCESSOR_MANUAL_MODE, I am not in favour of adding new behaviour to the runtime that is specific to tests. Although customisations on top of the original code may seem like a hack, I believe that the approach chosen by the task_processor_mode marker/fixture is less error-prone, as it maintains a clearer distinction between runtime code and test code.

My suggestion:

  1. Move the task_processor_mode marker/fixture to the test_tools plugin.
  2. In the plugin, expose a run_tasks: Callable[[int], list[TaskRun]] fixture that depends on the above fixture and sets the appropriate TASK_RUN_MODE.
  3. In the main app, add a check for the PYTEST_CURRENT_TEST environment variable and add all the URLs if it's present. I realise this contradicts my point above, but at least it affects import time rather than runtime, and it's only in one place.

@gagantrivedi (Member, Author) replied:

> There are currently two modes that affect the runtime behaviour of the task processor: TASK_PROCESSOR_MODE=True and TASK_PROCESSOR_MODE=False. Please note that, despite this being a Django setting, it cannot be set directly by the user as it is supposed to be set by the task processor entrypoint script.

That's not entirely true, though? We do have DOCGEN_MODE?

@khvn26 (Member) commented Jun 5, 2025

@gagantrivedi The DOCGEN_MODE setting only affects import time. Its purpose is to ensure that all Prometheus metrics are advertised when the docgen command is run.

@gagantrivedi (Member, Author) commented Jun 5, 2025

@khvn26 I get your point. I'd argue that a manual mode intuitively makes sense for the task processor compared to all the fixture/patching stuff. IMO, run_tasks shouldn't really care about whether it's being called with TASK_PROCESSOR_MODE or not

Digging into this general idea of adding features just to make developers' lives easier (even if unrelated to the main goal), I found that quite a few libraries do that.

@khvn26 (Member) commented Jun 5, 2025

> Digging into this general idea of adding features just to make developers' lives easier (even if unrelated to the main goal), I found that quite a few libraries do that.

This is why I'm inclined to expose a fixture with a clear purpose and a use case rather than another, arguably vague, Django setting.

@emyller (Contributor) commented Jun 5, 2025

I understand the value added in this patch, though I'm inclined to agree with @khvn26 on the design side, as I also don't favor implementing runtime code only to support tests. I don't like hacking through import time either, if I can possibly look elsewhere for a better design decision that might favor both the app and tests.

Though I'm unsure if this is the only pain point solved by this PR, I suggest adding a new setting ENABLE_API_URLS = not TASK_PROCESSOR_MODE (default) — it sounds specific and can be changed within task processor tests to forcefully enable URLs.

Note that changing this new setting within a test might not immediately enable URLs because the urlconf module has already been loaded. If that proves true, I believe pytest-django's urls marker might somehow help.
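The suggested gate could look roughly like this in the root urlconf. This is a sketch under assumptions: plain strings stand in for django.urls path()/include() entries so it runs without a configured Django project, and the setting names mirror the suggestion above:

```python
# Sketch of the proposed ENABLE_API_URLS gate. Plain strings stand in for
# django.urls path()/include() entries so this runs without Django settings.
TASK_PROCESSOR_MODE = False  # normally set by the task processor entrypoint

# Proposed default: mount the API URLs unless running as the task processor.
ENABLE_API_URLS = not TASK_PROCESSOR_MODE

urlpatterns: list[str] = []

if ENABLE_API_URLS:
    urlpatterns += ["api/v1/"]  # stand-in for include("api.urls")
```

In a test, overriding ENABLE_API_URLS would flip the gate, subject to the urlconf-reload caveat noted above.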

@khvn26 (Member) commented Jun 5, 2025

> adding a new setting ENABLE_API_URLS = not TASK_PROCESSOR_MODE

Great idea 👍 This could be extended to ENABLE_API_URLS = (not TASK_PROCESSOR_MODE) or "PYTEST_CURRENT_TEST" in os.environ if we proceed with my suggestions.

> changing this new setting within a test might not immediately enable URLs because the module has already been loaded

I haven't personally checked this, but according to @gagantrivedi's experience, the URL configs are reloaded whenever the settings: SettingsWrapper fixture is used, which is why this PR exists.

> pytest-django's urls marker

Nice find, and potentially the cleanest solution yet 👍 Having different urlconf modules for API and Task processor sounds nicer for runtime purposes as well.

@Zaimwa9 (Contributor) commented Jun 5, 2025

Just to rephrase the problem from where I stand:

The issue is that setting TASK_PROCESSOR_MODE = True causes Django to skip URL registration, which breaks integration tests that rely on hitting API endpoints.
So we want:

  • Recurring tasks not registered
  • API endpoints mounted
  • Being able to call run_tasks (blocked by TASK_PROCESSOR_MODE = False)

I'm somewhere in between. I agree that being able to run tasks manually in tests is a great tool to speed up development and confidence, but I also lean towards @khvn26's concern about introducing test-specific behaviour into runtime code.
Not sure if we have explored all the solutions yet, but I'd be happy to provide a second (third) pair of eyes.

@matthewelwell (Contributor) commented:

I'd say that my standpoint is that I definitely align most with this sentiment:

> I am not in favour of adding new behaviour to the runtime that is specific to tests

That being said, in general, the solution in this PR is pretty light on the runtime interference. I think it's hard to make a decision here without seeing a PoC for @khvn26's suggestion as I'm having a hard time envisaging it without anything tangible.

@gagantrivedi gagantrivedi deleted the feat/manual-mode branch October 31, 2025 06:50