manual retry API #507
Status: Draft. janbjorge wants to merge 1 commit into main from consolidate-manual-retry-docs.
```diff
@@ -260,12 +260,16 @@ def build_install_query(self) -> str:
     priority INT NOT NULL,
     entrypoint TEXT NOT NULL,
     traceback JSONB DEFAULT NULL,
-    aggregated BOOLEAN DEFAULT FALSE
+    aggregated BOOLEAN DEFAULT FALSE,
+    payload BYTEA,
+    headers JSONB,
+    retried_as INTEGER
 );
 CREATE INDEX {self.settings.queue_table_log}_not_aggregated ON {self.settings.queue_table_log} ((1)) WHERE not aggregated;
 CREATE INDEX {self.settings.queue_table_log}_created ON {self.settings.queue_table_log} (created);
 CREATE INDEX {self.settings.queue_table_log}_status ON {self.settings.queue_table_log} (status);
 CREATE INDEX {self.settings.queue_table_log}_job_id_status ON {self.settings.queue_table_log} (job_id, created DESC);
+CREATE INDEX {self.settings.queue_table_log}_retried_as ON {self.settings.queue_table_log} (retried_as) WHERE retried_as IS NOT NULL;

 CREATE {durability_policy.statistics_table} TABLE {self.settings.statistics_table} (
     id SERIAL PRIMARY KEY,
```
```diff
@@ -454,6 +458,11 @@ def build_upgrade_queries(self) -> Generator[str, None, None]:
         yield f"CREATE UNIQUE INDEX IF NOT EXISTS {self.settings.queue_table}_unique_dedupe_key ON {self.settings.queue_table} (dedupe_key) WHERE ((status IN ('queued', 'picked') AND dedupe_key IS NOT NULL));"  # noqa
         yield f"CREATE INDEX IF NOT EXISTS {self.settings.queue_table_log}_job_id_status ON {self.settings.queue_table_log} (job_id, created DESC);"  # noqa: E501
         yield f"ALTER TABLE {self.settings.queue_table} ADD COLUMN IF NOT EXISTS headers JSONB;"  # noqa: E501
+        # Manual retry API: store payload and headers in log table for retry capability
+        yield f"ALTER TABLE {self.settings.queue_table_log} ADD COLUMN IF NOT EXISTS payload BYTEA;"  # noqa: E501
+        yield f"ALTER TABLE {self.settings.queue_table_log} ADD COLUMN IF NOT EXISTS headers JSONB;"  # noqa: E501
+        yield f"ALTER TABLE {self.settings.queue_table_log} ADD COLUMN IF NOT EXISTS retried_as INTEGER;"  # noqa: E501
+        yield f"CREATE INDEX IF NOT EXISTS {self.settings.queue_table_log}_retried_as ON {self.settings.queue_table_log} (retried_as) WHERE retried_as IS NOT NULL;"  # noqa: E501

     def build_table_has_column_query(self) -> str:
         """
```
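The upgrade path relies on every statement being idempotent (`IF NOT EXISTS`), so the whole sequence can be replayed against a database that already has some of the columns. A minimal sketch of that pattern, using a hypothetical `Settings` stand-in (the table name `pgqueuer_log` is assumed, not taken from the diff):

```python
from dataclasses import dataclass
from typing import Generator


@dataclass
class Settings:
    # Hypothetical stand-in for the project's settings object.
    queue_table_log: str = "pgqueuer_log"


def build_upgrade_queries(settings: Settings) -> Generator[str, None, None]:
    # Each DDL statement is guarded by IF NOT EXISTS, so re-running the full
    # sequence against an already-upgraded database is a no-op.
    yield f"ALTER TABLE {settings.queue_table_log} ADD COLUMN IF NOT EXISTS payload BYTEA;"
    yield f"ALTER TABLE {settings.queue_table_log} ADD COLUMN IF NOT EXISTS headers JSONB;"
    yield f"ALTER TABLE {settings.queue_table_log} ADD COLUMN IF NOT EXISTS retried_as INTEGER;"


queries = list(build_upgrade_queries(Settings()))
assert all("IF NOT EXISTS" in q for q in queries)
```

Because the statements are generated rather than hard-coded, the same builder works for any configured log-table name.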
```diff
@@ -896,16 +905,17 @@ def build_log_job_query(self) -> str:
         Constructs an SQL query that deletes specified jobs from the queue table
         and inserts corresponding entries into the statistics (log) table.
         It captures details such as priority, entrypoint, time in queue,
-        creation time, and final status. The query uses upsert logic to handle
-        conflicts and aggregate counts.
+        creation time, final status, payload, and headers. The query uses upsert
+        logic to handle conflicts and aggregate counts. Payload and headers are
+        preserved to support manual retry functionality.

         Returns:
             str: The SQL query string to log jobs.
         """
         return f"""WITH deleted AS (
             DELETE FROM {self.settings.queue_table}
             WHERE id = ANY($1::integer[])
-            RETURNING id, entrypoint, priority
+            RETURNING id, entrypoint, priority, payload, headers
         ), job_status AS (
             SELECT
                 UNNEST($1::integer[]) AS id,
@@ -917,7 +927,9 @@ def build_log_job_query(self) -> str:
                 job_status.status AS status,
                 job_status.traceback AS traceback,
                 deleted.entrypoint AS entrypoint,
-                deleted.priority AS priority
+                deleted.priority AS priority,
+                deleted.payload AS payload,
+                deleted.headers AS headers
             FROM job_status
             INNER JOIN deleted
                 ON deleted.id = job_status.id
@@ -927,9 +939,11 @@ def build_log_job_query(self) -> str:
                 status,
                 entrypoint,
                 priority,
-                traceback
+                traceback,
+                payload,
+                headers
             )
-            SELECT id, status, entrypoint, priority, traceback FROM merged
+            SELECT id, status, entrypoint, priority, traceback, payload, headers FROM merged
             """

     def build_truncate_log_statistics_query(self) -> str:
```
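Carrying `payload` and `headers` into the log table is what makes a later manual retry possible: a retry can rebuild a new queue row from the failed log entry alone. A hypothetical sketch of that step (the `build_retry_row` helper and the dict shapes are illustrative, not part of the diff):

```python
def build_retry_row(log_row: dict) -> dict:
    # The new job reuses the original payload and headers preserved in the
    # log table; the old log row would then be marked via retried_as so it
    # is excluded from future failed-job listings.
    return {
        "entrypoint": log_row["entrypoint"],
        "priority": log_row["priority"],
        "payload": log_row["payload"],
        "headers": log_row["headers"],
    }


failed = {
    "id": 42,
    "entrypoint": "send_email",
    "priority": 0,
    "payload": b"user-123",
    "headers": {"trace": "abc"},
}
new_job = build_retry_row(failed)
assert new_job["payload"] == b"user-123"
```

Without the preserved columns, the payload would be lost at log time and a retry could only re-run the entrypoint with no arguments.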
```diff
@@ -1006,6 +1020,87 @@ def build_delete_log_query(self) -> str:
     def build_fetch_log_query(self) -> str:
         return f"SELECT * FROM {self.settings.queue_table_log}"

+    def build_get_failed_jobs_query(self) -> str:
+        """
+        Generate SQL query to retrieve failed jobs from the log table.
+
+        Returns failed jobs (status='exception') that have not been retried yet,
+        with optional filtering by entrypoint. Results are ordered by id descending
+        (newest first) and support cursor-based pagination via after_id.
+
+        Returns:
+            str: The SQL query string for fetching failed jobs.
+        """
+        return f"""
+        SELECT * FROM {self.settings.queue_table_log}
+        WHERE status = 'exception'
+            AND retried_as IS NULL
+            AND ($1::text[] IS NULL OR entrypoint = ANY($1))
+            AND ($2::bigint IS NULL OR id < $2)
+        ORDER BY id DESC
+        LIMIT $3
+        """
+
+    def build_get_log_entry_query(self) -> str:
+        """
+        Generate SQL query to retrieve a specific log entry by ID.
+
+        Returns:
+            str: The SQL query string for fetching a log entry.
+        """
+        return f"""
+        SELECT * FROM {self.settings.queue_table_log}
```
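The `id < $2` predicate combined with `ORDER BY id DESC` gives keyset pagination: each page is fetched strictly below the last id of the previous page, so no rows are skipped or duplicated as new failures arrive. A runnable sketch against an in-memory stand-in for the log table (the `get_failed_jobs` helper and row shapes are illustrative, not the library's API):

```python
from typing import Optional

# In-memory stand-in for the log table; real code would run
# build_get_failed_jobs_query against Postgres.
LOG = [
    {"id": i, "status": "exception", "retried_as": None, "entrypoint": "send_email"}
    for i in range(1, 8)
]


def get_failed_jobs(after_id: Optional[int], limit: int) -> list[dict]:
    # Mirrors the query's WHERE / ORDER BY / LIMIT: un-retried failures,
    # newest first, strictly below the cursor.
    rows = [
        r
        for r in LOG
        if r["status"] == "exception"
        and r["retried_as"] is None
        and (after_id is None or r["id"] < after_id)
    ]
    rows.sort(key=lambda r: r["id"], reverse=True)
    return rows[:limit]


page1 = get_failed_jobs(None, 3)             # ids 7, 6, 5
page2 = get_failed_jobs(page1[-1]["id"], 3)  # ids 4, 3, 2
```

The cursor is the smallest id on the previous page, which stays valid even if earlier rows are deleted between requests.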
Review comment on lines +1051 to +1052:

```python
        return f"""
        SELECT * FROM {self.settings.queue_table_log}
```

Suggested change:

```python
        # Explicitly list columns to avoid SELECT *
        return f"""
        SELECT id, entrypoint, status, retried_as, priority, payload, headers, created_at, updated_at
        FROM {self.settings.queue_table_log}
```
Using `SELECT *` in queries can lead to maintenance issues if columns are added to the table. Consider explicitly listing the columns needed to avoid potential issues with column order changes or unexpected data being returned.