Conversation
- Expand Manual Retry API section in pgqueuer.md with full API reference
- Include all methods: get_failed_jobs, get_log_entry, retry_failed_job, retry_failed_jobs
- Add error details access and best practices
- Remove separate manual-retry.md page
- Update index.rst toctree
Pull request overview
This PR introduces a Manual Retry API that enables retrieving failed jobs from the log table and re-enqueueing them on demand. This is designed for scenarios where jobs should execute once without automatic retries, with the client deciding whether to retry based on business logic or human review.
Key Changes:
- Added database schema columns (`payload`, `headers`, `retried_as`) to the log table to support retry functionality
- Implemented new query methods: `get_failed_jobs()`, `get_log_entry()`, `retry_failed_job()`, and `retry_failed_jobs()`
- Added comprehensive test coverage for the manual retry functionality, including pagination, filtering, and error handling
Reviewed changes
Copilot reviewed 6 out of 6 changed files in this pull request and generated 3 comments.
| File | Description |
|---|---|
| test/test_queries.py | Updated test fixture to include new log table columns (id, payload, headers, retried_as) |
| test/test_manual_retry.py | Added comprehensive test suite for manual retry API functionality |
| pgqueuer/queries.py | Implemented four new methods for retrieving and retrying failed jobs |
| pgqueuer/qb.py | Added SQL queries for manual retry operations and updated schema with new columns |
| pgqueuer/models.py | Extended Log model with id, payload, headers, and retried_as fields |
| docs/pgqueuer.md | Added documentation section explaining manual retry API usage and best practices |
```python
result = []
for log_id in log_ids:
    job_id = await self.retry_failed_job(
        log_id,
        priority=priority,
        execute_after=execute_after,
    )
    if job_id is not None:
        result.append(job_id)
return result
```
The retry_failed_jobs method performs sequential database operations in a loop. For bulk retry operations, this could lead to poor performance. Consider using asyncio.gather() or a single SQL query with UNNEST() to process all retries in parallel or as a single batch operation.
Suggested change:

```diff
-result = []
-for log_id in log_ids:
-    job_id = await self.retry_failed_job(
-        log_id,
-        priority=priority,
-        execute_after=execute_after,
-    )
-    if job_id is not None:
-        result.append(job_id)
-return result
+tasks = [
+    self.retry_failed_job(
+        log_id,
+        priority=priority,
+        execute_after=execute_after,
+    )
+    for log_id in log_ids
+]
+results = await asyncio.gather(*tasks)
+return [job_id for job_id in results if job_id is not None]
```
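For very large batches, the comment's other suggestion, a single `UNNEST()`-based statement, avoids per-row round trips entirely. Below is a minimal sketch in the style of the qb.py query builders; the builder function, column layout, and status value are illustrative assumptions, not pgqueuer's actual schema or API.

```python
# Hypothetical sketch: names and schema are illustrative, not pgqueuer's own.
def build_batch_retry_query(queue_table: str, log_table: str) -> str:
    """Build one INSERT...SELECT that re-enqueues many failed log entries."""
    return f"""
    WITH targets AS (
        SELECT unnest($1::bigint[]) AS log_id
    )
    INSERT INTO {queue_table} (entrypoint, priority, payload, headers)
    SELECT l.entrypoint, COALESCE($2, l.priority), l.payload, l.headers
    FROM {log_table} AS l
    JOIN targets AS t ON t.log_id = l.id
    WHERE l.status = 'exception'      -- assumed failed-job status
      AND l.retried_as IS NULL        -- skip entries already retried
    RETURNING id;
    """
```

A single statement also makes it easier to stamp the source rows' `retried_as` column in the same transaction, so a crash between retries cannot leave half the batch re-enqueued.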
```python
        str: The SQL query string for fetching failed jobs.
    """
    return f"""
    SELECT * FROM {self.settings.queue_table_log}
```
Using SELECT * in queries can lead to maintenance issues if columns are added to the table. Consider explicitly listing the columns needed to avoid potential issues with column order changes or unexpected data being returned.
```python
    return f"""
    SELECT * FROM {self.settings.queue_table_log}
```
Using SELECT * in queries can lead to maintenance issues if columns are added to the table. Consider explicitly listing the columns needed to avoid potential issues with column order changes or unexpected data being returned.
Suggested change:

```diff
-return f"""
-SELECT * FROM {self.settings.queue_table_log}
+# Explicitly list columns to avoid SELECT *
+return f"""
+SELECT id, entrypoint, status, retried_as, priority, payload, headers, created_at, updated_at
+FROM {self.settings.queue_table_log}
```
Adds a Manual Retry API that allows retrieving failed jobs from the log table and re-enqueueing them on demand. This is useful when jobs should execute once and the client decides whether to retry.
Closes #506
Example
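A minimal usage sketch, assuming the asyncpg driver setup from pgqueuer's documentation; the `should_retry` helper and the exact signatures of the new methods are illustrative, not confirmed:

```python
import asyncio

import asyncpg

from pgqueuer.db import AsyncpgDriver
from pgqueuer.queries import Queries


def should_retry(entry) -> bool:
    """Hypothetical business-logic check on a failed log entry."""
    return True


async def main() -> None:
    connection = await asyncpg.connect()
    queries = Queries(AsyncpgDriver(connection))

    # Jobs ran once with no automatic retries; inspect failures on demand.
    failed = await queries.get_failed_jobs()
    for entry in failed:
        if should_retry(entry):
            new_job_id = await queries.retry_failed_job(entry.id)
            print(f"Re-enqueued log entry {entry.id} as job {new_job_id}")


asyncio.run(main())
```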