Commit a24a7fc

Address PR review comments on production.md

- Move Redis data structures section lower as implementation details
- Fix metrics formatting from bold to proper headings
- Replace OpenTelemetry code example with documentation link
- Remove rolling updates section

1 parent c36f50a commit a24a7fc

File tree

3 files changed: +118 −205 lines changed

docs/getting-started.md

Lines changed: 4 additions & 4 deletions
```diff
@@ -187,10 +187,10 @@ You now know the core concepts: creating dockets, scheduling work with idempoten
 
 Ready for more? Check out:
 
-- **Dependencies Guide** - Access current docket, advanced retry patterns, timeouts, and custom dependencies
-- **Testing with Docket** - Ergonomic testing utilities for unit and integration tests
-- **Advanced Task Patterns** - Perpetual tasks, striking/restoring, logging, and task chains
-- **Docket in Production** - Redis architecture, monitoring, and deployment best practices
+- **[Dependencies Guide](dependencies.md)** - Access current docket, advanced retry patterns, timeouts, and custom dependencies
+- **[Testing with Docket](testing.md)** - Ergonomic testing utilities for unit and integration tests
+- **[Advanced Task Patterns](advanced-patterns.md)** - Perpetual tasks, striking/restoring, logging, and task chains
+- **[Docket in Production](production.md)** - Redis architecture, monitoring, and deployment best practices
 - **[API Reference](api-reference.md)** - Complete documentation of all classes and methods
 
 ## A Note on Security
```

docs/production.md

Lines changed: 15 additions & 37 deletions
```diff
@@ -6,17 +6,6 @@ Running Docket at scale requires understanding its Redis-based architecture, con
 
 Docket uses Redis streams and sorted sets to provide reliable task delivery with at-least-once semantics.
 
-### Data Storage Model
-
-Docket creates several Redis data structures for each docket:
-
-- **Stream (`{docket}:stream`)**: Ready-to-execute tasks using Redis consumer groups
-- **Sorted Set (`{docket}:queue`)**: Future tasks ordered by scheduled execution time
-- **Hashes (`{docket}:{key}`)**: Serialized task data for scheduled tasks
-- **Set (`{docket}:workers`)**: Active worker heartbeats with timestamps
-- **Set (`{docket}:worker-tasks:{worker}`)**: Tasks each worker can execute
-- **Stream (`{docket}:strikes`)**: Strike/restore commands for operational control
-
 ### Task Lifecycle
 
 Understanding how tasks flow through the system helps with monitoring and troubleshooting:
```
```diff
@@ -203,19 +192,19 @@ docket worker --metrics-port 9090
 
 Available metrics include:
 
-**Task Counters:**
+#### Task Counters
 - `docket_tasks_added` - Tasks scheduled
 - `docket_tasks_started` - Tasks begun execution
 - `docket_tasks_succeeded` - Successfully completed tasks
 - `docket_tasks_failed` - Failed tasks
 - `docket_tasks_retried` - Retry attempts
 - `docket_tasks_stricken` - Tasks blocked by strikes
 
-**Task Timing:**
+#### Task Timing
 - `docket_task_duration` - Histogram of task execution times
 - `docket_task_punctuality` - How close tasks run to their scheduled time
 
-**System Health:**
+#### System Health
 - `docket_queue_depth` - Tasks ready for immediate execution
 - `docket_schedule_depth` - Tasks scheduled for future execution
 - `docket_tasks_running` - Currently executing tasks
```
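The counters in the hunk above are exposed in the Prometheus text format, so a dashboard-free sanity check only needs a few lines of parsing. A minimal sketch (the sample lines, label values, and `parse_metrics` helper are illustrative, not part of Docket):

```python
import re

# Illustrative sample of Prometheus text-format lines, shaped like the
# counters a worker would expose on --metrics-port.
SAMPLE = """\
docket_tasks_added{docket="main",worker="w1"} 120
docket_tasks_succeeded{docket="main",worker="w1"} 110
docket_tasks_failed{docket="main",worker="w1"} 4
"""

LINE = re.compile(r'^(?P<name>\w+)\{(?P<labels>[^}]*)\}\s+(?P<value>\S+)$')

def parse_metrics(text: str) -> dict[str, float]:
    """Return {metric_name: value} for each well-formed sample line."""
    out = {}
    for line in text.splitlines():
        m = LINE.match(line)
        if m:
            out[m.group("name")] = float(m.group("value"))
    return out

metrics = parse_metrics(SAMPLE)
# Derived signal from the task counters listed above.
failure_rate = metrics["docket_tasks_failed"] / metrics["docket_tasks_added"]
```

This ignores `# HELP`/`# TYPE` comment lines and histogram buckets, which is enough for spot-checking the plain counters.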
```diff
@@ -224,6 +213,17 @@ Available metrics include:
 
 All metrics include labels for docket name, worker name, and task function name.
 
+### Redis Data Structures
+
+Docket creates several Redis data structures for each docket:
+
+- **Stream (`{docket}:stream`)**: Ready-to-execute tasks using Redis consumer groups
+- **Sorted Set (`{docket}:queue`)**: Future tasks ordered by scheduled execution time
+- **Hashes (`{docket}:{key}`)**: Serialized task data for scheduled tasks
+- **Set (`{docket}:workers`)**: Active worker heartbeats with timestamps
+- **Set (`{docket}:worker-tasks:{worker}`)**: Tasks each worker can execute
+- **Stream (`{docket}:strikes`)**: Strike/restore commands for operational control
+
 ### Health Checks
 
 Enable health check endpoints:
```
````diff
@@ -243,23 +243,7 @@ Docket automatically creates OpenTelemetry spans for task execution:
 - **Status**: Success/failure with error details
 - **Duration**: Complete task execution time
 
-Configure your OpenTelemetry exporter to send traces to your observability platform:
-
-```python
-from opentelemetry import trace
-from opentelemetry.exporter.jaeger.thrift import JaegerExporter
-from opentelemetry.sdk.trace import TracerProvider
-from opentelemetry.sdk.trace.export import BatchSpanProcessor
-
-# Configure tracing before creating workers
-trace.set_tracer_provider(TracerProvider())
-jaeger_exporter = JaegerExporter(
-    agent_host_name="jaeger",
-    agent_port=6831,
-)
-span_processor = BatchSpanProcessor(jaeger_exporter)
-trace.get_tracer_provider().add_span_processor(span_processor)
-```
+Configure your OpenTelemetry exporter to send traces to your observability platform. See the [OpenTelemetry Python documentation](https://opentelemetry.io/docs/languages/python/) for configuration examples with various backends like Jaeger, Zipkin, or cloud providers.
 
 ### Structured Logging
 
````
````diff
@@ -329,12 +313,6 @@ docket strike old_task_function
 # Scale down old workers after tasks drain
 ```
 
-**Rolling updates:**
-```bash
-# Update worker configuration gradually
-# Workers automatically reconnect and pick up new tasks
-```
-
 ### Error Handling
 
 **Configure appropriate retries:**
````
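For the "Configure appropriate retries" guidance this hunk ends on, a common policy is capped exponential backoff. A generic sketch of the delay schedule, not Docket's own retry API (consult its dependencies guide for the real mechanism):

```python
def backoff_delays(base: float = 1.0, cap: float = 60.0, attempts: int = 6) -> list[float]:
    """Delay in seconds before each retry attempt: base * 2**n, capped
    so one failing task cannot push its retries arbitrarily far out."""
    return [min(base * 2**n, cap) for n in range(attempts)]

print(backoff_delays())  # [1.0, 2.0, 4.0, 8.0, 16.0, 32.0]
```

Choosing `cap` close to your alerting window keeps retries visible in the `docket_tasks_retried` counter before a task is declared dead.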
