Commit 246a951

[WIP] Docs (#121)
1 parent bbfb4e3 commit 246a951

File tree

12 files changed: +1125 -271 lines changed

.github/workflows/docs.yml

Lines changed: 76 additions & 0 deletions
@@ -0,0 +1,76 @@
```yaml
name: Documentation

on:
  push:
    branches:
      - main
  pull_request:
    branches:
      - main

permissions:
  contents: read
  pages: write
  id-token: write
  pull-requests: write

concurrency:
  group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
  cancel-in-progress: true

jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: Install uv and set Python version
        uses: astral-sh/setup-uv@v5
        with:
          python-version: "3.12"
          enable-cache: true
          cache-dependency-glob: "pyproject.toml"

      - name: Install dependencies
        run: uv sync --group docs

      - name: Configure preview path
        if: github.event_name == 'pull_request'
        id: preview
        run: |
          PR_NUMBER=$(jq --raw-output .pull_request.number "$GITHUB_EVENT_PATH")
          echo "path=pr-preview/pr-${PR_NUMBER}" >> $GITHUB_OUTPUT

      - name: Build preview
        if: github.event_name == 'pull_request'
        run: |
          # Update mkdocs config with the preview URL
          echo "site_url: https://${{ github.repository_owner }}.github.io/${{ github.event.repository.name }}/${{ steps.preview.outputs.path }}" >> mkdocs.yml

          # Build to the PR-specific directory
          uv run mkdocs build -d "site/${{ steps.preview.outputs.path }}"

      - name: Build site
        if: github.event_name != 'pull_request'
        run: uv run mkdocs build

      - name: Upload Pages artifact
        uses: actions/upload-pages-artifact@v3
        with:
          path: site

      - name: Deploy to GitHub Pages
        id: deployment
        uses: actions/deploy-pages@v4

      - name: Add or Update Comment
        if: github.event_name == 'pull_request' && github.event.action != 'closed'
        uses: marocchino/sticky-pull-request-comment@v2
        with:
          header: preview
          message: |
            📚 Documentation preview for this PR is ready!

            You can view it at: ${{ steps.deployment.outputs.page_url }}${{ steps.preview.outputs.path }}/
```

README.md

Lines changed: 8 additions & 259 deletions
@@ -40,6 +40,10 @@

```
Hello, Jane at 2025-03-05 13:58:21.552644!
Howdy, John at 2025-03-05 13:58:24.550773!
```

Added:

Check out our docs for more [details](http://chrisguidry.github.io/docket/), [examples](https://chrisguidry.github.io/docket/getting-started/), and the [API reference](https://chrisguidry.github.io/docket/api-reference/).

## Why `docket`?

⚡️ Snappy one-way background task processing without any bloat
@@ -52,11 +56,10 @@

🧩 Fully type-complete and type-aware for your background task functions

## Installing `docket`

Docket is [available on PyPI](https://pypi.org/project/pydocket/) under the package name `pydocket`. It targets Python 3.12 or above.

With [`uv`](https://docs.astral.sh/uv/):
@@ -75,261 +78,7 @@

```bash
pip install pydocket
```

Docket requires a [Redis](http://redis.io/) server with Streams support (which was introduced in Redis 5.0.0). Docket is tested with Redis 7.

The usage guide below was removed from the README in this commit:
## Creating a `Docket`

Each `Docket` should have a name that will be shared across your system, like the name of a topic or queue. By default this is `"docket"`. You can run many separate dockets on a single Redis server as long as they have different names.

Docket accepts a URL to connect to the Redis server (defaulting to the local server), and you can pass any additional connection configuration you need on that connection URL.

```python
async with Docket(name="orders", url="redis://my-redis:6379/0") as docket:
    ...
```

Together, the `name` and `url` identify a single docket of work shared across your whole system.
## Scheduling work

A `Docket` is the entrypoint for scheduling both immediate and future work. You define work in the form of `async` functions that return `None`. These task functions can accept any parameter types, so long as they can be serialized with [`cloudpickle`](https://github.com/cloudpipe/cloudpickle).

```python
def now() -> datetime:
    return datetime.now(timezone.utc)

async def send_welcome_email(customer_id: int, name: str) -> None:
    ...

async def send_followup_email(customer_id: int, name: str) -> None:
    ...

async with Docket() as docket:
    await docket.add(send_welcome_email)(12345, "Jane Smith")

    tomorrow = now() + timedelta(days=1)
    await docket.add(send_followup_email, when=tomorrow)(12345, "Jane Smith")
```

`docket.add` schedules either immediate work (the default) or future work (with the `when: datetime` parameter).

Every task execution is identified by a `key` that captures the unique essence of that piece of work. By default keys are randomly assigned UUIDs, but assigning your own keys unlocks many powerful capabilities.
```python
async with Docket() as docket:
    await docket.add(send_welcome_email)(12345, "Jane Smith")

    tomorrow = now() + timedelta(days=1)
    key = "welcome-email-for-12345"
    await docket.add(send_followup_email, when=tomorrow, key=key)(12345, "Jane Smith")
```

If you've given your future work a `key`, then only one unique instance of that execution will exist in the future:

```python
key = "welcome-email-for-12345"
await docket.add(send_followup_email, when=tomorrow, key=key)(12345, "Jane Smith")
```

Calling `.add` a second time with the same key does nothing, so your customer won't get two emails!
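The once-only behavior of keys can be pictured with a plain in-memory sketch. `add_once`, `replace`, and the `scheduled` dict here are illustrative stand-ins, not Docket's API or implementation (Docket tracks keys in Redis):

```python
from datetime import datetime, timedelta, timezone

# Illustrative only: an in-memory stand-in for a docket's index of scheduled keys.
scheduled: dict[str, datetime] = {}

def add_once(key: str, when: datetime) -> bool:
    """Schedule `key` for `when` unless it is already scheduled (like `docket.add`)."""
    if key in scheduled:
        return False  # a second .add with the same key is a no-op
    scheduled[key] = when
    return True

def replace(key: str, when: datetime) -> None:
    """Unconditionally reschedule `key` (like `docket.replace`)."""
    scheduled[key] = when

tomorrow = datetime.now(timezone.utc) + timedelta(days=1)
assert add_once("welcome-email-for-12345", tomorrow) is True
assert add_once("welcome-email-for-12345", tomorrow) is False  # deduplicated
```

The same dictionary picture explains `.replace` below: it overwrites the entry for the key regardless of whether one exists.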
However, at any time later you can replace that task execution to alter _when_ it will happen:

```python
key = "welcome-email-for-12345"
next_week = now() + timedelta(days=7)
await docket.replace(send_followup_email, when=next_week, key=key)(12345, "Jane Smith")
```

or _what arguments_ will be passed:

```python
key = "welcome-email-for-12345"
await docket.replace(send_followup_email, when=tomorrow, key=key)(12345, "Jane Q. Smith")
```

Or just cancel it outright:

```python
await docket.cancel("welcome-email-for-12345")
```
Tasks may also be scheduled by name, for cases where you can't or don't want to import the module that defines your tasks. This is common in distributed environments where the task code simply isn't available, or where it requires heavyweight libraries that you wouldn't want to import into your web server. You lose type-checking for `.add` and `.replace` calls, but otherwise everything works as it does with the actual function:

```python
await docket.add("send_followup_email", when=tomorrow)(12345, "Jane Smith")
```

These primitives of `.add`, `.replace`, and `.cancel` are sufficient to build a large-scale and robust system of background tasks for your application.
## Writing tasks

A task is any `async` function that takes `cloudpickle`-able parameters and returns `None`. Returning `None` is a strong signal that these are _fire-and-forget_ tasks whose results aren't used or waited on by your application. These are the only kinds of tasks that Docket supports.

Docket uses a parameter-based dependency and configuration pattern, which has become common in frameworks like [FastAPI](https://fastapi.tiangolo.com/), [Typer](https://typer.tiangolo.com/), and [FastMCP](https://github.com/jlowin/fastmcp). As such, there is no decorator for tasks.

A very common requirement is for tasks to schedule further work on their own docket, especially for chains of self-perpetuating tasks that implement distributed polling and other periodic systems. One of the first dependencies you may look for is `CurrentDocket`:
```python
from docket import Docket, CurrentDocket

POLLING_INTERVAL = timedelta(seconds=10)

async def poll_for_changes(file: Path, docket: Docket = CurrentDocket()) -> None:
    if file.exists():
        ...  # do something interesting
        return
    else:
        await docket.add(poll_for_changes, when=now() + POLLING_INTERVAL)(file)
```
Here the `docket` argument is an instance of `Docket` with the same name and URL as the worker it's running on. You can ask for the `CurrentWorker` and `CurrentExecution` as well. It's often useful to have your own task `key` available in order to idempotently schedule future work:

```python
from docket import Docket, CurrentDocket, TaskKey

async def poll_for_changes(
    file: Path,
    key: str = TaskKey(),
    docket: Docket = CurrentDocket(),
) -> None:
    if file.exists():
        ...  # do something interesting
        return
    else:
        await docket.add(poll_for_changes, when=now() + POLLING_INTERVAL, key=key)(file)
```
This helps ensure that there is one continuous "chain" of these future tasks, since they all use the same key.

Configuring the retry behavior for a task is also done with a dependency:

```python
from datetime import timedelta
from docket import Retry

async def faily(retry: Retry = Retry(attempts=5, delay=timedelta(seconds=3))):
    if retry.attempt == 4:
        print("whew!")
        return

    raise ValueError("whoops!")
```

In this case, the task `faily` will run 4 times, with a delay of 3 seconds between attempts. If it reached 5 attempts, no more would be made. This is a linear retry; an `ExponentialRetry` is also available:
```python
from datetime import timedelta
from docket import Retry, ExponentialRetry

async def faily(
    retry: Retry = ExponentialRetry(
        attempts=5,
        minimum_delay=timedelta(seconds=2),
        maximum_delay=timedelta(seconds=32),
    ),
):
    if retry.attempt == 4:
        print("whew!")
        return

    raise ValueError("whoops!")
```

This would retry after delays of 2, 4, then 8 seconds before the fourth attempt succeeds.
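The doubling schedule can be sanity-checked with a few lines of plain Python. `exponential_delays` is a hypothetical helper written for this sketch, assuming the delay doubles from `minimum_delay` and is capped at `maximum_delay`:

```python
from datetime import timedelta

def exponential_delays(attempts: int, minimum: timedelta, maximum: timedelta) -> list[timedelta]:
    # Delay before each retry (attempts 2..n): double the minimum each time, capped at the maximum.
    return [min(minimum * 2**i, maximum) for i in range(attempts - 1)]

delays = exponential_delays(5, timedelta(seconds=2), timedelta(seconds=32))
print([d.total_seconds() for d in delays])  # → [2.0, 4.0, 8.0, 16.0]
```

With `attempts=5` there are at most four retries, so the 32-second cap is only reached if you allow a sixth attempt.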
## Running workers

You can run as many workers as you like to process the tasks on your docket, either programmatically in Python or via the CLI. Clients using docket have the advantage that they usually pass the task functions themselves, but workers don't necessarily know which tasks they are supposed to run. Docket solves this by letting you explicitly register tasks.

In `my_tasks.py`:
```python
async def my_first_task():
    ...

async def my_second_task():
    ...

my_task_collection = [
    my_first_task,
    my_second_task,
]
```

From Python:

```python
from docket import Docket, Worker

from my_tasks import my_task_collection

async with Docket() as docket:
    for task in my_task_collection:
        docket.register(task)

    async with Worker(docket) as worker:
        await worker.run_forever()
```

From the CLI:

```bash
docket worker --tasks my_tasks:my_task_collection
```
By default, workers will process up to 10 tasks concurrently, but you can adjust this to your needs with the `concurrency=` keyword argument or the `--concurrency` CLI option.

When a worker crashes ungracefully, any tasks it was executing will be held for a period of time before being redelivered to other workers. You can control this period with `redelivery_timeout=` or `--redelivery-timeout`. Set it higher than the longest task you expect to run: for queues of very fast tasks, a few seconds may be ideal; for long data-processing steps involving large amounts of data, you may need minutes.
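Putting the CLI options above together, a worker tuned for longer-running tasks might be started like this. The values, and the assumption that the timeout is given in seconds, are illustrative:

```shell
# Run a worker with 4 concurrent tasks and a 5-minute redelivery timeout
docket worker \
  --tasks my_tasks:my_task_collection \
  --concurrency 4 \
  --redelivery-timeout 300
```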
The Redis note was updated to read: "Docket is tested with Redis 6 and 7."

# Hacking on `docket`
33584

@@ -346,8 +95,8 @@ Then to run the test suite:

```bash
pytest
```

We aim to maintain 100% test coverage, which is required for all PRs to `docket`. We believe that `docket` should stay small, simple, understandable, and reliable, and that begins with testing all the dusty branches and corners. This will give us the confidence to upgrade dependencies quickly and to adapt to new versions of Redis over time.

docs/api-reference.md

Lines changed: 3 additions & 0 deletions
@@ -0,0 +1,3 @@

# API Reference

::: docket
