Commit 755c9b2
Updated README. Removed the psql example, since currently the postq process will crash if there is bad data in the postq.job table -- data validation during insert is an important part of the process until we have error trapping on select.
1 parent 6b15469 commit 755c9b2
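
The point about validating during insert can be illustrated with the Pydantic models the README example already uses: a Pydantic model validates its fields on construction, so building the `models.Job` object in Python rejects malformed job data before it ever reaches the `postq.job` table, whereas a raw SQL insert bypasses the model and the bad row only surfaces when the worker selects it. A minimal sketch, assuming `models.Job` is a standard Pydantic model as the README describes (the `enqueue` helper and its error handling are illustrative, not part of postq):

```python
# Sketch: guard the insert with Pydantic validation, since postq does not
# yet trap errors when it selects a job from the queue.
from databases import Database
from pydantic import ValidationError

from postq import models, tables


async def enqueue(database: Database, raw: dict):
    try:
        # Pydantic validates on construction; malformed input raises here,
        # before anything is written to the postq.job table.
        job = models.Job(**raw)
    except ValidationError as err:
        print(f'rejected bad job data: {err}')
        return None
    return await database.fetch_one(
        tables.Job.insert().returning(*tables.Job.columns), values=job.dict()
    )
```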

1 file changed: 38 additions, 61 deletions

README.md

````diff
@@ -48,70 +48,47 @@ PostQ is a job queue system with
 docker-compose up
 ```
 The default docker-compose.yml cluster definition uses the docker executor (so tasks must define an image) with a maximum queue sleep time of 5 seconds and the default qname=''. Note that the default cluster doesn't expose any ports to the outside world, but you can, for example, shell into the running cluster (using a second terminal) and start pushing tasks into the queue. Or, in the more common case, your PostgreSQL instance is available inside your application cluster, so you can push jobs into postq directly from your application.
-
-Here is an example in Python using [Databases](https://encode.io/databases), [SQL Alchemy Core](https://docs.sqlalchemy.org/en/13/core/), and data models written in [Pydantic](https://pydantic-docs.helpmanual.io/):
-```bash
-$ docker-compose exec postq ipython
-```
-```python
-# (Using the ipython shell, which allows async/await without an explicit event loop.)
-import os
-from databases import Database
-from postq import models, tables
-
-database = Database(os.getenv('DATABASE_URL'))
-await database.connect()
-job = models.Job(
-    tasks={'a': {'image': 'debian:buster-slim', 'command': 'ls -laFh'}}
-)
-record = await database.fetch_one(
-    tables.Job.insert().returning(*tables.Job.columns), values=job.dict()
-)
-
-# Then, after a few seconds...
-
-joblog = models.Job(
-    **await database.fetch_one(
-        tables.JobLog.select().where(
-            tables.JobLog.columns.id == record['id']
-        ).limit(1)
-    )
-)
 
-print(joblog.tasks[0].results)
+<!-- * [TODO] **Can use a message broker as the Job Queue.** Applications that need higher performance and throughput than PostgreSQL can provide must be able to shift up to something more performant. For example, RabbitMQ is a very high-performance message broker written in Erlang.
 
-# total 4.0K
-# drwxr-xr-x 2 root root   64 Sep 11 04:11 ./
-# drwxr-xr-x 1 root root 4.0K Sep 11 04:11 ../
-```
-Now you have a job log entry with the output of your command in the task results. :tada:
+* [TODO] **Can run (persistent) Task workers.** Some Tasks or Task environments (images) are anticipated as being needed continually. In such job environments, the Task workers can be made persistent services that listen to the Job queue for their own Jobs. (In essence, this allows a Task to be a complete sub-workflow being handled by its own Workflow Job queue workers, in which the Tasks are enabled to run inside the Job worker container as subprocesses.) -->
 
-Similar results can be achieved with SQL directly, or with any other interface. Here's the same example run in the `psql` terminal inside the running cluster:
-```bash
-$ docker-compose exec postq bash
-$ psql $DATABASE_URL
-```
-```sql
-postq=# insert into postq.job (qname, status, workflow) values ('', 'queued', '{"tasks": [{"name": "a", "params": {"image": "debian:buster-slim", "command": "ls -laFh"}}]}') returning id;
--[ RECORD 1 ]----------------------------
-id | 17d0a67c-98fb-4f84-913e-2f0532bc069f
-
-INSERT 0 1
-postq=# select * from postq.job_log where id = '17d0a67c-98fb-4f84-913e-2f0532bc069f';
--[ RECORD 1 ]---------------------------------------------------------------
-id          | 17d0a67c-98fb-4f84-913e-2f0532bc069f
-qname       |
-retries     | 0
-queued      | 2020-09-11 04:48:53.897556+00
-scheduled   | 2020-09-11 04:48:53.897556+00
-initialized | 2020-09-11 04:48:54.40734+00
-logged      | 2020-09-11 04:48:54.400779+00
-status      | success
-workflow    | {"tasks": [{"name": "a", "errors": "", "params": {"image": "debian:buster-slim", "command": "ls -laFh"}, "status": "success", "depends": [], "results": "total 4.0K\r\ndrwxr-xr-x 2 root root   64 Sep 11 04:48 ./\r\ndrwxr-xr-x 1 root root 4.0K Sep 11 04:48 ../\r\n"}]}
-data        | {}
-```
+## Usage Examples
+
+Here is an example in Python using the running postq container itself. The Python stack is [Databases](https://encode.io/databases), [SQL Alchemy Core](https://docs.sqlalchemy.org/en/13/core/), and data models written in [Pydantic](https://pydantic-docs.helpmanual.io/):
+
+```bash
+$ docker-compose exec postq ipython
+```
+
+```python
+# (Using the ipython shell, which allows async/await without an explicit event loop.)
+import os
+from databases import Database
+from postq import models, tables
+
+database = Database(os.getenv('DATABASE_URL'))
+await database.connect()
+job = models.Job(
+    tasks={'a': {'params': {'image': 'debian:buster-slim', 'command': 'echo Hey!'}}}
+)
+record = await database.fetch_one(
+    tables.Job.insert().returning(*tables.Job.columns), values=job.dict()
+)
+
+# Then, after a few seconds...
+
+joblog = models.Job(
+    **await database.fetch_one(
+        tables.JobLog.select().where(
+            tables.JobLog.columns.id == record['id']
+        ).limit(1)
+    )
+)
 
-<!-- * [TODO] **Can use a message broker as the Job Queue.** Applications that need higher performance and throughput than PostgreSQL can provide must be able to shift up to something more performant. For example, RabbitMQ is a very high-performance message broker written in Erlang.
+print(joblog.tasks['a'].results)
 
-* [TODO] **Can run (persistent) Task workers.** Some Tasks or Task environments (images) are anticipated as being needed continually. In such job environments, the Task workers can be made persistent services that listen to the Job queue for their own Jobs. (In essence, this allows a Task to be a complete sub-workflow being handled by its own Workflow Job queue workers, in which the Tasks are enabled to run inside the Job worker container as subprocesses.) -->
+# Hey!
+```
+Now you have a job log entry with the output of your command in the task results. :tada:
 
````
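
One detail of the new example worth noting: "Then, after a few seconds..." relies on the job having already finished. When scripting against the queue, a small polling loop is one way to wait for the log row instead of guessing a delay. A sketch under the same Databases/postq imports as the example above (`wait_for_job_log`, its timeout, and its interval are illustrative, not postq API):

```python
import asyncio

from postq import models, tables


async def wait_for_job_log(database, job_id, timeout=30.0, interval=0.5):
    """Poll postq.job_log until the job's log row appears or the timeout elapses."""
    loop = asyncio.get_running_loop()
    deadline = loop.time() + timeout
    while loop.time() < deadline:
        row = await database.fetch_one(
            tables.JobLog.select()
            .where(tables.JobLog.columns.id == job_id)
            .limit(1)
        )
        if row is not None:
            # Same model round-trip as the README example.
            return models.Job(**row)
        await asyncio.sleep(interval)
    raise TimeoutError(f'job {job_id} was not logged within {timeout}s')


# Usage, continuing from the example:
# joblog = await wait_for_job_log(database, record['id'])
```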
