Updated README. Removed psql example, since currently the postq process will crash if there is bad data in the postq.job table -- data validation during insert is an important part of the process until we have error trapping on select.
README.md (+38 −61)
@@ -48,70 +48,47 @@ PostQ is a job queue system with
    docker-compose up
    ```

The default docker-compose.yml cluster definition uses the docker executor (so tasks must define an image), with a maximum queue sleep time of 5 seconds and the default qname=''. Note that the default cluster doesn't expose any ports to the outside world, but you can, for example, shell into the running cluster (using a second terminal) and start pushing tasks into the queue. More commonly, your PostgreSQL instance is available inside your application cluster, so you can push jobs into postq directly from your application.
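The job-push step described above can be sketched as follows. This is a hypothetical illustration, not postq's documented API: the `postq.job` table name and the `qname`/`tasks` columns are assumptions drawn from the surrounding text, the image and command are placeholders, and the trailing comment shows how the Databases library would execute the statement.

```python
import json

# Hypothetical sketch of pushing a job into the queue from application code.
# The table and column names (postq.job, qname, tasks) are assumptions based
# on the text above, not postq's documented schema.
def build_job_insert(qname: str, tasks: dict) -> tuple:
    query = (
        "INSERT INTO postq.job (qname, tasks) "
        "VALUES (:qname, :tasks) RETURNING id"
    )
    values = {"qname": qname, "tasks": json.dumps(tasks)}
    return query, values

query, values = build_job_insert(
    "", {"a": {"image": "debian:buster-slim", "command": "ls -laFh"}}
)
# With the Databases library, the statement would then be run as:
#   job_id = await database.execute(query=query, values=values)
```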
-
-Here is an example in Python using [Databases](https://encode.io/databases), [SQL Alchemy Core](https://docs.sqlalchemy.org/en/13/core/), and data models written in [Pydantic](https://pydantic-docs.helpmanual.io/):
-```bash
-$ docker-compose exec postq ipython
-```
-```python
-# (Using the ipython shell, which allows async/await without an explicit event loop.)
<!-- * [TODO] **Can use a message broker as the Job Queue.** Applications that need higher performance and throughput than PostgreSQL can provide must be able to shift up to something more performant. For example, RabbitMQ is a very high-performance message broker written in Erlang.
-# total 4.0K
-# drwxr-xr-x 2 root root 64 Sep 11 04:11 ./
-# drwxr-xr-x 1 root root 4.0K Sep 11 04:11 ../
-```
-Now you have a job log entry with the output of your command in the task results. :tada:
* [TODO] **Can run (persistent) Task workers.** Some Tasks or Task environments (images) are anticipated as being needed continually. In such job environments, the Task workers can be made persistent services that listen to the Job queue for their own Jobs. (In essence, this allows a Task to be a complete sub-workflow being handled by its own Workflow Job queue workers, in which the Tasks are enabled to run inside the Job worker container as subprocesses.) -->
-Similar results can be achieved with SQL directly, or with any other interface. Here's the same example run in the `psql` terminal inside the running cluster:
-postq=# select * from postq.job_log where id = '17d0a67c-98fb-4f84-913e-2f0532bc069f';
--[ RECORD 1 ]---------------------------------------------------
+Here is an example in Python using the running postq container itself. The Python stack is [Databases](https://encode.io/databases), [SQL Alchemy Core](https://docs.sqlalchemy.org/en/13/core/), and data models written in [Pydantic](https://pydantic-docs.helpmanual.io/):
+
+```bash
+$ docker-compose exec postq ipython
+```
+
+```python
+# (Using the ipython shell, which allows async/await without an explicit event loop.)
+print(joblog.tasks['a'].results)
+# Hey!
+```
+
+Now you have a job log entry with the output of your command in the task results. :tada:
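Related to the commit note above: since a bad row in postq.job currently crashes the worker on select, validating job payloads at insert time is essential. Below is a minimal stdlib sketch of such a guard; the required field names (`qname`, `tasks`) are assumptions, not postq's documented schema.

```python
import json

# Hypothetical required fields for a job payload; the real postq.job schema
# may differ -- adjust to match your table definition.
REQUIRED_FIELDS = {"qname", "tasks"}

def validate_job(payload: str) -> dict:
    """Parse and validate a job payload before INSERTing it into postq.job,
    so the worker never sees malformed data on SELECT."""
    job = json.loads(payload)  # raises ValueError on malformed JSON
    missing = REQUIRED_FIELDS - job.keys()
    if missing:
        raise ValueError(f"job payload missing fields: {sorted(missing)}")
    if not isinstance(job["tasks"], dict):
        raise ValueError("tasks must be a mapping of task name -> definition")
    return job

job = validate_job('{"qname": "", "tasks": {"a": {"command": "ls"}}}')
```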