DM-38718: Arq Queue for Slack #94
Conversation
jonathansick
left a comment
- Be sure to add configuration for arq to the config module.
- Be sure to initialize the redis dependency too for the FastAPI server in src/semaphore/main.py.
- You may want to also run Redis during tests. This can be done with tox-docker; check out https://github.com/lsst-sqre/times-square/blob/2ffb96e6de218d1bb61aaf84f2b6d97c4f19142e/tox.ini#L25-L47
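As a starting point for the first bullet, the config module might gain settings along these lines. This is a hedged sketch using a plain dataclass; field names and defaults are illustrative, and Semaphore's real configuration is a Pydantic settings class that reads values from environment variables:

```python
from dataclasses import dataclass

# Illustrative settings sketch -- names and defaults are assumptions,
# not Semaphore's actual config fields.
@dataclass
class RedisConfiguration:
    # Connection URL used both by the arq queue and the state-storage client
    redis_url: str = "redis://localhost:6379/0"
    # Name of the arq queue the worker listens on
    queue_name: str = "arq:queue"

config = RedisConfiguration()
```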
Another note that this first commit kind of conflates two separate things. One is setting up arq, which is a redis-based queue. Another is setting up a redis client. Both use Redis, but they're actually separate things. We'll use the redis client to persist state.
jonathansick
left a comment
To solve the CI failures, I think you'll need to add the tox-docker plugin. You can do that by modifying the tox step in .github/workflows/ci.yaml to include a tox-plugins line like this:

```yaml
- name: Run tox
  uses: lsst-sqre/run-tox@v1
  with:
    python-version: ${{ matrix.python }}
    tox-envs: 'lint,typing,py,coverage-report'
    tox-plugins: "tox-docker"
```
I made the suggested changes to ci.yaml; however, CI continues to fail, citing the following error, and then times out.
jonathansick
left a comment
Good start on this. So next you'll want to integrate this building block into the feature spec of sending Slack messages when there are new active messages for this environment.
One approach that you might take is to set up a cron task with Arq. The algorithm for that task is something like:
- Compute the active broadcasts (like the web endpoint does)
- Get the pre-existing state of active broadcasts.
- If there are new active broadcasts compared to the state, send those to Slack.
- Store the current active messages for the next cron iteration.
Probably the easiest way to store this state is in Redis with https://safir.lsst.io/user-guide/pydantic-redis.html
For that, you'll need to design a Pydantic class that describes the active broadcast messages. You can base that off the Pydantic models used by the web endpoint, like
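The four steps above amount to a diff against stored state. A minimal sketch follows; the helper names (`broadcast_cron`, `compute_active`, `send_to_slack`) and the in-memory store are illustrative stand-ins, since the real task would use Safir's Redis-backed Pydantic storage and be registered as an arq cron job:

```python
import asyncio

class InMemoryStore:
    """Stand-in for the Redis-backed state store."""
    def __init__(self) -> None:
        self._data: dict[str, set[str]] = {}

    async def get(self, key: str) -> set[str]:
        return self._data.get(key, set())

    async def set(self, key: str, value: set[str]) -> None:
        self._data[key] = value

async def broadcast_cron(ctx: dict) -> list[str]:
    """One cron iteration: compute, diff against state, notify, persist."""
    store = ctx["store"]
    current = ctx["compute_active"]()        # 1. active broadcasts right now
    previous = await store.get("active")     # 2. state from the last run
    new = sorted(current - previous)         # 3. only the newly active ones
    for broadcast_id in new:
        ctx["send_to_slack"](broadcast_id)   #    notify Slack per new item
    await store.set("active", current)       # 4. persist for the next run
    return new

sent: list[str] = []
ctx = {
    "store": InMemoryStore(),
    "compute_active": lambda: {"maintenance", "outage"},
    "send_to_slack": sent.append,
}
first = asyncio.run(broadcast_cron(ctx))   # both broadcasts are new
second = asyncio.run(broadcast_cron(ctx))  # nothing new the second time
```

The key design point is step 4: because the full current set (not just the diff) is stored each time, a broadcast that expires and later reactivates is correctly treated as new again.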
Alright, so getting back into this: I found the documentation for mocking Slack webhooks and tried to implement it, with limited success. I believe I need to use the post_webhook function to log calls to the webhook; however, I'm not certain how I'm supposed to get a request to put into it.
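One common pattern for this kind of test is to substitute a recording fake for the webhook client, so every payload is captured instead of sent. The sketch below is a generic stand-in, not the mock described in the documentation referenced above; in an httpx-based app, the respx library could instead intercept the outgoing POST:

```python
class RecordingWebhook:
    """Fake Slack webhook client that logs payloads instead of sending them."""
    def __init__(self) -> None:
        self.calls: list[dict] = []

    def post_webhook(self, payload: dict) -> int:
        # Record the call so the test can assert on it later,
        # and pretend Slack accepted the message.
        self.calls.append(payload)
        return 200

webhook = RecordingWebhook()
status = webhook.post_webhook({"text": "New broadcast: maintenance"})
```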
Add redis and safir dependencies to the project
Add redis to tests
Co-authored-by: Jonathan Sick <jsick@lsst.org>