Minor improvements #45
Conversation
A compromise between giving enough time to get this potentially useful optional PV and not slowing startup too much if it isn't available.
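For illustration only, that compromise might look like a bounded connection attempt; the PV name, the 2-second timeout, and the use of cothread.catools.caget here are assumptions, not taken from the actual change.

```python
from cothread.catools import caget

OPTIONAL_PV = "EXAMPLE:OPTIONAL:PV"  # hypothetical PV name

# throw=False means a missing PV returns a failed value instead of raising,
# and the short timeout bounds how long startup can be delayed waiting for it.
value = caget(OPTIONAL_PV, timeout=2.0, throw=False)
if value.ok:
    print("Optional PV available:", value)
else:
    print("Optional PV unavailable; continuing startup without it")
```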
Previously, if the pyAT simulation raised an exception, we wouldn't update self._lattice_date but would still run the PV update callbacks on the old data. Now we skip this.
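Roughly, the change amounts to something like the following sketch; the class, method, and attribute names are hypothetical stand-ins rather than the real atip code, and only the skip-on-exception structure reflects the actual change.

```python
import logging
import queue


class SimulationUpdater:
    """Hypothetical sketch, not the real atip class."""

    def __init__(self):
        self._queue = queue.Queue()
        self._lattice_data = None

    def _run_pyat_simulation(self, sample):
        raise NotImplementedError          # stands in for the pyAT recalculation

    def _run_pv_update_callbacks(self):
        pass                               # stands in for updating the PV records

    def update_once(self):
        sample = self._queue.get()         # blocks until a caput has been queued
        try:
            new_data = self._run_pyat_simulation(sample)
        except Exception as exc:
            # Simulation failed: log it and skip the PV updates, leaving the
            # previously calculated lattice data untouched.
            logging.warning("pyAT simulation failed: %s", exc)
            return
        self._lattice_data = new_data
        self._run_pv_update_callbacks()    # only runs when fresh data is available
```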
Codecov Report
All modified and coverable lines are covered by tests ✅

Additional details and impacted files

@@            Coverage Diff             @@
##             main      #45      +/-   ##
==========================================
+ Coverage   93.84%   93.87%   +0.03%
==========================================
  Files           6        6
  Lines         390      392       +2
==========================================
+ Hits          366      368       +2
  Misses         24       24

☔ View full report in Codecov by Sentry.
    )
    except Exception as e:
        warn(at.AtWarning(e), stacklevel=1)
        logging.warning(
This also skips the cothread.Yield(). Is that appropriate?
Also, if you imagine there are no further caputs or similar, doesn't this mean the Signal never gets sent, so any pytac gets or wait_for_calculations calls time out?
It would be helpful to know the motivation for this change, as it feels like this code will all be changed by the threading rework later on anyway.
The motivation is:
- Don't post changes to PVs when the simulation hasn't been updated, because:
  - Clients may think that their caput has caused the PVs to update without changing value, so a feedback, for example, may conclude it needs to increase its delta.
  - It wastes time updating thousands of PVs.
- Improve logging: warn() is only printed once in its current implementation, so if we keep getting this exception, after the first warning the user will assume everything is running fine. They will see PVs updating, but the updates aren't based on their setpoints (see the illustration after this list).
So I think it is quite an important change.
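To illustrate the logging point with a standalone snippet (not the atip code itself): under Python's default warnings filter, repeating warnings.warn from the same place is only shown once, whereas logging.warning is emitted every time.

```python
import logging
import warnings

logging.basicConfig(level=logging.WARNING)

for attempt in range(3):
    # Shown only on the first iteration: the default filter suppresses repeats
    # of the same message/category from the same location.
    warnings.warn("simulation failed", RuntimeWarning)
    # Shown on every iteration, so repeated failures stay visible to the user.
    logging.warning("simulation failed (attempt %d)", attempt)
```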
Skipping cothread.Yield() may cause an issue; it's hard for me to know. We do potentially yield immediately afterwards when we call _gather_one_sample(), which waits for an item to be in _queue. I'm not sure whether cothread will always yield here or only yield if the queue is empty; asyncio would always yield.
Also, it would make sense for wait_for_calculations to time out, as the calculation has failed. We have received a caput, so we mark _up_to_date False; then we fail to apply the caput, so _up_to_date should remain False unless a later caput causes the simulation to succeed, in which case we will yield.
So I think it's okay; this will all change when we switch to asyncio, of course.
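As a rough illustration of those flag semantics (using threading.Event as a stand-in for the actual synchronisation primitive; these are not the real pytac/atip names):

```python
import threading


class CalculationState:
    """Illustrative only: tracks whether the simulation reflects the latest caput."""

    def __init__(self):
        self._up_to_date = threading.Event()
        self._up_to_date.set()                  # nothing pending at startup

    def on_caput(self):
        self._up_to_date.clear()                # new setpoint: data is now stale

    def on_successful_calculation(self):
        self._up_to_date.set()                  # only a successful run marks us fresh

    def wait_for_calculations(self, timeout=10.0):
        # Returns False on timeout, which is what you would expect if every
        # recalculation since the last caput has failed.
        return self._up_to_date.wait(timeout)
```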
T-Nicholls left a comment:
As discussed:
- This is better than the current behaviour, but doesn't fully solve the potential issues with exceptions in this method.
- Using a try/except/else rather than continue would avoid skipping the yield statement (sketched after this comment).
- It is a vanishingly rare edge case where this could actually cause problems during normal operation.
- It will all likely be changed by the asyncio move and future threading changes anyway.
For these reasons, I don't really mind this being merged as is if you want these changes urgently; otherwise, we can wait until after the asyncio move and address this fully.
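For reference, a hedged sketch of that try/except/else shape, which keeps the cothread yield on both the success and failure paths; the function and parameter names here are made up, not the real atip method.

```python
import logging

import cothread


def process_one_sample(gather_one_sample, run_simulation, publish_updates):
    """Illustrative helper, not the real atip code; names are hypothetical."""
    sample = gather_one_sample()           # blocks until a caput is queued
    try:
        new_data = run_simulation(sample)
    except Exception as exc:
        logging.warning("pyAT simulation failed: %s", exc)
    else:
        # Only publish fresh data and notify waiters when the calculation succeeded.
        publish_updates(new_data)
    # The cothread yield happens whether or not the calculation succeeded.
    cothread.Yield()
```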
The main reason I wanted to merge was that the pytest tests were failing due to not closing the CSV files, so I am happy to merge it.