Description
In general it is not sufficient to do checks only when the optics is produced, because:
- The problem could depend on the working point (Q, Q', coupling, combined use of knobs)
- There is a makethin performed by the user (we have seen issues appearing only on the thin model)
- In general, the workflow between loading the optics and the final model that is tracked is quite involved. It is advisable to do the checks downstream.
Ideally I would do the tests at the very end of pymask, but:
- The final orbit rematch breaks the knobs (it should be modified)
- We have only one sequence at that point (difficult to compute separations or luminosities). Maybe we could always set up both beams (two MAD-X instances)? The overhead would be significant (but we could use two cores)
- In the presence of machine imperfections, we might have to open the tolerances quite a bit
If needed, we could have two sets of tests:
- A light one executed at each run
- A heavier one that we run when we make significant modifications (change of optics, changes in the pymask code)
- We could have both sequences set up
As much as possible we should make automatic tests, not relying on visual inspection by the user.
- How do we log the results of the tests? A JSON file? --> we can edit the input dictionary and dump it at the end.
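A minimal sketch of the JSON-logging idea above, assuming the pymask input is a plain Python dict; function and key names here are illustrative, not existing pymask API:

```python
import json

# Sketch: reuse the mask's input dictionary, attach a "checks" section while
# the tests run, and dump everything to JSON at the very end of the mask.
def log_check(config, name, passed, value=None):
    config.setdefault('checks', {})[name] = {'passed': bool(passed), 'value': value}

config = {'optics_file': 'opt_round.madx', 'qx0': 62.31}   # stand-in input dict
log_check(config, 'tune_match', passed=True, value=62.31)
log_check(config, 'sep_in_sigmas_ip1', passed=False, value=0.02)

print(json.dumps(config, indent=2))   # dumped together with the original input
```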
Automatic checks for knob behaviour
- Riccardo should already have something that can be reused?
- Check that knobs are indeed knobs (Guido's approach), done before the makethin
- Do the checks in beam sigmas (easier to set thresholds)
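One way to sketch "knobs are indeed knobs": check that the response of the target observable is linear in the knob setting. The helper below is generic (it takes any callable mapping knob value to an observable, e.g. a crossing angle from twiss); the name and tolerances are assumptions, not an existing implementation:

```python
# Sketch: a knob acting linearly should give twice the response at twice the
# excursion, within a relative tolerance. `observable` is any callable
# knob_value -> float; in practice it would set the knob and run a twiss.
def check_knob_linearity(observable, dk=1e-3, rtol=1e-2):
    ref = observable(0.0)
    r1 = observable(dk) - ref
    r2 = observable(2.0 * dk) - ref
    if abs(r1) < 1e-15:
        raise ValueError("knob has no effect on the observable")
    return abs(r2 - 2.0 * r1) <= rtol * abs(2.0 * r1)

assert check_knob_linearity(lambda k: 3.0 + 5.0 * k)          # linear: passes
assert not check_knob_linearity(lambda k: 3.0 + 5.0 * k ** 2) # quadratic: fails
```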
When doing parametric scans it is good to produce some "color plots" à la Sofia with final values from twiss (with bb on/off)
- Q, Q', coupling (but we should remember that these quantities are matched at the end, so this is not a very robust sanity check)
- Luminosity is a very powerful one, but it cannot be computed at the very end, since at that point we have only one beam.
- Other suggestions?
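Since luminosity is singled out as a powerful check, a sketch of the standard round-beam estimate (with the Piwinski-angle geometric reduction) that such a test could compare against; the parameter values below are nominal LHC design numbers, used purely as an illustration:

```python
import math

def luminosity(n_part, n_bunches, f_rev, sigma, sigma_z, theta_c):
    """Instantaneous luminosity [m^-2 s^-1] for equal round Gaussian beams."""
    piwinski = theta_c * sigma_z / (2.0 * sigma)       # Piwinski angle
    factor = 1.0 / math.sqrt(1.0 + piwinski ** 2)      # geometric reduction
    return n_bunches * f_rev * n_part ** 2 / (4.0 * math.pi * sigma ** 2) * factor

gamma, beta_star, eps_n = 7461.0, 0.55, 3.75e-6        # 7 TeV; m; m rad
sigma = math.sqrt(beta_star * eps_n / gamma)           # rms beam size at the IP
lumi = luminosity(n_part=1.15e11, n_bunches=2808, f_rev=11245.0,
                  sigma=sigma, sigma_z=0.0755, theta_c=285e-6)
print(f"{lumi * 1e-4:.2e} cm^-2 s^-1")                 # ~1e34, the design value
```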
Errors should not pass silently:
- A match failing while running pymask (Q, Q', coupling, closed orbit)
- Use tar (the final penalty function of the MAD-X match)
- A circuit running out of strength (I believe that Frederik's routine clips at the maximum value)
All the above focuses on linear optics:
- Do we need any test for non-linear properties?
- RDTs, Q vs Dp/p; would we know how to interpret/test them?
- Definitely we could compare B2 vs B4
- Footprints? (using pysixtrack/sixtracklib)
The beam-beam of B4 looks reasonable, but it has never been quantitatively validated.
- Could we generate a test optics where the two beams are perfectly symmetric from a bb point of view?
- That would mean: perfect anti-symmetry at all IPs, same beta* in IP2 and IP8, symmetric phase advances between the IPs. Anything else?
- Would it be enough to see the same FMA, RDTs, DA?
- Could we generate b3? Why is this reflect script so complicated?
- We could compare the low-amplitude tune shift against formulas. Maybe also with a single bb interaction (HO or LR) to better understand.
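For the low-amplitude comparison, the head-on case has a closed form: for round beams the beam-beam parameter is xi = N r_p / (4 pi eps_n), independent of energy. A minimal sketch, with nominal LHC numbers used only as an illustration (a real test would assert against the tracked or twissed tune shift instead of printing):

```python
import math

R_P = 1.535e-18                        # classical proton radius [m]

def beam_beam_parameter(n_part, eps_n):
    """Head-on beam-beam parameter for round Gaussian beams."""
    return n_part * R_P / (4.0 * math.pi * eps_n)

xi = beam_beam_parameter(n_part=1.15e11, eps_n=3.75e-6)
print(f"xi = {xi:.4f}")                # ~0.0037 per head-on IP for nominal LHC
```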
In fact, the bb in B1 could also profit from some more systematic checks.
Should we also use tracking in the tests (pysixtrack/sixtracklib)?
- Footprints
- Linear optics from tracking
- BB tune shift from tracking
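A toy sketch of "linear optics / tune from tracking": recover the tune from the FFT peak of turn-by-turn data, as one would do on pysixtrack/sixtracklib output. Here the tracking is replaced by a synthetic signal with a known tune, so the sketch is self-contained:

```python
import numpy as np

def tune_from_turns(x):
    """Fractional tune from the FFT peak of a turn-by-turn signal."""
    spectrum = np.abs(np.fft.rfft(x - np.mean(x)))
    return np.argmax(spectrum) / len(x)

q0, n_turns = 0.31, 4096
turns = np.arange(n_turns)
x = 1e-3 * np.cos(2.0 * np.pi * q0 * turns)   # stand-in for tracked data
q_meas = tune_from_turns(x)
print(f"q = {q_meas:.4f}")                    # close to 0.31 (resolution 1/N)
```

A plain FFT peak is only accurate to 1/N turns; for footprints one would use an interpolated or NAFF-style estimate.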
A few points that should be addressed soon:
- Pysixtrack generation is untested (the logic needs to be adapted for the case with errors)
- The machine-imperfection part could profit from some pythonization (Run 3 has never been simulated with errors)
- The coupling-knob synthesis should be easy to port