
Student Projects and Tasks (archived)

Sabine Kraml edited this page May 25, 2022 · 1 revision
  • MSSM EW-ino scan

    • to what extent can we improve coverage [of mixed scenarios] by combining likelihoods from different analyses?

    • need efficiency maps (EMs) for all the leading EW searches --> recasting

  • IDM scenario: add EMs for the TChiZ topology from EW SUSY (chargino, neutralino, slepton) and Higgs->inv analyses; can we cover the low-mass region with prompt decays?

  • Mono-X : how to include mono-X searches in SModelS?

    • Study efficiencies as a function of spin and production mode
    • Implement conservative case
  • Implement simplified pyhf likelihoods --> Gael and Timothee

    • create simplified pyhf JSON files with the simplify tool
    • compare them against full ones, see ATL-PHYS-PUB-2021-038
    • add switch in SModelS to use either full or simplified JSON likelihoods
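The comparison step can be prototyped without the full pyhf machinery. A hypothetical single-bin counting experiment (all numbers invented for illustration) shows how a "full" Poisson likelihood and a Gaussian simplification of it can be compared as functions of the signal strength mu:

```python
import math

# Toy single-bin counting experiment (numbers invented): observed n,
# background b, signal yield s, signal-strength modifier mu.
N_OBS, SIG, BKG = 12, 5.0, 8.0

def nll_full(mu, n=N_OBS, s=SIG, b=BKG):
    """-log of the Poisson likelihood, up to a mu-independent constant."""
    lam = mu * s + b
    return lam - n * math.log(lam)

def nll_simplified(mu, n=N_OBS, s=SIG, b=BKG):
    """Gaussian approximation of the same likelihood."""
    lam = mu * s + b
    return 0.5 * (n - lam) ** 2 / lam

# Compare Delta(-log L) relative to the best fit mu_hat = (n - b)/s = 0.8;
# the simplified NLL is already zero at its minimum, so the values are
# directly comparable.
for mu in (0.0, 0.8, 1.6):
    print(mu, nll_full(mu) - nll_full(0.8), nll_simplified(mu))
```

For the actual task one would evaluate the full and simplify-produced JSON workspaces with pyhf instead of these toy functions; the point is only that the comparison is between Delta(-log L) curves in mu.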
  • Likelihoods from exp+obs upper limits --> WW and Timothee

    • implement feature
    • use optimized truncation from Leo
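One simple construction of such a likelihood (a sketch only, not necessarily the optimized truncation scheme referred to above; all names are hypothetical): model the likelihood as a Gaussian in mu truncated at mu >= 0, fix sigma from the expected upper limit (which corresponds to muhat = 0) and then solve for muhat from the observed limit:

```python
import math
from statistics import NormalDist

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def trunc_cdf(x, muhat, sigma):
    """CDF at x of a Gaussian N(muhat, sigma) truncated to mu >= 0."""
    z0 = phi(-muhat / sigma)
    return (phi((x - muhat) / sigma) - z0) / (1.0 - z0)

def likelihood_from_limits(ul_obs, ul_exp, cl=0.95):
    """Reconstruct (muhat, sigma) of a truncated Gaussian from CL limits.

    sigma is fixed by the expected limit (the muhat = 0 case); muhat is
    then obtained from the observed limit by bisection.
    """
    # For muhat = 0: phi(ul_exp/sigma) = cl + (1-cl)/2, i.e. ul_exp = 1.96*sigma at 95% CL
    sigma = ul_exp / NormalDist().inv_cdf(cl + (1.0 - cl) / 2.0)
    # trunc_cdf(ul_obs, ., sigma) decreases monotonically with muhat
    lo, hi = -5.0 * sigma, 10.0 * sigma
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if trunc_cdf(ul_obs, mid, sigma) > cl:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi), sigma
```

By construction, ul_obs = ul_exp yields muhat = 0, and an observed limit above (below) the expected one yields a positive (negative) best-fit muhat.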
  • Implement joint likelihoods in SModelS (add feature for analyses correlation matrix)

    • Get likelihoods from individual analyses as a function of signal strength

      • how to build a combined likelihood function to be accessed after theory predictions?

      • introduce likelihood object

    • Numerical maximization of "heterogeneous" joint likelihoods

    • Copula functions may be used to model correlations between analyses

    • Generalize statistical procedure to deal with multiple signal strengths

    • Revise combination criteria: sqrts, experiment, constraints + info whether hadronic or leptonic for topologies with tops; allow combination of prompt and long-lived
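The numerical-maximization step above can be sketched minimally, assuming statistically independent analyses whose negative log-likelihoods are available as Python callables in mu (the two toy NLLs below are invented for illustration, standing in for e.g. a pyhf workspace and an upper-limit reconstruction):

```python
import math

# Hypothetical per-analysis NLLs as functions of the signal strength mu;
# in practice these would come from pyhf, from a simplified-likelihood
# covariance, or from a likelihood reconstructed from upper limits.
analyses = [
    lambda mu: 0.5 * ((mu - 0.8) / 0.4) ** 2,                    # Gaussian-like analysis A
    lambda mu: (2.0 * mu + 3.0) - 7 * math.log(2.0 * mu + 3.0),  # Poisson-like analysis B
]

def joint_nll(mu):
    """Joint NLL: sum of per-analysis NLLs, valid only if the analyses
    are statistically independent (no shared systematics or overlapping
    signal regions)."""
    return sum(nll(mu) for nll in analyses)

def minimize_scalar(f, lo=0.0, hi=5.0, tol=1e-8):
    """Golden-section minimization of a unimodal function on [lo, hi]."""
    invphi = (math.sqrt(5.0) - 1.0) / 2.0
    a, b = lo, hi
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    while b - a > tol:
        if f(c) < f(d):
            b, d = d, c
            c = b - invphi * (b - a)
        else:
            a, c = c, d
            d = a + invphi * (b - a)
    return 0.5 * (a + b)

mu_hat = minimize_scalar(joint_nll)
```

The joint best fit lands between the per-analysis best fits, weighted by how sharply each likelihood is peaked; correlated analyses would need a genuine joint model (e.g. via copulas, as noted above) rather than a plain product.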

  • Aggregation algorithms

    Storing and directly using the inverse of the covariance matrix speeds up the code by a factor 10!

    • Document the aggregation algorithm and its usage on the database wiki

    • Set small correlations to zero:

      1. do we gain in CPU performance?

      2. try to split off SRs with small correlations from the large covariance matrix, to get a smaller covariance matrix times a product of approximately uncorrelated likelihoods

      3. up to what size may one neglect correlations before losing precision?

    • try different types of distance measures (min, max, mean) between sets of SRs

    • think about using Fisher information for aggregation

    • generalize from symmetric to asymmetric uncertainties (variable-Gaussian approach, cf. Lilith)
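The inverse-covariance speed-up noted above can be sketched as follows; the class and its numbers are hypothetical stand-ins, not SModelS code. The point is simply that the matrix is inverted once at construction and every chi2 evaluation reuses the stored inverse instead of solving a linear system each time:

```python
import numpy as np

class SimplifiedLikelihood:
    """Multivariate-Gaussian (simplified) likelihood over signal regions.

    The covariance matrix is inverted once; repeated likelihood
    evaluations (e.g. when scanning or maximizing over mu) reuse the
    stored inverse, which is where the speed-up comes from.
    """

    def __init__(self, observed, expected_bg, covariance):
        self.observed = np.asarray(observed, dtype=float)
        self.expected_bg = np.asarray(expected_bg, dtype=float)
        # invert once, up front
        self.cov_inv = np.linalg.inv(np.asarray(covariance, dtype=float))

    def chi2(self, mu, signal):
        """-2 log L (up to a constant) for signal strength mu."""
        r = self.observed - self.expected_bg - mu * np.asarray(signal, dtype=float)
        return float(r @ self.cov_inv @ r)

# Two toy signal regions with a mild positive correlation:
sl = SimplifiedLikelihood(
    observed=[12.0, 5.0],
    expected_bg=[10.0, 6.0],
    covariance=[[4.0, 1.0], [1.0, 2.25]],
)
```

Setting small off-diagonal entries to zero before the inversion, as proposed above, would additionally make the matrix (block-)sparse, so the remaining blocks can be inverted independently.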
