Refactor & Expand Baryon Model Tests + Add Benchmarks #1265

@nikosarcevic

Description

The current baryonic-effects tests in pyccl/tests/ cover all three explicit baryon models (Schneider15, Baccoemu, vanDaalen19) and partially cover baryon-related nonlinear power-spectrum options (CosmicEmu MTIV, HMCode Mead2020 feedback).
However, the coverage is uneven: some tests are bundled together, and there are no structured performance/consistency benchmarks across models.
@elisachisari and I have discussed this and we propose a full cleanup + expansion of the baryon test suite.

Tasks

Audit existing baryon tests

  • Review:
    • test_baryons.py
    • test_baryons_vd19.py
    • test_baccoemu.py
    • test_cosmicemu.py
    • test_nonlin_camb_power.py (HMCode + AGN feedback)
    • test_cosmology.py (extra_parameters / halofit plumbing)
  • Identify:
    • duplicated logic
    • missing edge cases
    • inconsistent patterns across models
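
As a starting point for the audit, something like the following could enumerate (without running) the tests that look baryon-related; the keyword filter is an assumption and will need tuning to the actual file names:

```python
# Collect, without running, the tests that look baryon-related.
# The -k expression is a rough filter; adjust it to the actual test names.
import pytest

pytest.main(["pyccl/tests", "--collect-only", "-q",
             "-k", "baryon or baccoemu or cosmicemu or camb"])
```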

Restructure the test layout

Introduce a “one model per test file” pattern:

  • test_baryons_base.py (from_name, cosmology plumbing, basic API-call tests)
  • test_baryons_schneider15.py
  • test_baryons_baccoemu.py
  • test_baryons_vandaalen19.py
  • test_baryons_cosmicemu_mtiv.py
  • test_baryons_hmcode_mead2020.py

This improves readability and makes it easier to expand baryon support later when we add more models.
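
As a rough illustration, test_baryons_base.py could host the model-agnostic checks along these lines. This is a minimal pytest sketch; the exact class names, from_name labels, and Pk2D calls are assumptions based on the current pyccl API and would need adapting to the final layout:

```python
# Minimal sketch of test_baryons_base.py; class and model names are assumptions.
import numpy as np
import pytest
import pyccl as ccl

COSMO = ccl.Cosmology(Omega_c=0.25, Omega_b=0.05, h=0.67,
                      n_s=0.96, sigma8=0.81)


def test_from_name():
    # from_name should map a model label back to the corresponding subclass
    # (the "Schneider15" label assumed here may differ in the library).
    assert ccl.Baryons.from_name("Schneider15") is ccl.BaryonsSchneider15


def test_include_baryonic_effects_smoke():
    # Smoke test: applying a baryon model returns a callable Pk2D
    # and finite baryons-on/off ratios on a coarse k grid.
    COSMO.compute_nonlin_power()
    pk_grav = COSMO.get_nonlin_power()
    pk_bar = ccl.BaryonsSchneider15().include_baryonic_effects(COSMO, pk_grav)
    k = np.geomspace(1e-2, 5.0, 32)  # Mpc^-1
    ratio = pk_bar(k, 1.0) / pk_grav(k, 1.0)
    assert np.all(np.isfinite(ratio))
```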

Add a small benchmark suite

Create a benchmarks/baryons/ folder with scripts that:

  • generate P(k) ratios (baryons on/off) at multiple redshifts
  • evaluate models on a standard k–a grid
  • optionally compare results against reference curves from the literature
  • serve as manual validation for FKEM + baryons integrations

This will help detect regressions and inconsistencies.
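
For concreteness, here is a hedged sketch of what one such script (say, a hypothetical benchmarks/baryons/pk_ratio_schneider15.py) might do with the public pyccl API; the cosmological and model parameters below are illustrative only:

```python
# Sketch of a baryon benchmark script; parameters and file names are illustrative.
import numpy as np
import pyccl as ccl

cosmo = ccl.Cosmology(Omega_c=0.25, Omega_b=0.05, h=0.67,
                      n_s=0.96, sigma8=0.81)
cosmo.compute_nonlin_power()
pk_grav = cosmo.get_nonlin_power()          # gravity-only nonlinear P(k)

bar = ccl.BaryonsSchneider15()              # default BCM parameters
pk_bar = bar.include_baryonic_effects(cosmo, pk_grav)

# Standard k-a grid shared by all models
k = np.geomspace(1e-3, 20.0, 256)           # Mpc^-1
a_vals = [1.0, 0.8, 0.5]                    # z = 0, 0.25, 1.0

for a in a_vals:
    ratio = pk_bar(k, a) / pk_grav(k, a)    # baryons on / baryons off
    np.savetxt(f"pk_ratio_schneider15_a{a:.2f}.txt",
               np.column_stack([k, ratio]),
               header="k [Mpc^-1]   P_baryons / P_grav_only")
```

The same loop could then be repeated for the other models (Baccoemu, vanDaalen19, CosmicEmu MTIV, HMCode Mead2020) so the outputs are directly comparable on one grid.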
