
Tests: CSA cassette migration (unit tier) #295

Open
jeandet wants to merge 56 commits into SciQLop:main from jeandet:modernisation/pr6-csa-cassettes

Conversation


@jeandet jeandet commented May 11, 2026

Summary

Sixth PR of the modernisation effort. Third per-provider cassette migration after AMDA and CDA.

Plan: docs/superpowers/plans/2026-05-11-pr6-csa-cassettes.md.

Stacked on PR #294 (CDA cassettes), which in turn stacks on #293 → #292 → #291 → #290. This PR's diff includes all predecessors until they merge in order.

What this PR does

  • Promotes tests/test_csa.py (1 test: full-inventory fetch from ESA Cluster Science Archive) from contract tier to unit tier with cassette-backed replay.
  • Records 1 cassette (~48 MB uncompressed XML inventory → ~2.1 MB gzipped, 22× ratio), uploads to https://sciqlop.lpp.polytechnique.fr/data/speasy_cassettes/, updates the manifest.
  • Adds tests/test_csa_contract.py (2 daily-cron drift probes: inventory count ≥1932, known Cluster product still present).
  • Cleans up redundant manual os.environ manipulation in the existing CSA test — PR 3's autouse _disable_proxy_for_unit_tier fixture already handles this for the unit tier.
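To illustrate the cleanup in the last bullet, here is a minimal sketch of what an autouse fixture like PR 3's `_disable_proxy_for_unit_tier` could look like; the body and the exact set of environment variables are assumptions, not the project's actual code.

```python
# Hypothetical sketch of an autouse fixture in the spirit of PR 3's
# _disable_proxy_for_unit_tier: unit-tier tests must never route traffic
# through a proxy, so proxy env vars are cleared for every test, making
# the manual os.environ manipulation in individual tests redundant.
import pytest

PROXY_VARS = ("HTTP_PROXY", "HTTPS_PROXY", "http_proxy", "https_proxy")

@pytest.fixture(autouse=True)
def _disable_proxy_for_unit_tier(monkeypatch):
    for var in PROXY_VARS:
        # raising=False: no error if the variable was never set
        monkeypatch.delenv(var, raising=False)
```

Because `monkeypatch` restores the environment after each test, no per-test teardown code is needed.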

Net effect

  • Unit tier: +1 (now 604) — CSA inventory fetch runs on every PR, deterministic, no network.
  • Contract tier: -1 +2 (net +1) — the promoted test leaves the tier; 2 small CSA drift probes hit ESA daily.
  • Compressed cassette storage: +2.1 MB.

Test plan

  • CI: unit.yml green — CSA test replays from cassette fetched at session start
  • CI: contract.yml (manually triggered) — 2 CSA probes pass against real upstream
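The shape of the two drift probes can be sketched as below. This is a self-contained stand-in: the real `tests/test_csa_contract.py` would read speasy's live CSA inventory instead of the hypothetical `fetch_csa_inventory` helper, and the product name chosen here is illustrative; only the ≥1932 threshold comes from this PR's description.

```python
# Sketch of the two daily-cron drift probes' structure. fetch_csa_inventory
# is a stand-in for the live ESA CSA inventory fetch (hypothetical helper).
def fetch_csa_inventory():
    # Fake inventory: one known product plus filler entries.
    inventory = {f"P{i}": object() for i in range(1999)}
    inventory["C1_CP_FGM_SPIN"] = object()  # illustrative Cluster product
    return inventory

def test_csa_inventory_count_has_not_shrunk():
    # 1932 products were present at recording time; alert on shrinkage.
    assert len(fetch_csa_inventory()) >= 1932

def test_known_cluster_product_still_present():
    assert "C1_CP_FGM_SPIN" in fetch_csa_inventory()
```

The value of probes like these is that they fail loudly on upstream drift (a shrunken inventory or a withdrawn product) instead of letting the cassette-backed unit tier drift silently out of sync with reality.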

jeandet and others added 30 commits May 8, 2026 15:55
Captures decisions on UV adoption, hatchling build backend, ruff/basedpyright
tooling, and three-tier test strategy (unit/contract/e2e). Sequences the work
as 17 small PRs ending with a mass reformat.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Per-task implementation plan for the first PR of the modernisation effort.
Covers pyproject.toml updates, uv.lock generation, CI/RTD switch to uv,
deletion of requirements*.txt / tox.ini / setup.cfg, and developer-doc
updates to drop the PYTHONPATH=. pattern.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
- Add SPEASY_CORE_HTTP_REWRITE_RULES env to PRs.yml non-3.10 pytest step
  (previously only on push/scheduled tests.yml — would have hit a
  non-existent server on PR builds for non-3.10 matrix entries).
- Add --with wheel to PRs.yml build step for parity with tests.yml.
- Scope flake8 to 'speasy tests' in both workflows (matches Makefile
  lint target). Avoids silently broadening lint to docs/conf.py and
  removes the .venv exclusion workaround that was needed when
  flake8 ran from repo root.
Without UV_PROJECT_ENVIRONMENT, uv creates .venv/ inside the project
and RTD's sphinx step (which calls $READTHEDOCS_VIRTUALENV_PATH/bin/python
directly) fails with 'python: not found'. Point uv at RTD's venv so the
install lands where the runner looks for it.
Classified via devtools/apply_test_markers.py:
- 12 files marked unit (pure-logic, no network)
- 19 files marked contract (real-server, will be migrated to cassettes in PRs 4-9)

Reclassifications during manual review:
- test_cache.py: contract -> unit (pure cache-logic, no network or speasy provider use)
- test_file_access.py: unit -> contract (uses HTTP via any_loc_open against live servers)

test_wasm.py was manually adjusted to place pytestmark at module level (the
file's body lives inside a try/except ImportError block, so the script's
naive insertion landed at wrong indentation).
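The manual adjustment described above comes down to keeping `pytestmark` at module scope even when the module body is guarded by an import check. A hedged sketch (the optional dependency name is hypothetical):

```python
# Sketch of the test_wasm.py shape: pytestmark must sit at module level
# for pytest to see it; placing it inside the try/except (where a naive
# insertion script lands it) gives it wrong scope and indentation.
import pytest

pytestmark = pytest.mark.contract  # correct: module level

try:
    import some_wasm_runtime  # hypothetical optional dependency
except ImportError:
    some_wasm_runtime = None

@pytest.mark.skipif(some_wasm_runtime is None,
                    reason="wasm runtime not installed")
def test_smoke():
    assert some_wasm_runtime is not None
```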
…le path and sample across the inventory

The flat_inventories.generic_archive lookup uses module attribute access on
an instance, not a submodule import. Also, the first N parameters in the
flat inventory are clustered by mission, so a fixed time range can miss all
of them; sample across the full list instead.
- test_e2e_smoke.test_generic_archive: fail loudly if every candidate
  raises (was silently skipping, defeating the e2e tier's purpose).
- pyproject.toml: drop dead --ignore=setup.py from addopts and document
  the -m unit override semantics so future contributors don't trip on
  'pytest tests/test_amda.py' silently collecting nothing.
- contract.yml / e2e.yml: add concurrency groups so a manual run can't
  overlap with a cron run hammering the same upstream servers.
- CONTRIBUTING.rst: add a short note explaining the three test tiers
  and how to invoke each from local dev.
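For context on the `-m unit` override semantics being documented, here is a hypothetical shape of the relevant pyproject.toml fragment; the exact option values are assumptions based on the behaviour described above.

```toml
# Hypothetical pyproject.toml fragment illustrating the addopts default.
[tool.pytest.ini_options]
# Default to the unit tier so a bare `pytest` run is fast and offline.
# NOTE: `pytest tests/test_amda.py` inherits this default and collects
# nothing if that file is contract-marked; override with `pytest -m '' ...`
# to run every tier, or `-m contract` / `-m e2e` for a specific one.
addopts = "-m unit"
markers = [
    "unit: pure-logic or cassette-backed, no live network",
    "contract: hits real upstream servers (cron / manual)",
    "e2e: full end-to-end smoke checks",
]
```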
CI failure (blocking):
- unit.yml: 'make doctest' was using system Python (no sphinx in scope).
  Prefix with 'uv run' so make uses the project venv.

Reviewer findings:
- wasm_tests.yml: pytest tests/test_wasm.py without -m collected 0 tests
  under the new addopts default (test_wasm.py is contract-marked). Add
  -m '' to override.
- CLAUDE.md: examples like 'uv run pytest tests/test_amda.py' silently
  collected 0 tests under -m unit default. Replaced with tier-aware
  examples and added -m '' for the all-tests case.
- unit.yml: only sync --group docs on the coverage runner that needs it,
  not on every matrix entry.
nbsphinx requires the system pandoc binary (not the Python pandoc
wrapper that's in the docs dependency group). PR 1's tests.yml had
'sudo apt install -y texlive pandoc' before make doctest; my unit.yml
rewrite in PR 2 dropped that line, so the doctest job failed with
'nbsphinx.NotebookError: PandocMissing in examples/AMDA.ipynb'.
Restored as a separate apt step on the coverage runner.
The doctest step's examples reference all data providers (cdpp3dview
included) and live inventories. The job-level
SPEASY_CORE_DISABLED_PROVIDERS='cdpp3dview' makes the inventory tree's
cdpp3dview attribute missing during doctest, surfacing as
'types.SimpleNamespace object has no attribute cdpp3dview' and a chain
of NameErrors for variables defined in earlier doctest blocks.

Original tests.yml overrode SPEASY_CORE_DISABLED_PROVIDERS="" on the
combined pytest+doctest step, plus set HTTP_REWRITE_RULES (re-routes
the placeholder URL used in some examples to LPP's mirror) and
USER_AGENT. My PR 2 rewrite dropped the env block; restoring it on
the doctest step.
Pandas now prints its public name in type() repr ('pandas.DataFrame')
rather than the internal module path ('pandas.core.frame.DataFrame').
The user/numpy.rst doctest was written against the old form.
Surfaced now that uv.lock pins a recent pandas; pip-installed envs
were getting older pandas where the old form still applied.
Two infrastructure fixes that originally surfaced during PR 5 (CDA)
and PR 9 (CDPP3DView) cassette migrations, backported here where they
belong:

1. speasy/core/http.py Response.url: try/except AttributeError fallback
   for vcrpy's VCRHTTPResponse which delegates geturl() to http.client
   that reads self.url — an attribute the cassette response doesn't
   carry. Without this, every cassette-replayed response that triggers
   a debug-log of resp.url crashes.

2. tests/conftest.py _canonical_rewrite_rule_for_vcr autouse fixture:
   Pin speasy.core.url_utils._REWRITE_RULES_ (cached at module import
   time) to the recording-time policy for any vcr-marked test. Without
   this, a developer with a custom http_rewrite_rules entry in
   ~/.config/speasy/config.ini sees replay failures because the
   replay-side URL no longer matches the cassette.
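Fix 1 can be sketched as follows; the real `speasy.core.http.Response` wraps a different client object, so the constructor and attribute names here are illustrative, not the actual implementation.

```python
# Hedged sketch of the Response.url fallback described in (1).
class Response:
    def __init__(self, raw, requested_url):
        self._raw = raw                      # underlying HTTP response
        self._requested_url = requested_url  # URL we asked for

    @property
    def url(self):
        try:
            # Normal path: the live response knows its final URL.
            return self._raw.geturl()
        except AttributeError:
            # vcrpy's VCRHTTPResponse delegates geturl() to http.client,
            # which reads self.url -- an attribute the cassette-replayed
            # response doesn't carry. Fall back to the requested URL so
            # debug-logging resp.url can't crash a replay.
            return self._requested_url
```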
@jeandet jeandet force-pushed the modernisation/pr6-csa-cassettes branch from 3fde685 to cd5f975 on May 11, 2026 17:24
jeandet added 2 commits May 11, 2026 20:18
vcrpy's filter_headers and filter_query_parameters only scrub the
REQUEST side. Response headers like Set-Cookie (JSESSIONIDs from CSA,
session tokens from AMDA) and certain response bodies (AMDA's
auth.php returns a 32-char hex hash that may be derivable from
credentials) were being committed verbatim into cassettes.

Add before_record_response callback in vcr_config:
- Drops Set-Cookie response headers
- Replaces any 32-char hex body (matching AMDA auth.php response
  shape) with <SCRUBBED>

This guards future recordings. Existing cassettes are scrubbed in a
follow-up one-shot script (see PR description).
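A minimal sketch of such a callback is below. It assumes vcrpy's recorded-response dict shape (`headers` mapping, `body["string"]` payload); the exact shapes committed to this repo's cassettes may differ slightly.

```python
# Hedged sketch of a before_record_response callback: vcrpy passes the
# recorded response as a dict and stores whatever the callback returns.
import re

HEX32 = re.compile(rb"^[0-9a-f]{32}$")  # AMDA auth.php token shape

def scrub_response(response):
    # Drop session cookies (CSA JSESSIONIDs, AMDA session tokens).
    for key in ("Set-Cookie", "set-cookie"):
        response.get("headers", {}).pop(key, None)
    # Replace 32-char hex bodies wholesale, since the hash may be
    # derivable from credentials.
    body = response.get("body", {}).get("string") or b""
    if isinstance(body, str):
        body = body.encode()
    if HEX32.match(body.strip()):
        response["body"]["string"] = b"<SCRUBBED>"
    return response
```

With pytest-recording, this would be wired up by returning `{"before_record_response": scrub_response}` from the `vcr_config` fixture.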
Sister script to the new before_record_response callback in
conftest. Scrubs the same patterns (Set-Cookie response headers,
32-char hex auth.php response bodies) from cassettes that were
recorded before the callback existed. Idempotent — safe to re-run.

Used once to retroactively clean the existing AMDA + CSA cassettes
on the modernisation/pr3-mocking-infra branch. Future recordings
are automatically scrubbed at record time by the conftest callback.
@jeandet jeandet force-pushed the modernisation/pr6-csa-cassettes branch from 4887e88 to 6b3f45c on May 11, 2026 18:25
jeandet added 7 commits May 11, 2026 20:39
The cassette hosting at sciqlop.lpp.polytechnique.fr/data/speasy_cassettes/
is now public-read. Cassettes are content-addressed by sha256 — URLs
are unguessable for outsiders and any tampering is caught on download
via the existing hash verification in _fetch_cassette.

Practical benefits:
- Fork PRs can run the cassette-replaying unit tier (previously
  blocked: GitHub Actions doesn't expose repo secrets to fork PRs).
- New contributors need no credential setup to run the tests.
- CI workflows lose the SPEASY_CASSETTE_FETCH_USER/PASSWORD env
  injection (no longer needed).

Cassettes are still scrubbed (Set-Cookie response headers and AMDA
auth.php hash response bodies) by the before_record_response callback
in vcr_config, so no session/credential material reaches the cassette
content itself.
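The download-side integrity check can be sketched as below; the real `_fetch_cassette` lives in the test conftest, and the hash-named `.yaml.gz` URL scheme is an assumption inferred from the manifest description.

```python
# Hedged sketch of content-addressed cassette fetching: the URL is the
# sha256 of the uncompressed YAML, so any tampering (or a corrupted
# download) fails verification before the cassette is used.
import gzip
import hashlib
import urllib.request

BASE = "https://sciqlop.lpp.polytechnique.fr/data/speasy_cassettes/"

def verify_cassette(yaml_bytes: bytes, expected_sha256: str) -> bytes:
    if hashlib.sha256(yaml_bytes).hexdigest() != expected_sha256:
        raise ValueError("cassette failed integrity check")
    return yaml_bytes

def fetch_cassette(sha256_hex: str) -> bytes:
    # Assumed naming scheme: <sha256>.yaml.gz under the public base URL.
    with urllib.request.urlopen(f"{BASE}{sha256_hex}.yaml.gz") as resp:
        return verify_cassette(gzip.decompress(resp.read()), sha256_hex)
```

This is why public-read hosting is safe here: the hash both addresses the content and authenticates it, so secrets-free fork PRs can fetch cassettes without trusting the transport.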
Generated by devtools/publish_cassettes.py from cassettes recorded
against live amda.irap.omp.eu. Each entry maps a relative path under
tests/cassettes/ to the sha256 of the cassette's uncompressed YAML
content. The .yaml.gz files for these hashes need to be uploaded by
a maintainer to:

    https://sciqlop.lpp.polytechnique.fr/data/speasy_cassettes/

via rsync from .publish_staging/. After upload, the conftest fetch
hook will download cassettes on demand for unit-tier test runs that
hit AMDA code paths.

Total: 90.6 MB uncompressed, 10.9 MB compressed across 22 cassettes.
The original recording captured a HEAD if-modified-since request to
spdf.gsfc.nasa.gov/pub/catalogs/all.xml because Speasy's diskcache
was already warm at recording time. On replay with a clean cache,
Speasy issues a full GET which has no matching cassette entry,
failing with CannotOverwriteExistingCassetteException.

Re-recorded with SPEASY_CACHE_PATH set to a fresh tempdir (the
pattern PR 5 established) so the cassette captures the GET path.
The 50 MB cassette content is the full AMDA observatory tree.
The scrub_existing_cassettes.py one-off removed Set-Cookie response
headers and replaced AMDA auth.php response bodies (32-char hex
session tokens) with <SCRUBBED>. Content-addressed cassette hashes
changed for the affected files; manifest updated to match.
@jeandet jeandet force-pushed the modernisation/pr6-csa-cassettes branch 2 times, most recently from 2158d2c to 1fa7be3 on May 11, 2026 18:56
jeandet added 9 commits May 12, 2026 10:21
The earlier re-record only cleared SPEASY_CACHE_PATH but kept the
populated SPEASY_INDEX_PATH (Speasy's index path is separate from the
cache). With a populated index, Speasy's catalog-loader sees the
inventory as 'already known' and issues HEAD if-modified-since
revalidations instead of the full GETs that a fresh-state install
does. Result: cassette captured HEAD only; CI (fresh state) issues
GET and fails to match.

Re-recorded with both SPEASY_CACHE_PATH and SPEASY_INDEX_PATH set to
fresh tempdirs so the catalog GETs are captured.
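The fresh-state recording setup described above can be sketched like this; the env-var names come from the commit message, while the wrapper function and the `--record-mode` flag (pytest-recording's cassette-writing option) are illustrative assumptions.

```python
# Sketch of fresh-state cassette recording: both the cache AND the
# inventory index must point at empty tempdirs, or Speasy revalidates
# with HEAD if-modified-since instead of issuing the full GETs that a
# fresh-state CI run will need to replay.
import os
import subprocess
import tempfile

def record_with_fresh_state(pytest_args):
    env = dict(os.environ)
    env["SPEASY_CACHE_PATH"] = tempfile.mkdtemp(prefix="speasy-cache-")
    env["SPEASY_INDEX_PATH"] = tempfile.mkdtemp(prefix="speasy-index-")
    return subprocess.run(
        ["pytest", "--record-mode=rewrite", *pytest_args], env=env
    )
```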
- Skip 4 tests whose cassettes blow the 150 MB unit-tier budget (full
  inventory fetch, FEEPS electron intensity, MMS FGM virtual-parameter
  fallback, EQ_PP_MAM via inventory) - all keep equivalent live coverage in
  tests/test_cdaweb_contract.py.
- Drop the MMS2_SCM_SRVY_L2_SCSRVY case from both ddt-driven tests
  (MMS2 SCM survey returns ~76 MB per request because the API serves the
  day-aligned CDF chunk regardless of the requested time window).
- Drop the MMS1_SCM_BRST_L2_SCB case from both ddt-driven tests (10 min
  of burst SCM is a ~380 MB CDF).
- Shrink large API/FILE windows where it preserves the test's intent
  (PSP ISOIS, MMS FGM sanitised).
- Skip ConcurrentRequests.test_get_variable: thread-pool requests are not
  reliably intercepted by VCR, breaking replay.
- Skip test_get_cluster_fgm_data: upstream-data assumption no longer
  holds (CDA now returns Cluster C1 FGM data for the 2018-03/2016-03
  windows the test asserted None on).
Four daily-cron probes covering the live CDA paths whose cassettes
were dropped from the unit tier for size reasons (full inventory
fetch and the FEEPS electron-intensity request) plus two cheap
smoke checks (short THA fetch, inventory dataset presence).
Mocks speasy.data_providers.cda.http.get to return a 500 so the unit
tier verifies that a CDA server error surfaces as CdaWebException
rather than being silently swallowed.
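The shape of that error-path test can be sketched as follows. `CdaWebException` and the patch target (`speasy.data_providers.cda.http.get`) come from the commit message; the provider stand-in and the use of dependency injection instead of `mock.patch` are illustrative simplifications.

```python
# Hedged sketch: feed the provider code path a mocked http.get returning
# a 500 and assert the error surfaces as an exception rather than being
# silently swallowed.
from unittest import mock

class CdaWebException(Exception):
    pass  # stand-in for speasy's real exception class

def fetch_variable(http_get):
    # Stand-in for the CDA code path under test; the real test patches
    # speasy.data_providers.cda.http.get rather than injecting it.
    resp = http_get("https://example.invalid/cda")
    if resp.status_code != 200:
        raise CdaWebException(f"CDA server returned {resp.status_code}")
    return resp

def test_server_error_surfaces():
    fake_get = mock.Mock(return_value=mock.Mock(status_code=500))
    try:
        fetch_variable(fake_get)
        raise AssertionError("expected CdaWebException")
    except CdaWebException:
        pass  # error surfaced, as the unit tier requires
```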
@jeandet jeandet force-pushed the modernisation/pr6-csa-cassettes branch from 1fa7be3 to 6516111 on May 12, 2026 10:10
@sonarqubecloud

Quality Gate failed

Failed conditions
8 Security Hotspots
Security Rating on New Code: E (required ≥ A)

See analysis details on SonarQube Cloud

