
Tests: CDA cassette migration (unit tier)#294

Open
jeandet wants to merge 53 commits into SciQLop:main from jeandet:modernisation/pr5-cda-cassettes

Conversation

@jeandet
Member

@jeandet jeandet commented May 11, 2026

Summary

Fifth PR of the modernisation effort (spec: docs/superpowers/specs/2026-05-08-speasy-modernisation-design.md, plan: docs/superpowers/plans/2026-05-10-pr5-cda-cassettes.md). Second per-provider cassette migration after AMDA.

Stacked on PR #293 (AMDA cassettes), which in turn stacks on #292, #291, and #290. Until each predecessor merges, this PR's diff includes them all.

What this PR does

  • Promotes tests/test_cdaweb.py (24 tests) from contract tier to unit tier with cassette-backed replay.
  • Records 18 cassettes against cdaweb.gsfc.nasa.gov, uploads them to https://sciqlop.lpp.polytechnique.fr/data/speasy_cassettes/, updates tests/cassettes_manifest.json.
  • Adds tests/test_cdaweb_contract.py (4 daily-cron drift probes).
  • Adds tests/test_cdaweb_failures.py (1 failure-path unit test for CdaWebException propagation).
  • Adds SPEASY_CORE_HTTP_REWRITE_RULES to unit.yml env so cassette URLs match the placeholder-rewrite behaviour used by CI.

Production fix included

speasy/core/http.py:Response.url previously crashed under vcrpy replay because VCRHTTPResponse does not set a url attribute. One-line try/except AttributeError fallback preserves original behaviour in production and prevents the AttributeError in cassette-replay paths. Without this, 15 of 24 CDA tests fail under replay.
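The shape of that one-line fix can be sketched as follows. The stand-in classes and the `requested_url` fallback attribute are illustrative assumptions, not the exact speasy/core/http.py internals:

```python
class LiveResponse:
    """Stand-in for a real http.client response in production."""
    def geturl(self):
        return "https://cdaweb.gsfc.nasa.gov/WS/cdasr/1"

class ReplayResponse:
    """Stand-in for vcrpy's VCRHTTPResponse: its geturl() delegates to
    http.client code that reads self.url, an attribute replayed
    responses never set, so the call raises AttributeError."""
    def geturl(self):
        return self.url  # AttributeError under cassette replay

class Response:
    def __init__(self, resp, requested_url):
        self._resp = resp
        self._requested_url = requested_url

    @property
    def url(self):
        try:
            return self._resp.geturl()
        except AttributeError:
            # Cassette-replay path: fall back to the URL we requested,
            # preserving the original behaviour in production.
            return self._requested_url

print(Response(LiveResponse(), "ignored").url)
print(Response(ReplayResponse(), "https://cdaweb.gsfc.nasa.gov/replay").url)
```

The try/except keeps the production path untouched while giving replayed responses a usable URL for debug logging.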

Tests dropped from unit tier (kept on contract)

To stay within the per-PR cassette-size budget, six heavyweight or non-cassette-friendly tests are skip-marked or trimmed. All have equivalent or surrogate coverage on the daily contract tier:

  • test_can_get_full_inventory_without_proxy — reason: ~800 MB inventory fetch; replacement: the test_full_inventory_fetch_finds_at_least_47000_parameters probe.
  • test_get_products_with_percent_in_name — reason: same (re-fetches the inventory); replacement: same probe.
  • test_wrong_time_dependency_axis — reason: ~360 MB FEEPS CDF; replacement: the test_feeps_electron_intensity_returns_data probe.
  • test_get_virtual_parameter_always_falls_back_to_api — reason: ~70 MB MMS FGM CDF; replacement: n/a (rare path).
  • test_get_cluster_fgm_data — reason: pre-existing upstream regression (CDA now returns data where the test expected None); it had been silently failing CI since PR #290 and is now properly skip-marked; replacement: will re-evaluate with upstream.
  • ConcurrentRequests.test_get_variable — reason: threading and vcrpy don't mix cleanly; replacement: n/a (would need restructuring).

Plus two ddt-driven cases pruned (MMS SCM products returning 76 MB / 380 MB chunks) and two time windows shrunk (semantics preserved).

Net effect

  • Unit tier: +43 (now 603 total) — most CDA tests now run on every PR, no network.
  • Contract tier: −42/+4 — only 4 small CDA drift probes hit upstream daily.
  • Compressed cassette storage: +42 MB (CDA blobs are mostly binary CDF, gzip ~2× vs AMDA's 8×).

Test plan

  • CI: unit.yml green — CDA tests replay from cassettes fetched at session start
  • CI: contract.yml (manually triggered) — 4 CDA probes pass against real upstream
  • Verify SPEASY_CORE_HTTP_REWRITE_RULES is set in CI job env (added in this PR)

jeandet and others added 30 commits May 8, 2026 15:55
Captures decisions on UV adoption, hatchling build backend, ruff/basedpyright
tooling, and three-tier test strategy (unit/contract/e2e). Sequences the work
as 17 small PRs ending with a mass reformat.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Per-task implementation plan for the first PR of the modernisation effort.
Covers pyproject.toml updates, uv.lock generation, CI/RTD switch to uv,
deletion of requirements*.txt / tox.ini / setup.cfg, and developer-doc
updates to drop the PYTHONPATH=. pattern.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
- Add SPEASY_CORE_HTTP_REWRITE_RULES env to PRs.yml non-3.10 pytest step
  (previously only on push/scheduled tests.yml — would have hit a
  non-existent server on PR builds for non-3.10 matrix entries).
- Add --with wheel to PRs.yml build step for parity with tests.yml.
- Scope flake8 to 'speasy tests' in both workflows (matches Makefile
  lint target). Avoids silently broadening lint to docs/conf.py and
  removes the .venv exclusion workaround that was needed when
  flake8 ran from repo root.
Without UV_PROJECT_ENVIRONMENT, uv creates .venv/ inside the project
and RTD's sphinx step (which calls $READTHEDOCS_VIRTUALENV_PATH/bin/python
directly) fails with 'python: not found'. Point uv at RTD's venv so the
install lands where the runner looks for it.
Classified via devtools/apply_test_markers.py:
- 12 files marked unit (pure-logic, no network)
- 19 files marked contract (real-server, will be migrated to cassettes in PRs 4-9)

Reclassifications during manual review:
- test_cache.py: contract -> unit (pure cache-logic, no network or speasy provider use)
- test_file_access.py: unit -> contract (uses HTTP via any_loc_open against live servers)

test_wasm.py was manually adjusted to place pytestmark at module level (the
file's body lives inside a try/except ImportError block, so the script's
naive insertion landed at wrong indentation).
…le path and sample across the inventory

The flat_inventories.generic_archive lookup uses module attribute access on
an instance, not a submodule import. Also, the first N parameters in the
flat inventory are clustered by mission, so a fixed time range can miss all
of them; sample across the full list instead.
- test_e2e_smoke.test_generic_archive: fail loudly if every candidate
  raises (was silently skipping, defeating the e2e tier's purpose).
- pyproject.toml: drop dead --ignore=setup.py from addopts and document
  the -m unit override semantics so future contributors don't trip on
  'pytest tests/test_amda.py' silently collecting nothing.
- contract.yml / e2e.yml: add concurrency groups so a manual run can't
  overlap with a cron run hammering the same upstream servers.
- CONTRIBUTING.rst: add a short note explaining the three test tiers
  and how to invoke each from local dev.
CI failure (blocking):
- unit.yml: 'make doctest' was using system Python (no sphinx in scope).
  Prefix with 'uv run' so make uses the project venv.

Reviewer findings:
- wasm_tests.yml: pytest tests/test_wasm.py without -m collected 0 tests
  under the new addopts default (test_wasm.py is contract-marked). Add
  -m '' to override.
- CLAUDE.md: examples like 'uv run pytest tests/test_amda.py' silently
  collected 0 tests under -m unit default. Replaced with tier-aware
  examples and added -m '' for the all-tests case.
- unit.yml: only sync --group docs on the coverage runner that needs it,
  not on every matrix entry.
nbsphinx requires the system pandoc binary (not the Python pandoc
wrapper that's in the docs dependency group). PR 1's tests.yml had
'sudo apt install -y texlive pandoc' before make doctest; my unit.yml
rewrite in PR 2 dropped that line, so the doctest job failed with
'nbsphinx.NotebookError: PandocMissing in examples/AMDA.ipynb'.
Restored as a separate apt step on the coverage runner.
The doctest step's examples reference all data providers (cdpp3dview
included) and live inventories. The job-level
SPEASY_CORE_DISABLED_PROVIDERS='cdpp3dview' makes the inventory tree's
cdpp3dview attribute missing during doctest, surfacing as
'types.SimpleNamespace object has no attribute cdpp3dview' and a chain
of NameErrors for variables defined in earlier doctest blocks.

Original tests.yml overrode SPEASY_CORE_DISABLED_PROVIDERS="" on the
combined pytest+doctest step, plus set HTTP_REWRITE_RULES (re-routes
the placeholder URL used in some examples to LPP's mirror) and
USER_AGENT. My PR 2 rewrite dropped the env block; restoring it on
the doctest step.
Pandas now prints its public name in type() repr ('pandas.DataFrame')
rather than the internal module path ('pandas.core.frame.DataFrame').
The user/numpy.rst doctest was written against the old form.
Surfaced now that uv.lock pins a recent pandas; pip-installed envs
were getting older pandas where the old form still applied.
jeandet added 2 commits May 10, 2026 16:15
Two PR 3 bugs found while implementing PR 4:

1. tests/conftest.py: vcr_config dict set record_mode="none",
   which unconditionally overrode the --record-mode CLI flag (per
   pytest-recording/_vcr.py:82-83) and made re-recording impossible.
   Drop the key — pytest-recording's session fixture already defaults
   record_mode to "none", and --record-mode now works as expected.

2. devtools/apply_test_markers.py: the idempotency guard only
   recognized single-form pytestmark ("pytestmark = pytest.mark.X"),
   not list-form ("pytestmark = [pytest.mark.X, pytest.mark.Y]").
   Re-running the script over a list-form file would re-add a
   single-form marker. Recognize both.
Cassettes will not live in the git repo. Instead they are hosted at
https://sciqlop.lpp.polytechnique.fr/data/speasy_cassettes/ behind
HTTP Basic auth, identified by sha256 of their uncompressed content.

Components:

- tests/cassettes_manifest.json: maps each cassette path under
  tests/cassettes/ to its sha256. Empty in PR 3; populated by PR 4
  onwards when AMDA/CDA/etc. cassettes are recorded.

- tests/conftest.py: pytest_configure hook reads the manifest at
  session start, downloads any missing cassettes from the storage
  server (auth via SPEASY_CASSETTE_FETCH_USER/PASSWORD env vars or
  ~/.netrc), verifies the sha, decompresses to tests/cassettes/.
  Uses an XDG_CACHE_HOME-rooted local cache to avoid re-downloading
  across runs. New --no-cassette-fetch CLI flag opts out of fetching.

- devtools/publish_cassettes.py: maintainer-only staging tool. Walks
  tests/cassettes/, hashes each .yaml, gzips deterministically
  (mtime=0) into .publish_staging/<sha>.yaml.gz, and rewrites the
  manifest. Prints the rsync command for the maintainer to run; this
  script does not upload itself.

- .gitignore: ignore tests/cassettes/* (keep .gitkeep) and
  .publish_staging/.

- CI workflows (unit.yml, contract.yml, e2e.yml): inject
  SPEASY_CASSETTE_FETCH_USER and SPEASY_CASSETTE_FETCH_PASSWORD
  secrets so the conftest fetch hook works on GitHub runners.

- CONTRIBUTING.rst: document the new flow for both contributors
  (set env vars / netrc) and maintainers (record, publish, rsync,
  commit manifest).
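The download-and-verify step described above can be sketched like this; the function name and base-URL constant are assumptions for illustration, not the real conftest API:

```python
import gzip
import hashlib

# Hosting scheme described above: blobs are content-addressed as <sha>.yaml.gz.
CASSETTE_BASE = "https://sciqlop.lpp.polytechnique.fr/data/speasy_cassettes/"

def verify_and_decompress(blob: bytes, expected_sha256: str) -> bytes:
    """Decompress a fetched <sha>.yaml.gz blob and check that the sha256
    of the *uncompressed* YAML matches the manifest entry, so any
    tampering or corruption is caught before the cassette is used."""
    raw = gzip.decompress(blob)
    actual = hashlib.sha256(raw).hexdigest()
    if actual != expected_sha256:
        raise RuntimeError(f"cassette hash mismatch: {actual} != {expected_sha256}")
    return raw

# Round-trip demo with a tiny fake cassette body.
yaml_body = b"interactions: []\n"
sha = hashlib.sha256(yaml_body).hexdigest()
assert verify_and_decompress(gzip.compress(yaml_body), sha) == yaml_body
```

Because the manifest stores the hash of the uncompressed content, the same check works regardless of how the server compressed the blob.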
if any("import pytest" in ln for ln in lines):
    block = [f"\n{MARKER_LINE_PREFIX}{tier}\n", "\n"]
    lines[insert_at:insert_at] = block
    path.write_text("".join(lines))
Two infrastructure fixes that originally surfaced during PR 5 (CDA)
and PR 9 (CDPP3DView) cassette migrations, backported here where they
belong:

1. speasy/core/http.py Response.url: try/except AttributeError fallback
   for vcrpy's VCRHTTPResponse which delegates geturl() to http.client
   that reads self.url — an attribute the cassette response doesn't
   carry. Without this, every cassette-replayed response that triggers
   a debug-log of resp.url crashes.

2. tests/conftest.py _canonical_rewrite_rule_for_vcr autouse fixture:
   Pin speasy.core.url_utils._REWRITE_RULES_ (cached at module import
   time) to the recording-time policy for any vcr-marked test. Without
   this, a developer with a custom http_rewrite_rules entry in
   ~/.config/speasy/config.ini sees replay failures because the
   replay-side URL no longer matches the cassette.
@jeandet jeandet force-pushed the modernisation/pr5-cda-cassettes branch from eb0e99a to 9cfa61f on May 11, 2026 17:24
jeandet added 2 commits May 11, 2026 20:18
vcrpy's filter_headers and filter_query_parameters only scrub the
REQUEST side. Response headers like Set-Cookie (JSESSIONIDs from CSA,
session tokens from AMDA) and certain response bodies (AMDA's
auth.php returns a 32-char hex hash that may be derivable from
credentials) were being committed verbatim into cassettes.

Add before_record_response callback in vcr_config:
- Drops Set-Cookie response headers
- Replaces any 32-char hex body (matching AMDA auth.php response
  shape) with <SCRUBBED>

This guards future recordings. Existing cassettes are scrubbed in a
follow-up one-shot script (see PR description).
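In the spirit of that description, the callback might look like the sketch below. The dict shape vcrpy hands to `before_record_response` is assumed here; only the scrubbing rules come from the commit message:

```python
import re

HEX32 = re.compile(rb"^[0-9a-f]{32}$")

def before_record_response(response):
    """Scrub a vcrpy response dict before it is written to a cassette:
    drop Set-Cookie response headers and replace 32-char hex bodies
    (the AMDA auth.php token shape) with a placeholder."""
    headers = response.get("headers", {})
    for name in [n for n in headers if n.lower() == "set-cookie"]:
        del headers[name]
    body = response.get("body", {}).get("string", b"")
    if isinstance(body, str):
        body = body.encode()
    if HEX32.match(body.strip()):
        response["body"]["string"] = "<SCRUBBED>"
    return response

resp = {
    "headers": {"Set-Cookie": ["JSESSIONID=abc"], "Content-Type": ["text/plain"]},
    "body": {"string": "0123456789abcdef0123456789abcdef"},
}
before_record_response(resp)
print(resp["headers"], resp["body"]["string"])
```

Returning the (mutated) response dict is what vcrpy expects from a `before_record_response` hook; returning `None` would drop the interaction entirely.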
Sister script to the new before_record_response callback in
conftest. Scrubs the same patterns (Set-Cookie response headers,
32-char hex auth.php response bodies) from cassettes that were
recorded before the callback existed. Idempotent — safe to re-run.

Used once to retroactively clean the existing AMDA + CSA cassettes
on the modernisation/pr3-mocking-infra branch. Future recordings
are automatically scrubbed at record time by the conftest callback.
@jeandet jeandet force-pushed the modernisation/pr5-cda-cassettes branch from 9cfa61f to 93ff138 on May 11, 2026 18:24
jeandet added 7 commits May 11, 2026 20:39
The cassette hosting at sciqlop.lpp.polytechnique.fr/data/speasy_cassettes/
is now public-read. Cassettes are content-addressed by sha256 — URLs
are unguessable for outsiders and any tampering is caught on download
via the existing hash verification in _fetch_cassette.

Practical benefits:
- Fork PRs can run the cassette-replaying unit tier (previously
  blocked: GitHub Actions doesn't expose repo secrets to fork PRs).
- New contributors need no credential setup to run the tests.
- CI workflows lose the SPEASY_CASSETTE_FETCH_USER/PASSWORD env
  injection (no longer needed).

Cassettes are still scrubbed (Set-Cookie response headers and AMDA
auth.php hash response bodies) by the before_record_response callback
in vcr_config, so no session/credential material reaches the cassette
content itself.
Generated by devtools/publish_cassettes.py from cassettes recorded
against live amda.irap.omp.eu. Each entry maps a relative path under
tests/cassettes/ to the sha256 of the cassette's uncompressed YAML
content. The .yaml.gz files for these hashes need to be uploaded by
a maintainer to:

    https://sciqlop.lpp.polytechnique.fr/data/speasy_cassettes/

via rsync from .publish_staging/. After upload, the conftest fetch
hook will download cassettes on demand for unit-tier test runs that
hit AMDA code paths.

Total: 90.6 MB uncompressed, 10.9 MB compressed across 22 cassettes.
The original recording captured a HEAD if-modified-since request to
spdf.gsfc.nasa.gov/pub/catalogs/all.xml because Speasy's diskcache
was already warm at recording time. On replay with a clean cache,
Speasy issues a full GET which has no matching cassette entry,
failing with CannotOverwriteExistingCassetteException.

Re-recorded with SPEASY_CACHE_PATH set to a fresh tempdir (the
pattern PR 5 established) so the cassette captures the GET path.
The 50 MB cassette content is the full AMDA observatory tree.
The scrub_existing_cassettes.py one-off removed Set-Cookie response
headers and replaced AMDA auth.php response bodies (32-char hex
session tokens) with <SCRUBBED>. Content-addressed cassette hashes
changed for the affected files; manifest updated to match.
@jeandet jeandet force-pushed the modernisation/pr5-cda-cassettes branch from 93ff138 to 13874fa on May 11, 2026 18:41
jeandet added 6 commits May 12, 2026 10:21
The earlier re-record only cleared SPEASY_CACHE_PATH but kept the
populated SPEASY_INDEX_PATH (Speasy's index path is separate from the
cache). With a populated index, Speasy's catalog-loader sees the
inventory as 'already known' and issues HEAD if-modified-since
revalidations instead of the full GETs that a fresh-state install
does. Result: cassette captured HEAD only; CI (fresh state) issues
GET and fails to match.

Re-recorded with both SPEASY_CACHE_PATH and SPEASY_INDEX_PATH set to
fresh tempdirs so the catalog GETs are captured.
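The fresh-state recording setup amounts to pointing both paths at throwaway directories before recording; the env var names mirror those above, the tempdir prefixes are arbitrary:

```python
import os
import tempfile

# Point BOTH the cache and the index at fresh tempdirs so Speasy starts
# from a blank state and issues the full catalog GETs (not HEAD
# if-modified-since revalidations) that the cassette must capture.
os.environ["SPEASY_CACHE_PATH"] = tempfile.mkdtemp(prefix="speasy-cache-")
os.environ["SPEASY_INDEX_PATH"] = tempfile.mkdtemp(prefix="speasy-index-")
```

Setting only SPEASY_CACHE_PATH is not enough: a warm index makes the catalog-loader treat the inventory as already known, which is exactly the bug this re-record fixes.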
- Skip 4 tests whose cassettes blow the 150 MB unit-tier budget (full
  inventory fetch, FEEPS electron intensity, MMS FGM virtual-parameter
  fallback, EQ_PP_MAM via inventory) - all keep equivalent live coverage in
  tests/test_cdaweb_contract.py.
- Drop the MMS2_SCM_SRVY_L2_SCSRVY case from both ddt-driven tests
  (MMS2 SCM survey returns ~76 MB per request because the API serves the
  day-aligned CDF chunk regardless of the requested time window).
- Drop the MMS1_SCM_BRST_L2_SCB case from both ddt-driven tests (10 min
  of burst SCM is a ~380 MB CDF).
- Shrink large API/FILE windows where it preserves the test's intent
  (PSP ISOIS, MMS FGM sanitised).
- Skip ConcurrentRequests.test_get_variable: thread-pool requests are not
  reliably intercepted by VCR, breaking replay.
- Skip test_get_cluster_fgm_data: upstream-data assumption no longer
  holds (CDA now returns Cluster C1 FGM data for the 2018-03/2016-03
  windows the test asserted None on).
Four daily-cron probes covering the live CDA paths whose cassettes
were dropped from the unit tier for size reasons (full inventory
fetch and the FEEPS electron-intensity request) plus two cheap
smoke checks (short THA fetch, inventory dataset presence).
Mocks speasy.data_providers.cda.http.get to return a 500 so the unit
tier verifies that a CDA server error surfaces as CdaWebException
rather than being silently swallowed.
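The shape of that failure-path check can be sketched with stand-ins for speasy's real provider function and exception type (both invented here for illustration):

```python
from unittest import mock

class CdaWebException(Exception):
    """Stand-in for speasy's real CdaWebException."""

def get_cda_variable(url, http_get):
    # Sketch of the code path under test: any non-200 from the CDA web
    # service must surface as CdaWebException, not be silently swallowed.
    resp = http_get(url)
    if resp.status_code != 200:
        raise CdaWebException(f"CDA server error: {resp.status_code}")
    return resp

# Mock the HTTP layer to return a 500, as the unit test does with
# speasy.data_providers.cda.http.get.
http_get = mock.Mock(return_value=mock.Mock(status_code=500))
try:
    get_cda_variable("https://cdaweb.gsfc.nasa.gov/WS/cdasr/1", http_get)
except CdaWebException as exc:
    print("propagated:", exc)
```

Because the HTTP layer is mocked, this runs in the unit tier without any network or cassette.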
@jeandet jeandet force-pushed the modernisation/pr5-cda-cassettes branch from 13874fa to f04e51f on May 12, 2026 10:08
@sonarqubecloud

Quality Gate failed

Failed conditions
8 Security Hotspots
E Security Rating on New Code (required ≥ A)

See analysis details on SonarQube Cloud


