```ts
function newError(name: string) {
  const factory = errorFactories.get(name)
  if (factory) {
    return factory()
```
Check failure (Code scanning / CodeQL): Unvalidated dynamic method call (High)

Copilot Autofix (AI, 2 months ago):
General approach: ensure that the dynamically selected callable (factory) is validated before invocation. Specifically, even though we already check truthiness, we should require that the looked-up value is a function. This prevents accidental invocation of non-function values if errorFactories is ever misconfigured, and satisfies CodeQL’s requirement for validating dynamic dispatch based on user-controlled keys.
Best concrete fix: in packages/core/src/rpc/serializeError/index.ts, inside newError(name: string), change the if (factory) { return factory() } guard so that it also verifies typeof factory === 'function'. If factory exists but is not a function, we should ignore it and fall back to the constructor logic already present, preserving existing behavior as much as possible while avoiding unsafe calls. No other behavior in the file needs to change, and we don’t need new imports or types.
Specific changes:

- File: packages/core/src/rpc/serializeError/index.ts
- Around lines 30–35 (the `newError` function), replace:

  ```ts
  const factory = errorFactories.get(name)
  if (factory) {
    return factory()
  }
  ```

  with:

  ```ts
  const factory = errorFactories.get(name)
  if (typeof factory === 'function') {
    return factory()
  }
  ```

This keeps functionality intact for valid factory functions while hardening against misconfigurations and satisfying the static analyzer.
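The validated-dispatch pattern the autofix describes can be exercised in isolation. A minimal sketch, where the Map contents are illustrative rather than taken from the codebase:

```typescript
// Registry of error factories keyed by name. The key may arrive from
// untrusted RPC input, so the looked-up value must be validated before
// it is invoked.
const errorFactories = new Map<string, unknown>([
  ['TypeError', () => new TypeError()],
  // A misconfigured entry: present and truthy, but not callable.
  ['Broken', 'not a function'],
])

function newError(name: string): Error {
  const factory = errorFactories.get(name)
  // Validate the dynamic lookup before invoking it. A truthiness check
  // alone would let the misconfigured entry through and throw at runtime.
  if (typeof factory === 'function') {
    return factory()
  }
  return new Error()
}
```

The `typeof` check subsumes the old truthiness check, so valid factories behave exactly as before while non-callable entries fall through to the default.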
```diff
@@ -29,7 +29,7 @@
 function newError(name: string) {
   const factory = errorFactories.get(name)
-  if (factory) {
+  if (typeof factory === 'function') {
     return factory()
   }
   const ErrorConstructor = errorConstructors.get(name) ?? Error
   if (name === 'AggregateError') {
     return new AggregateError([])
   }
   return new ErrorConstructor()
```
Check failure (Code scanning / CodeQL): Unvalidated dynamic method call (High)

Copilot Autofix (AI, 2 months ago):
In general, to fix unvalidated dynamic method/constructor calls, we should not blindly invoke the result of a dynamic lookup when the lookup key is tainted. Instead, we should validate that the resolved value is of the expected type (here, a constructible function) and fall back to a safe default if it is not. When the dynamic lookup comes from a controlled registry such as a Map, this amounts to type‑checking the value from the map before using it.
For this specific case, the best fix is to make newError resilient to unexpected entries in errorConstructors. We keep the current fallback behavior (use Error when the name is not in the map, and a special case for AggregateError), but we additionally verify that ErrorConstructor is actually a function that can be used with new. If it is missing or invalid, we safely fall back to the built‑in Error rather than calling new ErrorConstructor() and risking a runtime exception. This change should be done in packages/core/src/rpc/serializeError/index.ts, around lines 30–39 where newError is defined. No new imports are needed: the fix uses existing language features (typeof checks) and existing constructors (Error, AggregateError).
Concretely:

- Replace the simple assignment `const ErrorConstructor = errorConstructors.get(name) ?? Error` with logic that:
  - Retrieves the candidate from the map.
  - Checks that `typeof candidate === 'function'`.
  - Falls back to `Error` if not.
- Leave the `AggregateError` special case intact, so that behavior for that particular error type is unchanged.
- Then call `new ErrorConstructor()` as before, but now `ErrorConstructor` is guaranteed to be a function, making the dynamic call validated and safe.
```diff
@@ -32,7 +32,9 @@
   if (factory) {
     return factory()
   }
-  const ErrorConstructor = errorConstructors.get(name) ?? Error
+  const candidate = errorConstructors.get(name)
+  const ErrorConstructor =
+    typeof candidate === 'function' ? candidate : Error
   if (name === 'AggregateError') {
     return new AggregateError([], 'AggregateError')
   }
```
…extra/width params Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
rpcProps.sources was reading self.sources → self.sampleInfo → self.cellData. Every completed fetch changed sampleInfo, which changed rpcProps, which fired SettingsInvalidate → clearAllRpcData → cleared cellData → changed sampleInfo → changed rpcProps again: an infinite loop.

Fix: rpcProps.sources now calls getSources with renderingMode:'alleleCount' internally, which never reads sampleInfo. The client's sources view still reads sampleInfo for rendering (safe, since it is not in rpcProps). Server-side, executeVariantCellData expands sources without HP to haplotypes after computing its own sampleInfo; sources from clustering already carry HP and pass through.

Also guards nrow against 0 in phased mode before sampleInfo arrives. Documents the rpcProps loop trap in ARCHITECTURE.md.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
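The invalidation loop above can be modeled without MobX. The sketch below is a minimal stand-in (names mirror the commit message, the logic is purely illustrative) showing why serialized RPC props must be a function of settings only, never of fetch-derived state:

```typescript
// Minimal model of the dependency trap. sampleInfo is derived from
// fetched cellData, so it changes after every completed fetch.
interface State {
  renderingMode: 'alleleCount' | 'phased'
  sampleInfo: string[] // fetch-derived
}

// Unsafe: in phased mode the serialized RPC props read sampleInfo, so
// every fetch changes the props, which invalidates and re-fetches.
function unsafeRpcProps(s: State) {
  return { sources: s.renderingMode === 'phased' ? s.sampleInfo : ['all'] }
}

// Safe: force the mode that never reads sampleInfo, so the props are
// byte-stable across fetches and the invalidation loop cannot start.
function safeRpcProps(s: State) {
  return {
    sources: unsafeRpcProps({ ...s, renderingMode: 'alleleCount' }).sources,
  }
}
```

The client-side rendering view can still read the fetch-derived state freely; the trap only exists for values that feed back into the RPC invalidation comparison.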
…rces

- Export makeHaplotypeSources from getSources.ts so both the client sources view and the server expansion use the same logic
- Extract sourcesBase getter: layout+subtree-filtered, never haplotype-expanded, safe for rpcProps (no sampleInfo dependency). sourcesBase lives in its own .views() block so sources and rpcProps can both reference it via self
- Simplify sources to: sourcesBase + conditional phased expansion. This also avoids tracking sampleInfo in non-phased mode (MobX short-circuits before the sampleInfo read), which was a latent over-tracking issue
- Simplify rpcProps to a one-liner using self.sourcesBase with a short comment
- Simplify server-side expansion in executeVariantCellData to use makeHaplotypeSources

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
sources no longer returns [] in phased mode before sampleInfo arrives (returns sourcesBase instead), so the sourcesVolatile fallback was dead code. The || 1 guard still handles the subtreeFilter-excludes-all case. Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- Delete shared/webglRpcUtils.ts: inline baseToAscii (one-liner) into mismatch/extract, move getEffectiveStrand into gap/extract and pairOrientationToNum into buildBaseFeatureData — each had exactly one caller and the file name was misleading
- Split alignmentComponentUtils.ts (471 lines) into tooltipUtils.ts (tooltip data + format functions) and a trimmed alignmentComponentUtils (canvas coords, color palette, CIGAR_TYPE_LABELS)
- Extract buildChainFeatureData into shared/buildBaseFeatureData.ts, removing the inline anonymous builder and two `as` casts from executeRenderChainData; mirrors how pileup passes a named reference

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
The definition was never imported — the live ReducedFeature used by AlignmentsFeatureDetail is defined in getSAFeatures.ts and has a completely different shape (mate/seqLength/syntenyId vs name/flags/SA). Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Adds a comment flagging that sourcesWithoutLayout reads sampleInfo (cellData-derived) and must never appear in rpcProps. Also adds a one-line hierarchy note to sourcesBase so the three-getter relationship is visible at a glance. Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- Remove unused activeFilters() method and migrateVariantSettings wrapper
- Replace getGenotypeMapForFeature's unknown cast with CellDataResult typing
and proper mode narrowing; remove local GenotypeMap interface
- colorByAutorun: replace early returns with nested ifs per project style
- computeVariantCells: fix mixed ! assertion + || fallback on feature.get('type')
- executeVariantCellData: remove redundant as ArrayBuffer[] cast
- VariantComponent/VariantMatrixComponent: carry featureInfo/featureData on the
getFeatureUnderMouse result so getEnrichedFeature avoids a redundant search
- useVariantVirtualScroll: remove early returns from wheel handler
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
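The mixed `!` assertion plus `||` fallback called out above is self-contradictory: the non-null assertion promises the value exists, while the fallback handles its absence anyway. A hypothetical illustration (the `feature.get` shape is assumed for the sketch, not taken from the codebase):

```typescript
// A minimal stand-in for a feature whose 'type' field may be absent.
const feature = {
  get(key: string): string | undefined {
    return key === 'type' ? undefined : 'x'
  },
}

// Contradictory: `!` asserts the value is non-null, then `||` handles
// the missing case anyway. It compiles, but the assertion is a lie
// whenever the field is actually absent.
const mixed = feature.get('type')! || 'unknown'

// Clearer: drop the assertion and state the fallback once.
const fixed = feature.get('type') ?? 'unknown'
```

Both expressions evaluate the same way at runtime; the fix is about removing the false type-level claim, not changing behavior.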
Mirrors plugins/alignments layering: shaders live alongside the display (not under components/), and per-display constants/types move into shared/types.ts. Renderer-level interfaces (MultiSyntenyBackend, MultiSyntenyRenderState, MultiSyntenyCanvasRenderOpts) stay in components/ as rendererTypes.ts. Sets up the layering needed for per-feature folders: shared/ and any future features/ can import shaders/ and shared/types.ts without reaching back into components/.
Mirrors plugins/alignments features/X/ layout. Each feature now owns a folder with packGpu.ts + drawCanvas.ts:

- features/coverage/: packCoverageForGpu, drawCoverageCanvas
- features/snpCoverage/: packSnpCoverageForGpu, drawSnpCoverageCanvas
- features/indicator/: packIndicatorsForGpu, drawIndicatorCanvas

multiSyntenyGpuData.ts retains the fill (block-instance) packer, the shared computeBlockRenderParams helper, and the BlockGeometryData type. The renderer's renderBlocks loop calls the per-feature drawCanvas helpers instead of inlining drawCoverageBins / drawSnpSegments / drawIndicators.
Mirrors plugins/alignments features/X/{extract,hitTest}.ts layout for mismatch / insertion / deletion. Each feature owns its emit functions for both the GPU fill InstanceBuilder and the hit-test items array.

- shared/instanceWriter.ts: addInstance + buildColorArrays
- shared/extractCigarFeatures.ts: buildGpuOpsVisitor / buildHitTestOpsVisitor
- shared/hitTestTypes.ts: CigarHitResult
- features/mismatch/{extract,hitTest}.ts
- features/insertion/{extract,hitTest}.ts
- features/deletion/{extract,hitTest}.ts

The CIGAR/CS visitor builders dispatch each op type into a per-feature emit. multiSyntenyGpuData.ts loses its inline visitor; hitTesting.ts loses its inline collectOpsItems body.
Final colocation step. The fill (block-instance) feature now owns its own folder, mirroring the plugins/alignments features/X/ template:

- features/fill/packGpu.ts: prepareBlockGeometry, BlockGeometryData
- features/fill/drawCanvas.ts: drawFillCanvas (block-instance loop)
- features/fill/packGpu.test.ts (renamed from multiSyntenyGpuData.test)
- shared/blockRenderParams.ts: computeBlockRenderParams (renderer-shared)

multiSyntenyGpuData.ts is gone; its remaining content moved to the fill feature folder or shared/. Drops dead globalMaxDepth loop and unused FILL_STRIDE alias from Canvas2DMultiSyntenyRenderer.
Moves multiSyntenyColorUtils.ts (used by features/fill, components/ Canvas2DMultiSyntenyRenderer, and model.ts) out of components/ into shared/. Closes the last layering hole — features/X/ no longer reaches into components/. Also picks up eslint --fix import ordering across the renamed modules.
Cohesion pass:

- shaders/ → shaders/slang/. Matches alignments precisely (the slang subfolder makes room for non-slang shader assets and is the existing alignments convention).
- hitTesting.ts → hitTestPipeline.ts. Aligns the orchestrator-file name with alignments' LinearAlignmentsDisplay/components/hitTestPipeline.ts.
- Inline fillSyntenyUniforms back into GpuMultiSyntenyRenderer.ts and delete multiSyntenyGpuUtils.ts. Matches alignments where fillFrameUniforms / fillArcUniforms / writePaletteToUbo all live next to the renderer that uses them.
- Split features/fill/packGpu.test.ts. computeBlockRenderParams tests move to shared/blockRenderParams.test.ts; packCoverageForGpu tests move to features/coverage/packGpu.test.ts. Each test file now matches the unit it tests.
The MultiSyntenyBackend interface now matches AlignmentsBackend's shape:

- sync(sources): void
- renderBlocks(state): boolean
- dispose(): void

Replaces the four uploadXForBlock methods + clearAllBlocks with a single sync(sources) entry point that takes the full RPC payload + settings. Each backend internally iterates rpcDataMap and calls per-feature pack + upload primitives. Each feature now owns its GPU pass + uploader (matches the alignments file template):

- features/fill/uploadGpu.ts: FILL_PASS, uploadFillToGpu
- features/coverage/uploadGpu.ts: COVERAGE_PASS, uploadCoverageToGpu
- features/snpCoverage/uploadGpu.ts: SNP_PASS, uploadSnpCoverageToGpu
- features/indicator/uploadGpu.ts: INDICATORS_PASS, uploadIndicatorsToGpu

The Canvas2D backend builds a regions Map via buildSyntenyRegionMap, mirroring alignments' buildAlignmentsRegionMap. The GPU backend's sync calls the per-feature uploadX free functions (analogous to alignments' GpuAlignmentsRenderer.sync).

Other cleanups:

- Drop dead regionStart field from upload payloads (set, never read)
- Move SyntenyColorPalette from model.ts to shared/types.ts
- Drop unused CigarHitResult re-export from hitTestPipeline.ts
- Add named color constants ROW_BG_ALT, ROW_DIVIDER, LABEL_TEXT
- renderBlocks now returns boolean (matches AlignmentsBackend)
- Remove `| null` from BedData.score (library never produces null)
- Add BedData interface to give proper types to parsed BED fields, replacing the previous Record<string, unknown> fallback
- Drop UcscTranscriptOutput interface, let return type be inferred
- Fix isBedMethylFeature: `+(col6 || 0) === start` gave a false positive when start=0 and col6 was absent; now checks col6/col7 are defined before comparing
- Add tests for the start=0 short-line edge case

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
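The start=0 false positive is easy to reproduce: `+(col6 || 0)` coerces an absent column to 0, which compares equal to a genuine start of 0. A simplified before/after sketch (the real check also inspects col7; the function names here are illustrative):

```typescript
// bedMethyl-style features repeat start/end in later columns. With a
// short line (col6 undefined) and start=0, the old coercion matched.
function isBedMethylFeatureOld(start: number, col6?: string): boolean {
  // +(undefined || 0) === 0, so start=0 always matches: false positive.
  return +(col6 || 0) === start
}

function isBedMethylFeatureNew(start: number, col6?: string): boolean {
  // Require the column to actually be present before comparing.
  return col6 !== undefined && +col6 === start
}
```

The fix keeps behavior identical for well-formed lines and only changes the short-line case.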
…idation
Cache forward + reverse assemblyNameMap once on the BaseGfaTabixAdapter
instance. resolveTabixRefName uses the cached reverse map for O(1) lookup
instead of iterating Object.entries(map) per region; remapGenome and
getChromSizes hit the cached forward map instead of getConf per call;
annotateFeaturesWithBubbleCs receives the prebuilt reverse map directly so
it no longer rebuilds per query (HPRC chr20 ~90 paths × 1 build/query).
Extract partitionByRef helper that walks allSegs once and returns
{refSegments, otherSegments, refByOrd}; replaces the inline three-step
partition in getMultiPairFeaturesFromSegments.
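A single-pass partition in the shape of the `partitionByRef` helper described above can be sketched as follows (the segment fields are illustrative, not the adapter's actual types):

```typescript
interface Seg {
  name: string
  isRef: boolean
  ord: number
}

// Walk the segment list once, splitting reference vs other segments and
// building an ordinal -> segment index for the reference path, instead
// of three separate passes over allSegs.
function partitionByRef(allSegs: Seg[]) {
  const refSegments: Seg[] = []
  const otherSegments: Seg[] = []
  const refByOrd = new Map<number, Seg>()
  for (const seg of allSegs) {
    if (seg.isRef) {
      refSegments.push(seg)
      refByOrd.set(seg.ord, seg)
    } else {
      otherSegments.push(seg)
    }
  }
  return { refSegments, otherSegments, refByOrd }
}
```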
Remove the swallowing try/catch around bubbles header parse — if the file
opened it should parse successfully, and silent fallback hid real errors.
Consolidate bubbleAnnotator.ts + bubbleCs.ts into a single bubbleOverlay.ts
module with the BubbleSite-per-locus runtime model documented at the type
declaration. Rename the corresponding tests and document the X-CIGAR
contract in GRAPH_ARCHITECTURE.md "Fragile boundaries". Add an explicit
"BaseGfaTabixAdapter abstraction" subsection pinning the single-shard /
sharded subclass split so future agents do not collapse it. ADR-013
captures the bubble-shape per-pair on-disk decision.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
…ouping

Promote Phase 4 snarls index to top of backlog: it's the gating item for the C3 chr20 path-symmetry claim (chr20 needs snarl-aware expansion, not strict byte-isomorphism) and unlocks snarl-boundary context expansion, zoom-to-snarl UX, and the snarl-collapse coarsening tier.

Add backlog item: bubble-row regrouping at preprocess time. The runtime groups per-pair rows into per-site BubbleSite records on every region query; the rust tool already has the per-site shape internally before splitting on emit, so flipping the emit step is ~20 LOC and deletes ~50 LOC of bubbleOverlay.ts. Pre-publication is the cheap moment for the schema break per the format-spec compatibility policy.

Demote Phase 5 CI provisioning to mechanical / non-gating; the Jest suite has shipped, only the vg/odgi/chunkix CI install remains.

Refresh "What exists today": current line numbers, new BaseGfaTabixAdapter abstraction subsection (single-shard vs sharded subclasses, cached forward+reverse assemblyNameMap), updated bubble pipeline pointers.

Drop the "megabase-scale viewing requires coarsening" surfaced issue (now subsumed by backlog item 2). Replace with the loadBinaryIndex eager-load ceiling that's the next scaling bound past chr20.

Update S3 chr20 file list to current state and flag that the bubble file upload waits on backlog item 3 to avoid double re-upload.

Log the GfaTabixAdapter caching + partition + try/catch refactor in GRAPH_COMPLETED.md.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
- Drop odgi/polychain/tile pipeline entirely; zero odgi dependency
- Replace segments.bin with synteny.bed.gz (reference-coord tabix)
- Replace edges.bin with edges.spatial.bed.gz (bidirectional, bubble-ref-span keyed)
- GetSubgraph: pos.bed.gz + synteny.bed.gz + per-haplotype pos.bed.gz + edges + faidx (93 range requests vs 784 with old approach)
- Large-region GraphGenomeView: synteny rectangles in reference coordinate space
- Concrete test fixtures (linear/bubble/inversion/insertion/multipath GFAs)
- Per-phase test specs with exact oracles and assertions
- Format reference appendix separated from plan narrative

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- Synthetic GFA fixtures must be committed before Phase 2 Rust tests
- volvox_chr1_0-50k fixture built and committed at end of Phase 2, not before
- Clarify GetSubgraph: 93 tabix range requests + faidx per ordinal (profile step 5)

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Each phase now has a concrete chr20 validation step (Node.js scripts + puppeteer) that must pass before moving to the next phase. Small fixtures and volvox prove correctness in theory; chr20 at HPRC scale proves it in production. Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
This PR proposes a large change to help transition jbrowse 2 to a WebGPU-powered rendering system (with additional WebGL and HTML5 canvas fallbacks).
See here for a couple demos
https://docs.google.com/document/d/11YIM9xtY4xBqQQjzvTyZ3sERMmh9sfclt14KwupS8dg/edit?usp=sharing
Background
JBrowse (both 1 and 2) have been "optimized for side scrolling". The "static blocks" methodology reflects this: tiles of data are prerendered side by side. Unfortunately, in order to get truly smooth zooming, we need to re-think our rendering pipeline and target GPU-based rendering. The reason a GPU is needed is that we can provide all the "coordinate data" to the GPU renderer, and the GPU just applies transformations on zoom level changes or side scrolls using fast, minimal processing. Contrast this with HTML5 canvas, where on every zoom level change we have to completely re-draw everything on the CPU in JavaScript.
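The argument above reduces to coordinate math: geometry is uploaded once in genome coordinates, and pan/zoom only changes a small per-frame transform (in a real renderer, shader uniforms). A minimal sketch in plain TypeScript standing in for the vertex-shader transform (the arrays and names are illustrative):

```typescript
// Feature x-positions in genome coordinates, conceptually uploaded to
// the GPU once and never rebuilt.
const genomeX = [1_000, 2_000, 3_000]

// Per-frame "uniforms": visible region start (bp) and scale (bp per
// pixel). Scrolling or zooming only changes these two numbers; the
// vertex data is untouched, so no CPU re-draw is needed.
function toScreenX(viewStart: number, bpPerPx: number): number[] {
  return genomeX.map(x => (x - viewStart) / bpPerPx)
}
```

Scrolling changes only `viewStart` and zooming only `bpPerPx`; the `genomeX` array never changes, which is exactly the per-frame work the CPU canvas path has to repeat on every zoom step.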
Video
This video demo should be compelling to anyone who has ever zoomed in or out in JBrowse: it simply demonstrates smooth zooming. Compare against master, where the entire screen blanks and re-renders at each zoom level; the smooth zoom feels much more natural.
out.mp4
Context
This was motivated by work with Pratik on the lorax project (https://lorax.ucsc.edu/). That project is deeply WebGL driven and is interested in integrating with jbrowse (meeting with Pratik later this week).