Conversation
**Note: Reviews paused.** It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in your review settings.
**Walkthrough**
Adds a discovery-aware release-matching path and helpers, refactors the existing strict matcher into discovery-capable functions with resolution context, and switches three internal cross-seed call sites to use the discovery-enabled matcher.
Sequence Diagram(s): (omitted)
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks | ✅ 2 | ❌ 1
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Actionable comments posted: 1
🧹 Nitpick comments (2)
internal/services/crossseed/matching.go (1)
489-501: Prefix-based group matching is intentionally permissive.
The `HasPrefix` check (line 500) allows groups like `"FLUX"` to match `"FLUXUS"`. Given that this is specifically for discovery filtering (with downstream file verification), this is an acceptable tradeoff. However, document this behavior if it is not already covered elsewhere.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/services/crossseed/matching.go` around lines 489 - 501: the function discoveryReleaseGroupCompatible uses prefix matching (strings.HasPrefix) between normalized source and candidate group values, which intentionally permits cases like "FLUX" matching "FLUXUS". Update the documentation/comments near discoveryReleaseGroupCompatible to state explicitly that this permissive behavior is by design for discovery filtering, that downstream file verification will perform stricter checks, and that strings.HasPrefix specifically is used, so future readers understand the tradeoff.

internal/services/crossseed/matching_discovery_test.go (1)
15-156: Prefer a table-driven layout for these matcher cases.
The repeated `Service` setup and near-identical `rls.Release` fixtures make this harder to extend as more discovery exceptions are added. Collapsing these into one or two table-driven tests would keep new cases reviewable and reduce fixture drift.

As per coding guidelines: "Prefer table-driven test cases in Go tests".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/services/crossseed/matching_discovery_test.go` around lines 15 - 156, Convert these near-duplicate tests into one or two table-driven tests: create a slice of test cases (with fields: name, source *rls.Release, candidate *rls.Release, expectDiscovery bool, optional expectStrict bool) and loop with t.Run for each case; instantiate Service once (Service{stringNormalizer: stringutils.NewDefaultNormalizer()}) outside the loop and reuse it; for each case call svc.releasesMatchDiscovery(source, candidate, false) and assert against expectDiscovery (and optionally call svc.releasesMatch when expectStrict is provided) so you remove repeated fixtures and subtests while preserving all scenarios referenced by TestReleasesMatchDiscovery_* and TestReleasesMatchDiscovery_KeepsSourceAndVersionBoundaries.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@internal/services/crossseed/matching.go`:
- Around line 459-460: The code currently calls
s.stringNormalizer.Normalize(...) for sourceRes and candidateRes which can panic
when s.stringNormalizer is nil; instead use the local fallback variable
normalizer (created earlier as the non-nil normalizer) and call
normalizer.Normalize(source.Resolution) and
normalizer.Normalize(candidate.Resolution) so the fallback to
stringutils.DefaultNormalizer is respected and prevents a nil pointer
dereference.
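The fallback pattern described in this prompt can be sketched standalone. The interface and type names below are illustrative stand-ins, not the project's actual `stringutils` API:

```go
package main

import (
	"fmt"
	"strings"
)

// Normalizer stands in for the project's string-normalizer interface (hypothetical).
type Normalizer interface {
	Normalize(string) string
}

type defaultNormalizer struct{}

func (defaultNormalizer) Normalize(s string) string {
	return strings.ToLower(strings.TrimSpace(s))
}

// pickNormalizer returns a usable normalizer even when the configured one is nil,
// so callers never dereference a nil field.
func pickNormalizer(n Normalizer) Normalizer {
	if n == nil {
		return defaultNormalizer{}
	}
	return n
}

func main() {
	var unset Normalizer // nil, as when the service field was never wired
	fmt.Println(pickNormalizer(unset).Normalize("  MiXeD Case  "))
}
```

Resolving the fallback once at the top of the function, then using only the local variable, is what keeps later normalization calls safe.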
---
Nitpick comments:
In `@internal/services/crossseed/matching_discovery_test.go`:
- Around line 15-156: Convert these near-duplicate tests into one or two
table-driven tests: create a slice of test cases (with fields: name, source
*rls.Release, candidate *rls.Release, expectDiscovery bool, optional
expectStrict bool) and loop with t.Run for each case; instantiate Service once
(Service{stringNormalizer: stringutils.NewDefaultNormalizer()}) outside the loop
and reuse it; for each case call svc.releasesMatchDiscovery(source, candidate,
false) and assert against expectDiscovery (and optionally call svc.releasesMatch
when expectStrict is provided) so you remove repeated fixtures and subtests
while preserving all scenarios referenced by TestReleasesMatchDiscovery_* and
TestReleasesMatchDiscovery_KeepsSourceAndVersionBoundaries.
In `@internal/services/crossseed/matching.go`:
- Around line 489-501: The function discoveryReleaseGroupCompatible uses prefix
matching (strings.HasPrefix) between normalized source and candidate group
values, which intentionally permits cases like "FLUX" matching "FLUXUS"; update
the codebase documentation/comments near discoveryReleaseGroupCompatible to
explicitly state this permissive behavior is by design for discovery filtering
and that downstream file verification will perform stricter checks, and mention
the specific use of strings.HasPrefix so future readers understand the tradeoff.
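The permissive prefix rule flagged above can be sketched in isolation. Function names and the normalization step are hypothetical simplifications of the real matcher:

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeGroup is a stand-in for the service's string normalizer (hypothetical).
func normalizeGroup(s string) string {
	return strings.ToUpper(strings.TrimSpace(s))
}

// discoveryGroupCompatible sketches the permissive rule: empty values pass,
// and either normalized group may be a prefix of the other, so "FLUX"
// intentionally matches "FLUXUS". Downstream file verification is expected
// to catch true mismatches.
func discoveryGroupCompatible(sourceGroup, candidateGroup string) bool {
	a, b := normalizeGroup(sourceGroup), normalizeGroup(candidateGroup)
	if a == "" || b == "" {
		return true
	}
	return strings.HasPrefix(a, b) || strings.HasPrefix(b, a)
}

func main() {
	fmt.Println(discoveryGroupCompatible("FLUX", "FLUXUS")) // prefix match passes
	fmt.Println(discoveryGroupCompatible("FLUX", "NTb"))    // unrelated groups fail
}
```

The symmetric `HasPrefix` check is the documented tradeoff: it widens discovery candidates while relying on stricter verification later in the pipeline.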
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
Run ID: d417ffdf-39c4-41e9-bcd6-9622f9a9db1f
📒 Files selected for processing (3)
- internal/services/crossseed/matching.go
- internal/services/crossseed/matching_discovery_test.go
- internal/services/crossseed/service.go
Actionable comments posted: 2
♻️ Duplicate comments (1)
internal/services/crossseed/matching.go (1)
129-145: ⚠️ Potential issue | 🔴 Critical — Reuse the normalizer fallback before any metadata normalization.
`releasesMatchDiscovery()` can still panic when `s.stringNormalizer` is nil and both releases have `Artist`, because lines 143-144 dereference `s.stringNormalizer` before `discoveryMetadataMatch()` applies its fallback.

🐛 Proposed fix

```diff
 func (s *Service) releasesMatchWithDiscovery(source, candidate *rls.Release, findIndividualEpisodes bool, allowDiscoveryRelaxations bool) bool {
+	normalizer := s.stringNormalizer
+	if normalizer == nil {
+		normalizer = stringutils.DefaultNormalizer
+	}
+
 	if source == candidate {
 		return true
 	}
 @@
 	if source.Artist != "" && candidate.Artist != "" {
-		sourceArtist := s.stringNormalizer.Normalize(source.Artist)
-		candidateArtist := s.stringNormalizer.Normalize(candidate.Artist)
+		sourceArtist := normalizer.Normalize(source.Artist)
+		candidateArtist := normalizer.Normalize(candidate.Artist)
 		if sourceArtist != candidateArtist {
 			return false
 		}
 	}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/services/crossseed/matching.go` around lines 129 - 145, The code in releasesMatchWithDiscovery can nil-pointer panic because it calls s.stringNormalizer.Normalize on source.Artist/candidate.Artist before discoveryMetadataMatch applies its fallback; change the flow to reuse the normalizer fallback by checking for s.stringNormalizer == nil (or delegating to discoveryMetadataMatch) before calling Normalize, i.e., obtain normalized artist strings via a helper that uses s.stringNormalizer if non-nil and falls back to the existing discoveryMetadataMatch/alternative normalization when nil, then compare those normalized values instead of directly calling s.stringNormalizer.Normalize.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@internal/services/crossseed/matching.go`:
- Around line 416-433: The current loop in normalizeDiscoveryTitle() removes
region tokens (e.g., "us", "it", "de") regardless of position, collapsing
legitimate titles; change the logic in the tokens processing loop (the tokens
slice -> filtered variable) so region-noise is only skipped when the token is a
trailing suffix (i.e., only continue/ignore when the normalized token is in the
region list AND its index == len(tokens)-1); otherwise treat the token normally
(apply the ignorable character check and append to filtered). This preserves
internal tokens like "IT" in "The IT Crowd" while still stripping true trailing
region codes.
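The trailing-suffix rule described above can be sketched on its own. The token list and function name are illustrative, not the matcher's real identifiers:

```go
package main

import (
	"fmt"
	"strings"
)

// regionTokens is an illustrative subset; the real matcher keeps its own list.
var regionTokens = map[string]bool{"US": true, "UK": true, "IT": true, "DE": true}

// stripTrailingRegion removes a region code only when it is the final token,
// so internal tokens like the "IT" in "The IT Crowd" survive.
func stripTrailingRegion(title string) string {
	tokens := strings.Fields(title)
	if len(tokens) > 1 && regionTokens[strings.ToUpper(tokens[len(tokens)-1])] {
		tokens = tokens[:len(tokens)-1]
	}
	return strings.Join(tokens, " ")
}

func main() {
	fmt.Println(stripTrailingRegion("Shameless US")) // trailing region dropped
	fmt.Println(stripTrailingRegion("The IT Crowd")) // internal token kept
}
```

Gating the skip on `index == len(tokens)-1` is the whole fix: position, not membership alone, decides whether a token is region noise.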
- Around line 209-210: The early return when allowDiscoveryRelaxations is true
causes discoveryMetadataMatch(s, source, candidate) to bypass subsequent
compatibility checks (collection, codec/HDR, language, disc/platform/arch,
variant). Instead of returning early, call discoveryMetadataMatch(s, source,
candidate) to get its boolean result and if it fails return false, but if it
passes continue and run the remaining checks (collection check, codec/HDR check,
language check, disc/platform/arch check, variant check) against
source/candidate; alternatively fold those specific checks into
discoveryMetadataMatch so they are honored in discovery mode as well. Ensure you
update the code paths that reference allowDiscoveryRelaxations,
discoveryMetadataMatch, s, source, and candidate so discovery no longer bypasses
those compatibility checks.
---
Duplicate comments:
In `@internal/services/crossseed/matching.go`:
- Around line 129-145: The code in releasesMatchWithDiscovery can nil-pointer
panic because it calls s.stringNormalizer.Normalize on
source.Artist/candidate.Artist before discoveryMetadataMatch applies its
fallback; change the flow to reuse the normalizer fallback by checking for
s.stringNormalizer == nil (or delegating to discoveryMetadataMatch) before
calling Normalize, i.e., obtain normalized artist strings via a helper that uses
s.stringNormalizer if non-nil and falls back to the existing
discoveryMetadataMatch/alternative normalization when nil, then compare those
normalized values instead of directly calling s.stringNormalizer.Normalize.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
Run ID: ec93faf0-654f-40ed-91d0-f1bcd5ae10f8
📒 Files selected for processing (2)
- internal/services/crossseed/matching.go
- internal/services/crossseed/matching_discovery_test.go
🧹 Nitpick comments (2)
internal/services/crossseed/matching.go (1)
264-284: Consider extracting shared SD fallback logic.
The SD resolution fallback logic is duplicated between the non-discovery path (lines 269-283) and `discoveryMetadataMatch` (lines 471-483). A small helper would reduce duplication.

♻️ Proposed helper extraction

```go
// resolutionsCompatible checks if two resolutions are compatible for matching.
// Empty resolution is allowed to match known SD resolutions (480p, 576p, SD).
func resolutionsCompatible(sourceRes, candidateRes string) bool {
	if sourceRes == candidateRes {
		return true
	}
	isKnownSD := func(res string) bool {
		switch normalizeVariant(res) {
		case "480P", "576P", "SD":
			return true
		default:
			return false
		}
	}
	return (sourceRes == "" && isKnownSD(candidateRes)) || (candidateRes == "" && isKnownSD(sourceRes))
}
```

Then replace both inline checks with `if !resolutionsCompatible(sourceRes, candidateRes) { return false }`.

Based on learnings: "Prefer minimal, reviewable diffs in high-churn areas" - this is a nice-to-have for a future cleanup.
Also applies to: 468-484
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/services/crossseed/matching.go` around lines 264 - 284, Duplicate SD-resolution fallback logic exists in matching.go (the non-discovery path and discoveryMetadataMatch); extract it into a small helper like resolutionsCompatible(sourceRes, candidateRes) that calls normalizer.Normalize on inputs, uses normalizeVariant to detect "480P"/"576P"/"SD", and returns true when either exact match or the allowed empty-SD fallback applies; then replace the inline block in the original match code and the discoveryMetadataMatch checks with if !resolutionsCompatible(sourceRes, candidateRes) { return false } to remove duplication and keep behavior identical.

internal/services/crossseed/matching_discovery_test.go (1)
134-179: Consider consolidating similar subtests into table-driven cases.
The source/version mismatch subtests and the shared compatibility subtests follow similar patterns. Table-driven tests would reduce boilerplate while maintaining clarity.
♻️ Example table-driven refactor for source/version boundaries
```diff
 func TestReleasesMatchDiscovery_KeepsSourceAndVersionBoundaries(t *testing.T) {
 	t.Parallel()

 	svc := &Service{stringNormalizer: stringutils.NewDefaultNormalizer()}

-	t.Run("source mismatch", func(t *testing.T) {
-		t.Parallel()
-
-		source := &rls.Release{
-			Title:      "Movie",
-			Year:       2025,
-			Source:     "WEB-DL",
-			Resolution: "1080p",
-		}
-		candidate := &rls.Release{
-			Title:      "Movie",
-			Year:       2025,
-			Source:     "BluRay",
-			Resolution: "1080p",
-		}
-
-		require.False(t, svc.releasesMatchDiscovery(source, candidate, false))
-	})
-
-	t.Run("version mismatch", func(t *testing.T) {
-		t.Parallel()
-
-		source := &rls.Release{
-			Title:      "Show",
-			Series:     1,
-			Episode:    4,
-			Source:     "WEB-DL",
-			Resolution: "1080p",
-			Version:    "v2",
-		}
-		candidate := &rls.Release{
-			Title:      "Show",
-			Series:     1,
-			Episode:    4,
-			Source:     "WEB-DL",
-			Resolution: "1080p",
-		}
-
-		require.False(t, svc.releasesMatchDiscovery(source, candidate, false))
-	})
+	cases := []struct {
+		name      string
+		source    *rls.Release
+		candidate *rls.Release
+	}{
+		{
+			name:      "source mismatch",
+			source:    &rls.Release{Title: "Movie", Year: 2025, Source: "WEB-DL", Resolution: "1080p"},
+			candidate: &rls.Release{Title: "Movie", Year: 2025, Source: "BluRay", Resolution: "1080p"},
+		},
+		{
+			name:      "version mismatch",
+			source:    &rls.Release{Title: "Show", Series: 1, Episode: 4, Source: "WEB-DL", Resolution: "1080p", Version: "v2"},
+			candidate: &rls.Release{Title: "Show", Series: 1, Episode: 4, Source: "WEB-DL", Resolution: "1080p"},
+		},
+	}
+
+	for _, tc := range cases {
+		t.Run(tc.name, func(t *testing.T) {
+			t.Parallel()
+			require.False(t, svc.releasesMatchDiscovery(tc.source, tc.candidate, false))
+		})
+	}
 }
```

As per coding guidelines: "Prefer table-driven test cases in Go tests".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/services/crossseed/matching_discovery_test.go` around lines 134 - 179, The test TestReleasesMatchDiscovery_KeepsSourceAndVersionBoundaries contains repetitive subtests ("source mismatch" and "version mismatch"); convert it into a table-driven test to reduce duplication by creating a slice of cases (with name, source *rls.Release, candidate *rls.Release, expect bool) and iterate with t.Run for each case, still calling svc.releasesMatchDiscovery; keep the same assertions (require.False for mismatches) but derive expected value from the case to allow adding the shared compatibility subtests into the same table-driven structure and remove the duplicated t.Parallel setup inside each inline subtest.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@internal/services/crossseed/matching_discovery_test.go`:
- Around line 134-179: The test
TestReleasesMatchDiscovery_KeepsSourceAndVersionBoundaries contains repetitive
subtests ("source mismatch" and "version mismatch"); convert it into a
table-driven test to reduce duplication by creating a slice of cases (with name,
source *rls.Release, candidate *rls.Release, expect bool) and iterate with t.Run
for each case, still calling svc.releasesMatchDiscovery; keep the same
assertions (require.False for mismatches) but derive expected value from the
case to allow adding the shared compatibility subtests into the same
table-driven structure and remove the duplicated t.Parallel setup inside each
inline subtest.
In `@internal/services/crossseed/matching.go`:
- Around line 264-284: Duplicate SD-resolution fallback logic exists in
matching.go (the non-discovery path and discoveryMetadataMatch); extract it into
a small helper like resolutionsCompatible(sourceRes, candidateRes) that calls
normalizer.Normalize on inputs, uses normalizeVariant to detect
"480P"/"576P"/"SD", and returns true when either exact match or the allowed
empty-SD fallback applies; then replace the inline block in the original match
code and the discoveryMetadataMatch checks with if
!resolutionsCompatible(sourceRes, candidateRes) { return false } to remove
duplication and keep behavior identical.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
Run ID: 2a8f59fd-c27c-4f3e-869f-0647b61bb41c
📒 Files selected for processing (2)
- internal/services/crossseed/matching.go
- internal/services/crossseed/matching_discovery_test.go
HDR marker from sites is also tripping up. Not sure how far you want to go with this, and whether ignoring HDR for BDMV content is suitable, falling back onto other markers/size, or whether dropping the HDR marker is better suited in something like the pooling pr.
Feel free to commit stuff here @Audionut - unless its all covered by your other PRs 👍
Decide on it in the pooling PR 👍 I committed your suggestion now. |
Actionable comments posted: 1
🧹 Nitpick comments (1)
internal/services/crossseed/matching_discovery_test.go (1)
15-311: Consider consolidating these discovery cases into table-driven tests.
The scenarios are clear, but this file now has repeated setup/assert patterns that will be easier to extend/maintain as a table-driven suite.
As per coding guidelines: "Prefer table-driven test cases in Go tests".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/services/crossseed/matching_discovery_test.go` around lines 15 - 311, Consolidate the many similar TestReleasesMatchDiscovery_* tests into one or more table-driven tests: define a slice of test cases with fields (name, svc setup including Service, NewReleaseCache and stringNormalizer via stringutils.NewDefaultNormalizer, source and candidate *rls.Release or raw parse strings, expected bool, which matcher to call: releasesMatchDiscovery or releasesMatchDiscoveryWithContext, and optional resolutionMatchContext args), then iterate cases with t.Run(c.name, func(t *testing.T){ t.Parallel(); build svc and inputs per case; call the appropriate method (releasesMatchDiscovery or releasesMatchDiscoveryWithContext) and assert the expected result (use require.True/False or require.NotPanics for the fallback test). Keep unique symbols: Service, releasesMatchDiscovery, releasesMatchDiscoveryWithContext, NewReleaseCache, resolutionMatchContext, and stringutils.NewDefaultNormalizer to locate and wire the existing logic.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@internal/services/crossseed/matching.go`:
- Around line 440-475: discoveryMetadataMatch currently omits CRC/Sum checks
allowing mismatched releases to match; after the version checks in
discoveryMetadataMatch, normalize source.Sum and candidate.Sum using the same
normalizer (e.g., sourceSum := normalizer.Normalize(source.Sum), candidateSum :=
normalizer.Normalize(candidate.Sum)) and enforce the same logic as version: if
(sourceSum == "") != (candidateSum == "") return false; if sourceSum != "" &&
candidateSum != "" && sourceSum != candidateSum return false; this ensures
discovery path respects Sum compatibility like strict matching.
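The parity rule this prompt describes can be sketched standalone. The function name is hypothetical; only the rule (both empty, or both present and equal) comes from the prompt above:

```go
package main

import (
	"fmt"
	"strings"
)

// sumsCompatible applies the parity rule: both CRCs empty is compatible,
// exactly one present is incompatible, both present must match after normalization.
func sumsCompatible(sourceSum, candidateSum string) bool {
	a := strings.ToUpper(strings.TrimSpace(sourceSum))
	b := strings.ToUpper(strings.TrimSpace(candidateSum))
	if (a == "") != (b == "") {
		return false
	}
	return a == "" || a == b
}

func main() {
	fmt.Println(sumsCompatible("ABCD1234", "abcd1234")) // same CRC, case-insensitive
	fmt.Println(sumsCompatible("ABCD1234", ""))         // one-sided CRC rejected
}
```

Note this is stricter than the later proposed fix in this thread, which only rejects when the source carries a CRC; which variant to adopt is a matching-policy decision.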
---
Nitpick comments:
In `@internal/services/crossseed/matching_discovery_test.go`:
- Around line 15-311: Consolidate the many similar TestReleasesMatchDiscovery_*
tests into one or more table-driven tests: define a slice of test cases with
fields (name, svc setup including Service, NewReleaseCache and stringNormalizer
via stringutils.NewDefaultNormalizer, source and candidate *rls.Release or raw
parse strings, expected bool, which matcher to call: releasesMatchDiscovery or
releasesMatchDiscoveryWithContext, and optional resolutionMatchContext args),
then iterate cases with t.Run(c.name, func(t *testing.T){ t.Parallel(); build
svc and inputs per case; call the appropriate method (releasesMatchDiscovery or
releasesMatchDiscoveryWithContext) and assert the expected result (use
require.True/False or require.NotPanics for the fallback test). Keep unique
symbols: Service, releasesMatchDiscovery, releasesMatchDiscoveryWithContext,
NewReleaseCache, resolutionMatchContext, and stringutils.NewDefaultNormalizer to
locate and wire the existing logic.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
Run ID: 08252752-a46c-4013-bb02-eddf827e2429
📒 Files selected for processing (3)
- internal/services/crossseed/matching.go
- internal/services/crossseed/matching_discovery_test.go
- internal/services/crossseed/service.go
🚧 Files skipped from review as they are similar to previous changes (1)
- internal/services/crossseed/service.go
```go
func discoveryMetadataMatch(s *Service, source, candidate *rls.Release, sourceCtx, candidateCtx resolutionMatchContext) bool {
	normalizer := s.stringNormalizer
	if normalizer == nil {
		normalizer = stringutils.DefaultNormalizer
	}

	sourceGroup := normalizer.Normalize(source.Group)
	candidateGroup := normalizer.Normalize(candidate.Group)
	if sourceGroup != "" && candidateGroup != "" {
		if !discoveryReleaseGroupCompatible(normalizer, source.Group, candidate.Group) {
			return false
		}
	} else if !discoveryReleaseGroupCompatible(normalizer, source.Site, candidate.Site) {
		return false
	}

	sourceSource := normalizeSource(source.Source)
	candidateSource := normalizeSource(candidate.Source)
	if !sourcesCompatible(sourceSource, candidateSource) {
		return false
	}

	if !resolutionsCompatible(normalizer, source, candidate, sourceCtx, candidateCtx) {
		return false
	}

	sourceVersion := normalizer.Normalize(source.Version)
	candidateVersion := normalizer.Normalize(candidate.Version)
	if (sourceVersion == "") != (candidateVersion == "") {
		return false
	}
	if sourceVersion != "" && candidateVersion != "" && sourceVersion != candidateVersion {
		return false
	}

	return true
}
```
Discovery metadata path currently skips CRC (Sum) compatibility.
Strict matching rejects differing/missing Sum when source has one, but discovery mode no longer checks it. That widens discovery to files known to be different (especially anime CRC-tagged releases).
💡 Proposed fix
```diff
 func discoveryMetadataMatch(s *Service, source, candidate *rls.Release, sourceCtx, candidateCtx resolutionMatchContext) bool {
 @@
 	sourceSource := normalizeSource(source.Source)
 	candidateSource := normalizeSource(candidate.Source)
 	if !sourcesCompatible(sourceSource, candidateSource) {
 		return false
 	}
+
+	sourceSum := normalizer.Normalize(source.Sum)
+	candidateSum := normalizer.Normalize(candidate.Sum)
+	if sourceSum != "" {
+		if candidateSum == "" || sourceSum != candidateSum {
+			return false
+		}
+	}
 @@
 	if !resolutionsCompatible(normalizer, source, candidate, sourceCtx, candidateCtx) {
 		return false
 	}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@internal/services/crossseed/matching.go` around lines 440 - 475,
discoveryMetadataMatch currently omits CRC/Sum checks allowing mismatched
releases to match; after the version checks in discoveryMetadataMatch, normalize
source.Sum and candidate.Sum using the same normalizer (e.g., sourceSum :=
normalizer.Normalize(source.Sum), candidateSum :=
normalizer.Normalize(candidate.Sum)) and enforce the same logic as version: if
(sourceSum == "") != (candidateSum == "") return false; if sourceSum != "" &&
candidateSum != "" && sourceSum != candidateSum return false; this ensures
discovery path respects Sum compatibility like strict matching.
♻️ Duplicate comments (1)
internal/services/crossseed/matching.go (1)
440-475: ⚠️ Potential issue | 🟠 Major — Add CRC (`Sum`) compatibility to discovery metadata matching.
`discoveryMetadataMatch` currently skips `Sum` checks, so CRC-tagged releases that are known-different can still pass discovery filtering.

💡 Proposed fix

```diff
 func discoveryMetadataMatch(s *Service, source, candidate *rls.Release, sourceCtx, candidateCtx resolutionMatchContext) bool {
 @@
 	if !resolutionsCompatible(normalizer, source, candidate, sourceCtx, candidateCtx) {
 		return false
 	}
+
+	sourceSum := normalizer.Normalize(source.Sum)
+	candidateSum := normalizer.Normalize(candidate.Sum)
+	if sourceSum != "" {
+		if candidateSum == "" || sourceSum != candidateSum {
+			return false
+		}
+	}

 	sourceVersion := normalizer.Normalize(source.Version)
 	candidateVersion := normalizer.Normalize(candidate.Version)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/services/crossseed/matching.go` around lines 440 - 475, discoveryMetadataMatch currently ignores CRCs (Sum), allowing releases tagged with differing CRCs to match; add a Sum compatibility check similar to the Version check: use the existing normalizer (s.stringNormalizer or stringutils.DefaultNormalizer) to normalize source.Sum and candidate.Sum, return false if exactly one is empty, and return false if both non-empty and not equal; place this check after the Version comparison (using the same pattern as sourceVersion/candidateVersion) and reference discoveryMetadataMatch, source.Sum, candidate.Sum, and normalizer.Normalize to locate where to add it.
🧹 Nitpick comments (2)
internal/services/crossseed/service.go (2)
6042-6042: Clamp the Torznab limit to the declared single-page max.
Line 6042 currently makes the request at least 500 and can exceed 500 when `opts.Limit` is larger, which conflicts with `maxSinglePageTorznabSearchLimit` semantics and can inflate query load.

💡 Proposed change

```diff
-	requestLimit := max(opts.Limit, maxSinglePageTorznabSearchLimit)
+	requestLimit := maxSinglePageTorznabSearchLimit
+	if opts.Limit > 0 {
+		requestLimit = min(opts.Limit, maxSinglePageTorznabSearchLimit)
+	}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/services/crossseed/service.go` at line 6042, The requestLimit computation uses max(opts.Limit, maxSinglePageTorznabSearchLimit) which forces a floor of maxSinglePageTorznabSearchLimit and allows values above it; change it to clamp the value by taking the minimum so requestLimit = min(opts.Limit, maxSinglePageTorznabSearchLimit) (i.e., replace max with min) so requests never exceed maxSinglePageTorznabSearchLimit; update the assignment that defines requestLimit and ensure any dependent logic still treats it as the per-page cap.
6405-6409: Pass candidate raw title context into discovery matching.
You already pass source context; passing `res.Title` as the candidate `rawName` helps context-aware resolution inference when parser output is sparse/noisy.

💡 Proposed change

```diff
 	sourceResolutionCtx := resolutionMatchContext{
 		discLayout: sourceInfo.DiscLayout,
 		discMarker: sourceInfo.DiscMarker,
 		rawName:    sourceTorrent.Name,
 	}
 @@
 	candidateRelease := s.releaseCache.Parse(res.Title)
-	if !s.releasesMatchDiscoveryWithContext(searchRelease, candidateRelease, opts.FindIndividualEpisodes, sourceResolutionCtx, resolutionMatchContext{}) {
+	candidateResolutionCtx := resolutionMatchContext{
+		rawName: res.Title,
+	}
+	if !s.releasesMatchDiscoveryWithContext(searchRelease, candidateRelease, opts.FindIndividualEpisodes, sourceResolutionCtx, candidateResolutionCtx) {
 		releaseFilteredCount++
 		continue
 	}
```

Also applies to: 6424-6424
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/services/crossseed/service.go` around lines 6405 - 6409, When building the candidate resolution context for discovery matching, populate resolutionMatchContext.rawName with the candidate's raw title (res.Title) instead of leaving it empty or using other fields; update the candidateResolutionCtx construction (the resolutionMatchContext used for the candidate match near where sourceResolutionCtx is created) to set rawName: res.Title so the matcher gets candidate raw-title context—apply the same change at the other candidate context creation site referenced nearby.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@internal/services/crossseed/matching.go`:
- Around line 440-475: discoveryMetadataMatch currently ignores CRCs (Sum),
allowing releases tagged with differing CRCs to match; add a Sum compatibility
check similar to the Version check: use the existing normalizer
(s.stringNormalizer or stringutils.DefaultNormalizer) to normalize source.Sum
and candidate.Sum, return false if exactly one is empty, and return false if
both non-empty and not equal; place this check after the Version comparison
(using the same pattern as sourceVersion/candidateVersion) and reference
discoveryMetadataMatch, source.Sum, candidate.Sum, and normalizer.Normalize to
locate where to add it.
---
Nitpick comments:
In `@internal/services/crossseed/service.go`:
- Line 6042: The requestLimit computation uses max(opts.Limit,
maxSinglePageTorznabSearchLimit) which forces a floor of
maxSinglePageTorznabSearchLimit and allows values above it; change it to clamp
the value by taking the minimum so requestLimit = min(opts.Limit,
maxSinglePageTorznabSearchLimit) (i.e., replace max with min) so requests never
exceed maxSinglePageTorznabSearchLimit; update the assignment that defines
requestLimit and ensure any dependent logic still treats it as the per-page cap.
- Around line 6405-6409: When building the candidate resolution context for
discovery matching, populate resolutionMatchContext.rawName with the candidate's
raw title (res.Title) instead of leaving it empty or using other fields; update
the candidateResolutionCtx construction (the resolutionMatchContext used for the
candidate match near where sourceResolutionCtx is created) to set rawName:
res.Title so the matcher gets candidate raw-title context—apply the same change
at the other candidate context creation site referenced nearby.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
Run ID: b309b347-5457-4c8e-9308-d219cbd02eed
📒 Files selected for processing (3)
- internal/services/crossseed/matching.go
- internal/services/crossseed/matching_discovery_test.go
- internal/services/crossseed/service.go
🚧 Files skipped from review as they are similar to previous changes (1)
- internal/services/crossseed/matching_discovery_test.go
Later stages correctly check disc layouts and stop false matching. So this PR is the correct solution imo, since it correctly passes matching results through this stage when the sourceHDR is missing (as is often the case with BDMV torrent names) but the candidateHDR has the correct HDR type (as is often the case when content is correctly titled on site). The only downside is more torrent fetching, but the scope is limited to BDMV when the torrent name lacks HDR, and fits the intent of the PR.
Split cross-seed release matching into strict identity checks and looser discovery checks. RSS automation, webhook prechecks, and remote search candidate filtering now tolerate missing groups and some title noise without widening dedup, local match detection, or indexer filtering.
Summary by CodeRabbit
Refactor
Tests