
SSR deadlocks when 2+ Suspense/Transition boundaries share 2+ Resources via .get() #4578

@alilee

Describe the bug

When two or more Suspense/Transition boundaries read the same two or more Resources using the reactive .get() pattern, SSR streaming deadlocks at certain Tokio worker_threads counts. The initial HTML containing the fallbacks is sent, but the resolved <template> chunks are never flushed, and the response hangs until it times out.

The workaround of awaiting the Resources as Futures inside Suspend::new, i.e. {move || Suspend::new(async move { r.await })}, resolves correctly in all configurations.
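A minimal sketch of the two shapes, to make the difference concrete. This is not the attached repro code: fetch_a, fetch_b, and the component names are hypothetical stand-ins, and the actual main.rs may differ.

```rust
use leptos::prelude::*;

// Hypothetical data sources standing in for whatever the real Resources load.
async fn fetch_a() -> String { "a".to_string() }
async fn fetch_b() -> String { "b".to_string() }

// Failing shape (/broken): two Suspense boundaries each read the same two
// Resources reactively via .get(). Under SSR streaming this hangs with 2-3
// Tokio worker threads.
#[component]
pub fn Broken() -> impl IntoView {
    let a = Resource::new(|| (), |_| fetch_a());
    let b = Resource::new(|| (), |_| fetch_b());

    view! {
        <Suspense fallback=|| view! { <p>"Loading..."</p> }>
            {move || match (a.get(), b.get()) {
                (Some(a), Some(b)) => Some(view! { <p>"RESOLVED_RESOURCES " {a} " " {b}</p> }),
                _ => None,
            }}
        </Suspense>
        <Suspense fallback=|| view! { <p>"Loading..."</p> }>
            {move || match (a.get(), b.get()) {
                (Some(a), Some(b)) => Some(view! { <p>"RESOLVED_RESOURCES " {a} " " {b}</p> }),
                _ => None,
            }}
        </Suspense>
    }
}

// Working shape (/works): the same two Resources, but awaited sequentially
// inside Suspend::new; this resolves in every thread configuration tested.
#[component]
pub fn Works() -> impl IntoView {
    let a = Resource::new(|| (), |_| fetch_a());
    let b = Resource::new(|| (), |_| fetch_b());

    view! {
        <Suspense fallback=|| view! { <p>"Loading..."</p> }>
            {move || Suspend::new(async move {
                let (a, b) = (a.await, b.await);
                view! { <p>"RESOLVED_RESOURCES " {a} " " {b}</p> }
            })}
        </Suspense>
        <Suspense fallback=|| view! { <p>"Loading..."</p> }>
            {move || Suspend::new(async move {
                let (a, b) = (a.await, b.await);
                view! { <p>"RESOLVED_RESOURCES " {a} " " {b}</p> }
            })}
        </Suspense>
    }
}
```

Both routes render two boundaries over the same two Resources; only the way the boundaries read them differs.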

Leptos Dependencies

leptos = { version = "0.8.15" }
leptos_router = { version = "0.8" }
leptos_axum = { version = "0.8", optional = true }
axum = { version = "0.8", optional = true }
tokio = { version = "1", features = ["rt-multi-thread", "macros", "net"], optional = true }

To Reproduce

cargo run --features ssr

Then in another terminal:

# This deadlocks — Suspense boundaries never resolve:
curl -s -m 5 http://localhost:3000/broken | grep -c 'RESOLVED_RESOURCES'
# Output: 0 (curl times out, no resolved resources in response)

# This works — all boundaries resolve immediately:
curl -s -m 5 http://localhost:3000/works | grep -c 'RESOLVED_RESOURCES'
# Output: 1
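Whether the deadlock appears depends on the Tokio worker thread count (see Additional context). A minimal way to pin it while testing, assuming the runtime is configured through the tokio::main attribute (the attached main.rs may set this up differently):

```rust
// Pin the worker thread count to exercise the deadlocking case;
// 2-3 threads deadlock, 1 and 4+ do not (see Additional context below).
#[tokio::main(flavor = "multi_thread", worker_threads = 2)]
async fn main() {
    // ...build the Axum router and serve the Leptos app as usual...
}
```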

Next Steps

  • I will make a PR
  • I would like to make a PR, but need help getting started
  • I want someone else to take the time to fix this
  • This is a low priority for me and is just shared for your information

Additional context

This is a deadlock in the SSR streaming renderer, not a race condition. The dependence on worker-thread count suggests that the two Suspense boundaries' reactive subscriptions create a circular wait when their tasks are distributed across 2-3 Tokio worker threads:

  • With 1 thread (or current_thread): all tasks run cooperatively on one thread, so no cross-thread contention is possible — tasks are polled round-robin and all complete.
  • With 2-3 threads: tasks from the two boundaries get distributed across threads and create a circular dependency — each thread holds a resource that the other needs, causing a classic deadlock.
  • With 4+ threads: enough worker threads are available that the scheduler can make progress despite the contention — the circular dependency doesn't form.

The Suspend::new(async move { r.await }) workaround serializes Resource access within each boundary (sequential awaits within a single async block), breaking the circular dependency that causes the deadlock.

On AWS Lambda (128 MB memory, the default, with 2 visible cores), this manifested as intermittent 500 errors (~40-60% failure rate) rather than a consistent deadlock, likely because Lambda's heavy CPU throttling at that memory size (~0.07 vCPU) introduces enough timing variability to sometimes break the deadlock. Locally, with worker_threads = 2 and full CPU, it deadlocks 100% of the time.

Files

ISSUE.md
Cargo.toml
main.rs
