
!feat: Allow Multiple Core gRPC Connections with Load Balancing #4137


Open

wants to merge 7 commits into main
Conversation

smuu
Member

@smuu smuu commented Feb 27, 2025

Multiple Core gRPC Connections with Client-Side Load Balancing

Motivation

Celestia nodes currently support only a single core gRPC connection, creating a potential single point of failure. This PR enhances reliability and performance by supporting multiple core endpoints with client-side load balancing.

Key Benefits

  • Improved Reliability: Node operations continue if some endpoints fail
  • Load Distribution: Requests are balanced across multiple endpoints
  • Enhanced Performance: Transaction submission and block fetching are distributed across endpoints for better throughput
  • Simple Configuration: No external load balancers needed

Implementation

  • Added support for multiple endpoints in configuration
  • Implemented round-robin load balancing for block fetching and transaction submission
  • Added flexible endpoint configuration with TLS and authentication options (a config sketch follows this list)
  • Maintained backward compatibility with existing single-endpoint configuration
  • Added IPv6 support
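
As a rough illustration of the flexible endpoint configuration, the sketch below shows one way the new entries could sit next to the legacy single-endpoint fields. The type and field names are assumptions made for this sketch, not the PR's actual Go types.

package core

// EndpointConfig is an illustrative shape for one --core.endpoints entry.
type EndpointConfig struct {
    Host       string // IPv4, IPv6, or DNS name, e.g. "127.0.0.1"
    Port       string // gRPC port, e.g. "9090"
    TLSEnabled bool   // dial with transport credentials when true
    XTokenPath string // path to an auth token file; empty if unused
}

// Config keeps the legacy single-endpoint fields alongside the new list,
// which is one way to preserve backward compatibility.
type Config struct {
    IP        string           // legacy --core.ip
    GRPCPort  string           // legacy --core.grpc.port
    Endpoints []EndpointConfig // new multi-endpoint entries
}

Keeping the legacy fields untouched lets existing --core.ip/--core.grpc.port setups continue to work, while the Endpoints list carries the new per-endpoint TLS and token options.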

Usage

Single endpoint (backward compatible):

celestia bridge start --core.ip=127.0.0.1 --core.grpc.port=9090

Multiple endpoints:

celestia bridge start \
  --core.endpoints=127.0.0.1:9090 \
  --core.endpoints=127.0.0.2:9090 \
  --core.endpoints=example.com:9090:tls=true:xtoken=/path/to/token
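
The last entry above shows the extended host:port:tls=...:xtoken=... syntax. As a sketch only, splitting such a value could look like the following; the option names are taken from the example above, the PR's actual parser may differ, and bare IPv6 literals (which contain colons) would need separate handling.

package main

import (
    "fmt"
    "strings"
)

// parseEndpoint splits "host:port[:tls=true][:xtoken=<path>]" as shown in the
// example above. Sketch only: it does not handle unbracketed IPv6 literals.
func parseEndpoint(s string) (host, port string, tls bool, xtoken string, err error) {
    parts := strings.Split(s, ":")
    if len(parts) < 2 {
        return "", "", false, "", fmt.Errorf("endpoint %q: expected at least host:port", s)
    }
    host, port = parts[0], parts[1]
    for _, opt := range parts[2:] {
        switch {
        case opt == "tls=true":
            tls = true
        case strings.HasPrefix(opt, "xtoken="):
            xtoken = strings.TrimPrefix(opt, "xtoken=")
        }
    }
    return host, port, tls, xtoken, nil
}

For the example endpoint, this would yield host "example.com", port "9090", tls true, and xtoken "/path/to/token".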

This enhancement allows Celestia nodes to keep operating even if some core endpoints fail, while balancing load across the remaining endpoints for better performance and throughput.

@github-actions github-actions bot added the external (Issues created by non node team members) and kind:break! (Attached to breaking PRs) labels Feb 27, 2025
@smuu smuu changed the title from "feat: allow to configure multiple grpc connections and load balance" to "feat: Allow Multiple Core gRPC Connections with Load Balancing" Feb 27, 2025
Signed-off-by: Smuu <[email protected]>
@smuu smuu marked this pull request as draft February 27, 2025 09:59
core/fetcher.go Outdated
next := (current + 1) % int32(len(f.clients))

// Update the current client atomically to avoid race conditions
atomic.StoreInt32(&f.currentClient, next)
Contributor

Ideally this should be a CAS loop
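
For reference, a minimal sketch of the suggested compare-and-swap loop, assuming currentClient is an int32 index into a clients slice (names are illustrative, not the PR's exact fields):

package main

import "sync/atomic"

// fetcher is a stand-in for the PR's block fetcher.
type fetcher struct {
    clients       []string // placeholder for the per-endpoint gRPC clients
    currentClient int32
}

// nextClient rotates the round-robin index with a CAS loop instead of a plain
// atomic.StoreInt32, so a concurrent advance is never silently overwritten.
func (f *fetcher) nextClient() int {
    for {
        current := atomic.LoadInt32(&f.currentClient)
        next := (current + 1) % int32(len(f.clients))
        if atomic.CompareAndSwapInt32(&f.currentClient, current, next) {
            return int(next)
        }
        // Another goroutine advanced the index first; reload and retry.
    }
}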

@Wondertan
Member

AI cooked in here :)

Is there a plan to add load balancing for transaction submission? While having it for block fetching is useful, it's not the weakest link our users are struggling with.

@smuu smuu changed the title from "feat: Allow Multiple Core gRPC Connections with Load Balancing" to "!feat: Allow Multiple Core gRPC Connections with Load Balancing" Feb 27, 2025
@smuu smuu marked this pull request as ready for review February 27, 2025 14:36
Signed-off-by: Smuu <[email protected]>
@smuu
Member Author

smuu commented Mar 3, 2025

AI cooked in here :)

Is there a plan to add load balancing for transaction submission? While having it for block fetching is useful, it's not the weakest link our users are struggling with.

Added a proposal for how the tx submission load balancing could look.
