ZRPC is a lightweight, high-performance RPC framework inspired by gRPC and rpcx. It provides a simple API while maintaining excellent performance and reliability.
- High-performance TCP long connection communication
- Protocol Buffer serialization support
- Multiple service discovery methods (Memory, ETCD)
- Client-side load balancing
- Connection pool reuse
- Efficient worker pool design
- Heartbeat mechanism
- Middleware support
- TLS security transport
- Comprehensive metrics
- Connection multiplexing (allowing a single TCP connection to handle multiple concurrent requests)
ZRPC uses a custom binary protocol with the following header design:
+-------+---------+----------+-----------+--------+
| Magic | Version | Msg Type | Comp Type | Seq ID |
+-------+---------+----------+-----------+--------+
|  1B   |   1B    |    1B    |    1B     |   8B   |
+-------+---------+----------+-----------+--------+
- Magic: Magic number for ZRPC protocol identification
- Version: Protocol version number
- Message Type: Message type (request/response)
- Compress Type: Compression type
- Sequence ID: Request sequence number
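The fixed 12-byte layout above makes the header trivial to encode and decode. The sketch below shows one way to do that with `encoding/binary`; the magic value `0x5A`, the big-endian byte order, and the struct shape are assumptions for illustration, not ZRPC's actual constants.

```go
package main

import (
	"encoding/binary"
	"fmt"
)

const headerSize = 12 // 1 + 1 + 1 + 1 + 8 bytes

// Header mirrors the frame header described above.
type Header struct {
	Magic        byte
	Version      byte
	MsgType      byte // request or response
	CompressType byte
	SeqID        uint64
}

// Encode packs the header into a fixed 12-byte buffer (big-endian SeqID).
func (h Header) Encode() [headerSize]byte {
	var b [headerSize]byte
	b[0], b[1], b[2], b[3] = h.Magic, h.Version, h.MsgType, h.CompressType
	binary.BigEndian.PutUint64(b[4:], h.SeqID)
	return b
}

// DecodeHeader parses a 12-byte buffer back into a Header.
func DecodeHeader(b [headerSize]byte) Header {
	return Header{
		Magic:        b[0],
		Version:      b[1],
		MsgType:      b[2],
		CompressType: b[3],
		SeqID:        binary.BigEndian.Uint64(b[4:]),
	}
}

func main() {
	h := Header{Magic: 0x5A, Version: 1, CompressType: 1, SeqID: 42}
	round := DecodeHeader(h.Encode())
	fmt.Println(round.SeqID == 42 && round.Magic == 0x5A) // true
}
```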
Uses Protocol Buffers for serialization by default, with an extensible Codec interface:
type Codec interface {
	Marshal(v interface{}) ([]byte, error)
	Unmarshal(data []byte, v interface{}) error
}
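Any serializer satisfying this interface can be plugged in. As a sketch, here is a JSON codec implementing the interface; the default remains Protocol Buffers, and `JSONCodec` is just an illustrative extension, not part of ZRPC.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Codec is the serialization interface from the section above.
type Codec interface {
	Marshal(v interface{}) ([]byte, error)
	Unmarshal(data []byte, v interface{}) error
}

// JSONCodec shows how an alternative serializer can satisfy Codec.
type JSONCodec struct{}

func (JSONCodec) Marshal(v interface{}) ([]byte, error)      { return json.Marshal(v) }
func (JSONCodec) Unmarshal(data []byte, v interface{}) error { return json.Unmarshal(data, v) }

func main() {
	var c Codec = JSONCodec{}
	data, _ := c.Marshal(map[string]int{"a": 1})
	var out map[string]int
	_ = c.Unmarshal(data, &out)
	fmt.Println(out["a"]) // 1
}
```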
Implements a two-tier queue connection pool:
- Hot Queue:
- Channel-based implementation
- For fast connection acquisition
- Size is half of the total pool size
- Cold Queue:
- Array-based implementation
- Serves as backup connection pool
- Supports connection expansion
Features:
- Connection prewarming
- Automatic retry
- Idle connection recycling
- Maximum connection limit
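The hot/cold split can be sketched as a buffered channel backed by a mutex-guarded slice. This is a minimal illustration of the acquisition order described above (hot queue first, cold queue as backup), with a placeholder `Conn` type instead of real TCP connections; names and sizing are assumptions, not ZRPC's internals.

```go
package main

import (
	"fmt"
	"sync"
)

// Conn stands in for a pooled TCP connection.
type Conn struct{ id int }

// TwoTierPool pairs a channel-based hot queue (fast path) with a
// slice-based cold queue that serves as backup and absorbs overflow.
type TwoTierPool struct {
	hot  chan *Conn // hot queue: half of total capacity
	mu   sync.Mutex
	cold []*Conn // cold queue: backup connections
}

func NewTwoTierPool(size int) *TwoTierPool {
	return &TwoTierPool{hot: make(chan *Conn, size/2)}
}

// Get tries the hot queue without blocking, then falls back to the cold queue.
func (p *TwoTierPool) Get() (*Conn, bool) {
	select {
	case c := <-p.hot:
		return c, true
	default:
	}
	p.mu.Lock()
	defer p.mu.Unlock()
	if n := len(p.cold); n > 0 {
		c := p.cold[n-1]
		p.cold = p.cold[:n-1]
		return c, true
	}
	return nil, false // a real pool would dial a new connection here
}

// Put returns a connection, preferring the hot queue; overflow goes cold.
func (p *TwoTierPool) Put(c *Conn) {
	select {
	case p.hot <- c:
	default:
		p.mu.Lock()
		p.cold = append(p.cold, c)
		p.mu.Unlock()
	}
}

func main() {
	p := NewTwoTierPool(4)
	p.Put(&Conn{id: 1})
	c, ok := p.Get()
	fmt.Println(ok, c.id) // true 1
}
```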
Supports multiple load balancing strategies:
- Random
- RoundRobin
- Custom extension support
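Custom strategies typically just implement a small picker interface. The sketch below shows what a RoundRobin balancer can look like; the `Balancer` interface shape here is an assumption for illustration, not ZRPC's exact API.

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// Balancer picks one address from the discovered service instances.
type Balancer interface {
	Pick(addrs []string) string
}

// RoundRobin cycles through addresses using an atomic counter,
// so it is safe to share across goroutines.
type RoundRobin struct{ next uint64 }

func (r *RoundRobin) Pick(addrs []string) string {
	n := atomic.AddUint64(&r.next, 1)
	return addrs[(n-1)%uint64(len(addrs))]
}

func main() {
	var b Balancer = &RoundRobin{}
	addrs := []string{"10.0.0.1:8080", "10.0.0.2:8080"}
	fmt.Println(b.Pick(addrs), b.Pick(addrs), b.Pick(addrs))
	// 10.0.0.1:8080 10.0.0.2:8080 10.0.0.1:8080
}
```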
Implements an efficient dynamic worker pool with:
- Adaptive Scaling:
- Dynamic expansion and contraction
- Automatic adjustment based on system load
- Minimum and maximum worker count limits
- Monitoring Metrics:
- Queue utilization
- Idle worker ratio
- CPU and memory usage
- Request latency statistics
- Optimization Strategies:
- Quick Scale Up: expands the worker count by 20% in a single step under high concurrency
- Smooth Contraction: shrinks the pool gradually based on load metrics
- Stack Auto-Recovery: workers are reset after 65536 requests to prevent unbounded goroutine stack growth
ZRPC provides a complete middleware (interceptor) mechanism:
- Server Middleware:
type ServerMiddleware func(ctx context.Context, req interface{}, info *ServerInfo, handler Handler) (resp interface{}, err error)
- Client Middleware:
type ClientMiddleware func(ctx context.Context, method string, req, reply interface{}, cc *Client, invoker Invoker) error
- Memory Optimization:
- Object reuse with sync.Pool
- Two-tier buffer design
- Efficient memory allocation
- Concurrency Optimization:
- Goroutine pool reuse
- Connection pool management
- Request batching
- Stack auto-recovery
- Network Optimization:
- Long connection reuse
- Heartbeat mechanism
- Compression support
- Adaptive buffering
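The `sync.Pool` object-reuse pattern mentioned above can be sketched as follows: encode buffers are borrowed, reset, and returned instead of being allocated per request. `encodeFrame` is a hypothetical helper for illustration only.

```go
package main

import (
	"bytes"
	"fmt"
	"sync"
)

// bufPool reuses bytes.Buffer objects across requests, cutting
// allocations and GC pressure on the encode path.
var bufPool = sync.Pool{
	New: func() interface{} { return new(bytes.Buffer) },
}

func encodeFrame(payload []byte) []byte {
	buf := bufPool.Get().(*bytes.Buffer)
	defer func() {
		buf.Reset() // return a clean buffer to the pool
		bufPool.Put(buf)
	}()
	buf.Write(payload) // a real frame would prepend the 12-byte header
	out := make([]byte, buf.Len())
	copy(out, buf.Bytes()) // copy out: the buffer is reused after Put
	return out
}

func main() {
	fmt.Println(string(encodeFrame([]byte("ping")))) // ping
}
```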
import "github.com/crazyfrankie/zrpc"
Then run:
go mod tidy
Server:
server := zrpc.NewServer(
	zrpc.WithWorkerPool(100),
	zrpc.WithTLSConfig(tlsConfig),
)
pb.RegisterYourServiceServer(server, &YourService{})
server.Serve("tcp", ":8080")
Client:
client := zrpc.NewClient("localhost:8080",
	zrpc.DialWithMaxPoolSize(100),
	zrpc.DialWithHeartbeat(30*time.Second),
)
defer client.Close()
c := pb.NewYourServiceClient(client)
resp, err := c.YourMethod(ctx, req)
- Additional service discovery methods
- Circuit breaking and rate limiting
- Tracing and monitoring integration