@aydarng thank you for introducing this discussion. I agree with a lot of your analysis. Some important wordings/PoV changes:
---
To determine the optimal scaling strategy for the vlei-verifier, I recommend a structured approach divided into Foundation Steps and Solution Steps. The foundation steps focus on gathering critical information and understanding the application's limits, while the solution steps outline actionable strategies to address scaling challenges.
## Foundation Steps

### 1. Define Non-Functional Requirements

Before implementing any scaling solution, it's essential to establish clear non-functional requirements for the vlei-verifier in production. This includes:

- **Expected average RPS:** What is the anticipated requests per second (RPS) the application needs to handle?
- **Total number of users:** How many users are expected to interact with the vlei-verifier in production?
- **Data retention period:** How long should presented credential data and account data be stored in the database? (Currently set to 20 minutes.)

These metrics will help estimate:

- RAM and CPU requirements for the environment.
- Storage size needed for the LMDB database.
Example:
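A back-of-envelope sizing sketch in Python. All input figures (RPS, record size) are illustrative assumptions, not measured values; only the 20-minute retention period comes from the current configuration:

```python
# Back-of-envelope LMDB sizing from the metrics above.
# avg_rps and record_bytes are illustrative assumptions -- replace them
# with the real non-functional requirements once they are defined.
avg_rps = 50                  # assumed average requests per second
record_bytes = 4 * 1024       # assumed size of one stored credential/account record
retention_seconds = 20 * 60   # current retention period: 20 minutes

# Records alive at any moment = arrival rate * retention window (Little's law).
live_records = avg_rps * retention_seconds
storage_bytes = live_records * record_bytes

print(f"live records: {live_records}")              # 60000
print(f"storage: {storage_bytes / 2**20:.0f} MiB")  # 234 MiB
```

Plugging in the real RPS and a measured per-record size gives a first estimate of the LMDB volume size and helps pick pod memory requests.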
### 2. Conduct Load Testing

To understand the vlei-verifier's performance limits, conduct load testing:

**Objective:**
- Determine the maximum RPS the application can handle without performance degradation.
- Measure request-response timeouts under varying loads.

**Metrics to monitor:**
- CPU and memory usage.
- Response times.
- Error rates under high load.
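A minimal, self-contained load-test sketch of the idea (in practice a dedicated tool such as k6 or Locust would be used). The local stub server below stands in for a vlei-verifier instance; concurrency and request counts are placeholders:

```python
# Minimal load-test sketch: fire N concurrent requests at an endpoint and
# report error count and p95 latency. The stub http.server stands in for a
# vlei-verifier instance; point BASE_URL at a real deployment instead.
import http.server
import threading
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

class _StubHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")
    def log_message(self, *args):  # silence per-request logging
        pass

server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), _StubHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
BASE_URL = f"http://127.0.0.1:{server.server_address[1]}/"

def probe(_):
    """Issue one request, returning (success, latency_seconds)."""
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(BASE_URL, timeout=5) as resp:
            ok = resp.status == 200
    except OSError:
        ok = False
    return ok, time.perf_counter() - start

# 200 requests with 20 concurrent workers -- placeholder load shape.
with ThreadPoolExecutor(max_workers=20) as pool:
    results = list(pool.map(probe, range(200)))
server.shutdown()

errors = sum(1 for ok, _ in results if not ok)
latencies = sorted(lat for _, lat in results)
print(f"requests: {len(results)}, errors: {errors}")
print(f"p95 latency: {latencies[int(len(latencies) * 0.95)] * 1000:.1f} ms")
```

Ramping `max_workers` and the request count upward while watching error rate and p95 latency is one way to find the saturation point the foundation step asks for.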
## Solution Steps

Based on the findings from the foundation steps, I propose two distinct solutions to scale the vlei-verifier.
### Solution 1: Single Pod Deployment with Vertical Scaling

This solution works within the constraints of LMDB by deploying a single instance of the vlei-verifier and scaling it vertically.

**Steps:**

1. **Implement single pod deployment**
   - Deploy the vlei-verifier as a StatefulSet or Deployment with a replica count of 1.
   - Use a ReadWriteOnce (RWO) Persistent Volume (PV) for the LMDB database file.
2. **Configure a load balancer**
   - Create a Kubernetes Service of type `LoadBalancer` for the vlei-verifier application. The AWS load balancer (ALB or NLB) will route traffic to the single vlei-verifier pod.
   - Configure health checks to ensure the pod remains healthy and responsive.
3. **Optimize vertical scaling**
   - Scale the application vertically by increasing the CPU and memory resources allocated to the pod.
   - Use larger instance types for the worker node hosting the pod to ensure sufficient resources.
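The steps above might look like the following manifest sketch. All names, the image tag, ports, and resource figures are illustrative placeholders, not the project's actual configuration:

```yaml
# Sketch of Solution 1: one replica, RWO volume for LMDB, LoadBalancer Service.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: vlei-verifier
spec:
  serviceName: vlei-verifier
  replicas: 1                         # LMDB constraint: a single writer pod
  selector:
    matchLabels:
      app: vlei-verifier
  template:
    metadata:
      labels:
        app: vlei-verifier
    spec:
      containers:
        - name: vlei-verifier
          image: vlei-verifier:latest # placeholder image
          resources:                  # vertical scaling knob: raise these
            requests: {cpu: "1", memory: 2Gi}
            limits: {cpu: "2", memory: 4Gi}
          volumeMounts:
            - name: lmdb-data
              mountPath: /data
  volumeClaimTemplates:
    - metadata:
        name: lmdb-data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi             # sized from the retention estimate
---
apiVersion: v1
kind: Service
metadata:
  name: vlei-verifier
spec:
  type: LoadBalancer                  # provisioned as an AWS NLB/ALB
  selector:
    app: vlei-verifier
  ports:
    - port: 80
      targetPort: 7676                # placeholder container port
```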
### Solution 2: Horizontal Scaling with vlei-router (or vlei-load-balancer)

This solution introduces an additional service, vlei-router, which enables horizontal scaling by routing traffic across multiple vlei-verifier instances, each with its own LMDB.

**Steps:**

1. **Deploy multiple vlei-verifier instances**
2. **Introduce vlei-router**
   - Deploy a new service, vlei-router, which acts as a load balancer and traffic router.
   - The vlei-router will route incoming requests to the appropriate vlei-verifier instance based on the requester's AID (or SAID).
3. **Configure a load balancer for vlei-router**
   - Create a Kubernetes Service of type `LoadBalancer` for the vlei-router. The AWS load balancer (ALB or NLB) will route traffic to the vlei-router, which will then forward requests to the correct vlei-verifier instance.
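The AID-based routing could be as simple as a deterministic hash over the requester's AID, so the same AID always lands on the same verifier instance (and therefore the same LMDB). A sketch, with hypothetical instance URLs; a real vlei-router would also need health checks and a re-sharding story when instances are added or removed:

```python
# Deterministic AID -> instance routing sketch for a hypothetical vlei-router.
# The instance URLs are placeholders.
import hashlib

VERIFIER_INSTANCES = [
    "http://vlei-verifier-0:7676",
    "http://vlei-verifier-1:7676",
    "http://vlei-verifier-2:7676",
]

def route(aid: str) -> str:
    """Map an AID (or SAID) to a verifier instance, stably across requests."""
    digest = hashlib.sha256(aid.encode()).digest()
    index = int.from_bytes(digest[:8], "big") % len(VERIFIER_INSTANCES)
    return VERIFIER_INSTANCES[index]

# The same AID always routes to the same instance:
assert route("EABC123") == route("EABC123")
```

Pinning each AID to one instance keeps all of a requester's presented-credential state in a single LMDB, which is what makes the per-instance databases safe to run in parallel.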
## Comparison of Solutions

If load testing shows that a single vlei-verifier instance meets the non-functional requirements, we can use the first approach, since vertical scaling will be sufficient in that case. Otherwise, we will have to pursue the second approach, which requires implementing the vlei-router.
What do you think @2byrds, @ronakseth96 ?