At Softnetics Team, we specialize in building software using TypeScript. Data validation is a crucial part of our work, as we need to verify and convert user input into the correct format. Initially, we chose Zod as our schema validation library. However, as our applications grew larger, we noticed high CPU and memory usage during TypeScript compilation, particularly in development. Our research revealed that these performance issues stemmed from the validation library itself. This led us to conduct a benchmark study comparing CPU and memory usage across different schema validation libraries.
Warning
The summary will be updated in the future (due to the upcoming release of Zod 4).
- NOTE
- Environment
- Test Cases Explanation
- Running the benchmark
- Benchmark Result
- Summary
- Add more test cases or candidates
- Run the benchmark yourself
| Component | Version | Note |
|---|---|---|
| OS | ubuntu-22.04 (4 vCPU, 16 GB RAM) | GitHub Actions |
| Node.js | 20.16.0 | |
| Bun | 1.1.10 | |
| Pnpm | 9.14.2 | |
| Python | 3.13.0 | |
| Library | Version |
|---|---|
| Zod | 3.24.1 |
| Zod 4 | 4.0.0-beta.20250414T061543 |
| typebox | 0.34.14 |
| arktype | 2.0.3 |
| valibot | 1.0.0-beta.14 |
| yup | 1.6.1 |
| @effect/schema | 0.75.5 |
For this benchmark, we generated test cases using typebox-codegen. Each test case was carefully crafted with specific objectives and varying levels of complexity. To ensure a fair and meaningful comparison, we verified that all libraries produced identical TypeScript type outputs. You can find all the test cases in the samples directory.
The "simple" test case is a basic schema: a single object containing a string field and an array field.
The "extend" test case is a schema that uses the "extend" feature of each library.
The "union" test case is a schema that uses the "union" feature of each library, in particular discriminated unions.
The "complex" test case is a schema that combines all of the above with common TypeScript type helpers, e.g. `Extract`, `Omit`, `Union`, `Extend`, etc.
The "transform" test case uses the `transform` feature of each library and infers both the Input and Output types. As of now, only Zod and Valibot support this feature.
Further information about the data preparation can be found in the Samples README.md.
The benchmark is run by GitHub Actions with the following steps.

- Generate the test cases for each library using typebox-codegen.
- Run `tsc --extendedDiagnostics` on each generated file to collect the semantic diagnostics, e.g. memory usage, compile time, etc. After this step, the results are written to the `./samples/__benchmarks__` directory, which contains the output of the `tsc --extendedDiagnostics` command. For more information about the output, refer to the TypeScript documentation.
- Read the results and generate the report using Pandas and Matplotlib.
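As an illustration of the second step's output, the sketch below (not the repository's actual report code) extracts the metrics this benchmark cares about from the kind of lines `tsc --extendedDiagnostics` prints; the sample text mimics the real output format:

```typescript
// Sample mimicking `tsc --extendedDiagnostics` output.
const sample = `
Files:            105
Types:          23977
Parse time:      0.59s
Check time:      1.26s
Memory used: 113789K
`;

// Parse each "Name: value" diagnostic line into a numeric metrics map,
// stripping the trailing "s" (seconds) or "K" (kilobytes) unit suffix.
function parseDiagnostics(output: string): Record<string, number> {
  const metrics: Record<string, number> = {};
  for (const line of output.split("\n")) {
    const match = line.match(/^([A-Za-z ]+):\s+([\d.]+)([sK]?)$/);
    if (match) metrics[match[1]] = parseFloat(match[2]);
  }
  return metrics;
}

const m = parseDiagnostics(sample);
// m["Check time"] === 1.26, m["Memory used"] === 113789
```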
The TypeScript candidate performs the best across all metrics, as it leverages native TypeScript types. However, it is not a schema validation library. It can serve as a reference point for comparing the performance of other libraries.
Check time refers to the duration taken by the TypeScript compiler to check and infer the types of the program. A lower check time indicates better performance and an enhanced developer experience.
Zod exhibits the highest check time, approximately twice that of the second highest, Effect, in all test cases. Both Valibot and Effect show similar check times, but they are about twice as long as those of the other libraries.
Parse time is the time required by the TypeScript compiler to generate Abstract Syntax Trees (ASTs) for the program. A lower parse time generally indicates simpler code, which improves editor performance and enhances the developer experience.
As shown, Effect consistently has the highest parse time across all test cases, suggesting that it has the most complex type system. The other libraries show no significant difference in parse time compared to TypeScript, which serves as the baseline.
Memory usage refers to the amount of memory consumed by the TypeScript compiler when executing the `tsc` command. This metric is crucial for developer experience.
As depicted, most libraries exhibit similar memory usage across all test cases, with the exception of Effect, which consistently uses the most memory. The other libraries do not show a significant difference in memory usage when compared to TypeScript, the baseline.
The number of types indicates how many types are generated by the schema file. A lower number of types leads to better performance, as the TypeScript compiler needs to check fewer types and generate smaller ASTs.
Interestingly, Zod stands out as an outlier in this metric, while the other libraries exhibit little variation in the number of types, compared to TypeScript.
The best schema validation libraries in terms of performance are Yup and io-ts, owing to their low memory usage and quick compilation times. However, they lack certain advanced features, such as `transform`, `refine`, and `discriminatedUnion`, which are often required in real-world applications. Therefore, for the Softnetics Team, the ideal choice would be Valibot, as it combines speed, rich feature support, and a TypeScript-first approach, similar to Zod.
It’s worth noting that the Valibot community is smaller than that of Zod. If a strong community is a priority, Zod remains the best option for now.
To add more test cases, you can follow the steps below.
- Add a new test case in the samples directory. The common directory contains test cases shared across multiple libraries, while the custom directory contains test cases specific to a single library.
- Open Pull Request to this repository.
To add more candidates you can follow the steps below.
- Go to src/constants/library.ts and add a new candidate.
- Open Pull Request to this repository.
To run the benchmark yourself, you can follow the steps described in the GitHub Actions workflow file.