Add benchmark tests for serialization/deserialization performance #21

@Sukuna0007Abhi

Description

Problem

The CMW library currently lacks performance benchmarks, which makes it difficult to:

  • Establish performance baselines for serialization/deserialization operations
  • Identify performance bottlenecks in critical paths
  • Prevent performance regressions during development
  • Compare performance across different formats (JSON vs CBOR)
  • Optimize memory allocations and processing efficiency

Proposed Solution

Add comprehensive benchmark tests covering the main performance-critical operations (a sketch of the intended benchmark structure follows the list below):

Core Benchmarks Needed:

  1. Serialization Benchmarks:

    • BenchmarkCMW_MarshalJSON - JSON marshaling for monads and collections
    • BenchmarkCMW_MarshalCBOR - CBOR marshaling for monads and collections
    • BenchmarkCollection_MarshalJSON - Collection-specific JSON marshaling
    • BenchmarkCollection_MarshalCBOR - Collection-specific CBOR marshaling
  2. Deserialization Benchmarks:

    • BenchmarkCMW_UnmarshalJSON - JSON unmarshaling
    • BenchmarkCMW_UnmarshalCBOR - CBOR unmarshaling
    • BenchmarkCMW_Deserialize - Auto-detection and deserialization
  3. Specialized Operations:

    • BenchmarkCMW_EncodeX509Extension - X.509 extension encoding
    • BenchmarkCMW_DecodeX509Extension - X.509 extension decoding
    • BenchmarkCMW_SignedCBOR - Signed CBOR operations (if applicable)
  4. Data Size Variations:

    • Small payloads (< 1KB)
    • Medium payloads (1KB - 100KB)
    • Large payloads (> 100KB)
    • Nested collections with varying depths
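As a rough illustration of the intended shape, here is a minimal sketch of a size-parameterised serialization benchmark. The `samplePayload` struct, the media type string, and the size buckets are placeholders standing in for real `cmw.CMW` / collection fixtures, since the exact construction API isn't spelled out in this issue; the point is the `b.Run` sub-benchmark layout, which lets small/medium/large results be compared side by side.

```go
package cmw_test

import (
	"encoding/json"
	"testing"
)

// samplePayload is a placeholder standing in for a CMW monad; the real
// benchmarks would build cmw.CMW / collection values from attestation
// data instead.
type samplePayload struct {
	MediaType string `json:"type"`
	Value     []byte `json:"value"`
}

// BenchmarkCMW_MarshalJSON shows the intended size-variation layout: one
// b.Run sub-benchmark per payload bucket, with allocation and throughput
// reporting enabled.
func BenchmarkCMW_MarshalJSON(b *testing.B) {
	sizes := []struct {
		name string
		n    int
	}{
		{"small-1KB", 1 << 10},
		{"medium-64KB", 64 << 10},
		{"large-1MB", 1 << 20},
	}
	for _, tc := range sizes {
		payload := samplePayload{
			MediaType: "application/vnd.example.evidence",
			Value:     make([]byte, tc.n),
		}
		b.Run(tc.name, func(b *testing.B) {
			b.ReportAllocs()
			b.SetBytes(int64(tc.n))
			for i := 0; i < b.N; i++ {
				if _, err := json.Marshal(&payload); err != nil {
					b.Fatal(err)
				}
			}
		})
	}
}
```

The same sub-benchmark layout would carry over to the CBOR and collection variants listed above.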

Implementation Details:

  • Use Go's built-in testing package benchmark framework
  • Include memory allocation tracking via b.ReportAllocs(), as shown in the sketch after this list
  • Test with realistic data sizes and structures
  • Separate benchmarks for different CMW formats and operations
  • Include benchmarks for both simple monads and complex nested collections
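A complementary sketch for the deserialization side, again using placeholder fixture data rather than the library's real types: the encoded input is prepared before b.ResetTimer() so only the decode path is measured, and b.ReportAllocs() surfaces per-operation allocations. In the actual benchmarks, json.Unmarshal would be replaced by the CMW UnmarshalJSON / UnmarshalCBOR / Deserialize entry points.

```go
package cmw_test

import (
	"encoding/json"
	"testing"
)

// BenchmarkCMW_UnmarshalJSON measures only the decode path: the encoded
// fixture is prepared up front and the timer reset before the loop, with
// per-operation allocations reported.
func BenchmarkCMW_UnmarshalJSON(b *testing.B) {
	// Placeholder for a serialized CMW monad (media type + base64 value);
	// real fixtures would come from the library's own Marshal* output.
	encoded := []byte(`{"type":"application/vnd.example.evidence","value":"AAECAwQ="}`)

	var out struct {
		MediaType string `json:"type"`
		Value     []byte `json:"value"`
	}

	b.ReportAllocs()
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		if err := json.Unmarshal(encoded, &out); err != nil {
			b.Fatal(err)
		}
	}
}
```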

Expected Benefits:

  • Establish performance baselines for current implementation
  • Enable data-driven optimization decisions
  • Catch performance regressions in CI/CD
  • Help users understand performance characteristics
  • Guide future optimization efforts

This would be particularly valuable for applications processing large volumes of attestation data where performance is critical.

Acceptance Criteria

  • Benchmark tests added to appropriate test files
  • Benchmarks cover all major serialization/deserialization paths
  • Memory allocation tracking included
  • Benchmarks test various data sizes and complexity levels
  • CI integration to track performance over time (optional)
  • Documentation on how to run and interpret benchmarks (see the note below)
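For the documentation item: the benchmarks would typically be run with `go test -bench=. -benchmem ./...` (optionally adding `-run '^$'` to skip the unit tests), and successive runs can be compared with benchstat from golang.org/x/perf to spot regressions; the exact invocations to document are left to the implementer.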
