
Cedarling Designer authorize_multi_issuer: requirements

Michael Schwartz edited this page Sep 7, 2025 · 2 revisions

Requirements Document

Introduction

This feature extends the Cedarling authorization engine to accept multiple JWT tokens in a single authorization request. Currently, the Cedarling accepts only one token per type (e.g., one access_token or one id_token) per request, which rules out real-world scenarios in which users or workloads must present multiple tokens from different issuers.

The solution introduces a new authorize_multi_issuer method that accepts an array of JWT tokens, each with an explicit mapping type (a standard type such as "Jans::Access_Token" or a custom type such as "Acme::DolphinToken"). The system validates all tokens and dynamically creates a Cedar entity for each individual token. This enables complex authorization scenarios involving multiple token sources while maintaining clear separation between individual tokens.

Requirements

Requirement 1: Multi-Token Support

User Story: As a developer integrating with the Cedarling, I want to send multiple tokens from different issuers in a single authorization request, so that I can support complex authorization scenarios.

Acceptance Criteria

  1. WHEN a developer calls the new multi-token authorization method THEN the system SHALL accept an array of tokens with their associated types
  2. WHEN multiple tokens of the same type from different issuers are provided THEN the system SHALL reject the request with an appropriate error indicating that tokens of the same type must come from a single issuer
  3. WHEN tokens from different issuers are provided with different token types THEN the system SHALL validate and process each token according to its issuer's configuration
  4. WHEN no tokens are provided in the array THEN the system SHALL return an appropriate error response
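Criteria 2 and 4 amount to a pre-flight check over the token array. The following Python sketch illustrates that check; decode_claims and check_token_set are hypothetical names, and the payload decoding here deliberately skips signature verification (real validation must verify each token):

```python
import base64
import json
from collections import defaultdict

def decode_claims(jwt: str) -> dict:
    """Decode the JWT payload segment WITHOUT verifying the signature
    (illustration only; production validation must verify signatures)."""
    seg = jwt.split(".")[1]
    return json.loads(base64.urlsafe_b64decode(seg + "=" * (-len(seg) % 4)))

def check_token_set(tokens: list[dict]) -> None:
    """Enforce criteria 2 and 4: the array must be non-empty, and all
    tokens of one mapping type must share a single issuer."""
    if not tokens:
        raise ValueError("token array must not be empty")
    issuers_by_type: dict[str, set[str]] = defaultdict(set)
    for t in tokens:
        issuers_by_type[t["mapping"]].add(decode_claims(t["payload"])["iss"])
    for mapping, issuers in issuers_by_type.items():
        if len(issuers) > 1:
            raise ValueError(
                f"tokens of type {mapping} must come from a single issuer; "
                f"got {sorted(issuers)}"
            )
```

Under this rule, a request carrying two access tokens from two different identity providers fails fast with a descriptive error, before any policy evaluation runs.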

Requirement 2: Dynamic Token Type Mapping

User Story: As a policy author, I want to define custom token types beyond the standard ones (access_token, id_token, etc.), so that I can create policies for domain-specific tokens like "dolphin_token" or "healthcare_consent_token".

Acceptance Criteria

  1. WHEN a custom token type is provided THEN the system SHALL map the token claims to the corresponding Cedar schema defined in the policy store
  2. WHEN a token type is not recognized THEN the system SHALL attempt to infer the type from token contents or return an appropriate error
  3. WHEN standard token types (id_token, access_token, userinfo_token, tx_token) are used THEN the system SHALL apply their base schema plus any additional claims
  4. WHEN token type mapping fails THEN the system SHALL provide clear error messages indicating the mapping failure
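The mapping resolution in criteria 1-4 can be sketched as a three-way decision: known standard type, namespaced custom type, or fallback inference. This is an assumption about the decision order, not the Cedarling's actual implementation, and the claim-based inference heuristic (treating a "scope" claim as a hint of an access token) is purely illustrative:

```python
# Standard token types from criterion 3 (base schema plus extra claims).
STANDARD_MAPPINGS = {
    "Jans::Access_Token",
    "Jans::Id_Token",
    "Jans::Userinfo_Token",
    "Jans::Tx_Token",
}

def resolve_mapping(mapping: str, claims: dict) -> tuple[str, str]:
    """Return (cedar_entity_type, kind) for a mapping string."""
    if mapping in STANDARD_MAPPINGS:
        return mapping, "standard"
    if "::" in mapping:                     # e.g. Acme::DolphinToken
        return mapping, "custom"            # entity created dynamically
    # Illustrative inference heuristic (not the Cedarling's rule):
    if "scope" in claims:
        return "Jans::Access_Token", "inferred"
    raise ValueError(
        f"token type mapping failed for {mapping!r}: "
        "unknown type and no inferable claims"
    )
```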

Requirement 3: Enhanced Policy Evaluation with Ergonomic Syntax

User Story: As a policy author, I want to write Cedar policies that can reason about multiple individual tokens using ergonomic syntax, so that I can create complex authorization rules that are easy to read and maintain.

Acceptance Criteria

  1. WHEN multiple tokens are provided THEN policies SHALL have access to a token collection context with predictable field names based on trusted issuer names and token types
  2. WHEN policies reference token claims THEN the system SHALL provide access via Cedar tag syntax like context.tokens.acme_access_token.hasTag("scope") && context.tokens.acme_access_token.tag("scope").contains("read:profile")
  3. WHEN policies need to validate across multiple individual tokens THEN the system SHALL support cross-token validation using individual token references with consistent Set-based claim access
  4. WHEN policies check for specific token existence THEN the system SHALL support syntax like context has tokens.acme_access_token
  5. WHEN policies work with numeric claims THEN the system SHALL support Set of Long operations like context.tokens.gov_access_token.tag("clearance_levels").containsAny([5, 6, 7])

Requirement 4: Token Array API Design

User Story: As a developer, I want a simple and intuitive API for sending multiple tokens with explicit mapping information, so that I can easily integrate multi-token authorization into my applications.

Acceptance Criteria

  1. WHEN sending tokens THEN the API SHALL accept an array format like [{"mapping": "Jans::Access_Token", "payload": "eyJhb...."}, {"mapping": "Acme::DolphinToken", "payload": "e3gh3...."}]
  2. WHEN mapping is provided THEN the system SHALL use it to create the appropriate Cedar entity type
  3. WHEN arbitrary mapping types are used (e.g., "Acme::DolphinToken") THEN the system SHALL dynamically create entities without requiring pre-defined schemas and SHALL treat the JWT claims as strings
  4. WHEN issuer information is needed THEN the system SHALL extract it from the iss claim in each JWT payload and resolve it against the current trusted issuer metadata
  5. WHEN issuer auto-discovery is needed THEN the system SHALL automatically fetch OpenID Connect Discovery metadata for new issuers and cache it in trusted issuer configuration
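Criteria 4 and 5 pair iss extraction with a lookup in trusted issuer metadata. A minimal sketch, assuming a trusted_issuers dictionary keyed by configured issuer name (jwt_issuer and resolve_issuer are hypothetical names, and the OIDC discovery fallback for unknown issuers is noted but not implemented):

```python
import base64
import json

def jwt_issuer(jwt: str) -> str:
    """Extract the iss claim from the (unverified) JWT payload segment."""
    seg = jwt.split(".")[1]
    claims = json.loads(base64.urlsafe_b64decode(seg + "=" * (-len(seg) % 4)))
    return claims["iss"]

def resolve_issuer(iss: str, trusted_issuers: dict) -> dict:
    """Match iss against trusted issuer metadata. An unknown issuer would
    trigger OpenID Connect Discovery and caching (criterion 5), which is
    not sketched here."""
    for name, meta in trusted_issuers.items():
        if meta.get("issuer") == iss:
            return {"name": name, **meta}
    raise LookupError(f"no trusted issuer metadata for {iss}")
```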

Requirement 5: Dynamic Entity Creation and Individual Token Processing

User Story: As a policy evaluation engine, I want to dynamically create Cedar entities from any token mapping type and process each token individually, so that policies can query specific token information with predictable naming and maintain clear token separation.

Acceptance Criteria

  1. WHEN tokens are processed THEN the system SHALL create Cedar entities dynamically based on the mapping string without requiring pre-defined entity types
  2. WHEN each valid token is processed THEN the system SHALL create a separate entity for that individual token
  3. WHEN creating token collections THEN the system SHALL use a secure naming convention of the form {trusted_issuer_name}_{token_type_simplified} based on trusted issuer metadata (e.g., "acme_access_token", "dolphin_id_token")
  4. WHEN storing claims THEN the system SHALL store all JWT claims as Cedar entity tags with Set of String as the default type for consistent interface
  5. WHEN Cedar schema is defined THEN the system SHALL support proper data type casting (DateTime, Long, Boolean) for enhanced type safety
  6. WHEN processing individual tokens THEN the system SHALL maintain complete separation between tokens to enable precise policy evaluation
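Criterion 4's default Set-of-String interface can be sketched as a flattening step from JWT claims to entity tags. claims_to_tags is a hypothetical name, and the schema-driven casting to Long/Boolean/DateTime from criterion 5 is deliberately omitted:

```python
def claims_to_tags(claims: dict) -> dict[str, set[str]]:
    """Store every JWT claim as a Set-of-String entity tag (criterion 4).
    Scalar claims become singleton sets; list claims map element-wise,
    so policies get one consistent access pattern for both."""
    tags: dict[str, set[str]] = {}
    for name, value in claims.items():
        if isinstance(value, list):
            tags[name] = {str(v) for v in value}
        else:
            tags[name] = {str(value)}
    return tags
```

For example, a token with scope ["read", "write"] and exp 1735689600 yields tags {"scope": {"read", "write"}, "exp": {"1735689600"}}, so a policy can test membership the same way on either claim.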

Requirement 6: Error Handling and Validation

User Story: As a developer, I want clear error log messages when token processing fails, so that I can quickly identify and resolve integration issues.

Acceptance Criteria

  1. WHEN token validation fails THEN the system SHALL provide specific error messages indicating which token and what validation failed
  2. WHEN token parsing fails THEN the system SHALL indicate the problematic token and parsing error
  3. WHEN issuer validation fails THEN the system SHALL specify which issuer configuration is missing or invalid
  4. WHEN policy evaluation encounters token-related errors THEN the system SHALL provide context about which tokens were involved

Requirement 7: Performance and Scalability

User Story: As a system administrator, I want multi-token authorization to perform efficiently, so that it doesn't significantly impact application response times.

Acceptance Criteria

  1. WHEN processing multiple tokens THEN the system SHALL validate tokens in parallel where possible
  2. WHEN token caching is available THEN the system SHALL leverage caching to avoid redundant validation
  3. WHEN large numbers of tokens are provided THEN the system SHALL handle them efficiently without memory issues
  4. WHEN performance monitoring is enabled THEN the system SHALL provide metrics on multi-token processing times

Requirement 8: Multiple Token Querying and Aggregation

User Story: As a policy author, I want to write policies that can query and reason about multiple individual tokens from different issuers and types, so that I can create authorization rules that consider all relevant tokens without manual enumeration.

Acceptance Criteria

  1. WHEN multiple tokens from different issuers and types exist THEN policies SHALL be able to query tokens using collection-based syntax
  2. WHEN policies need to check if any token from a specific issuer has a claim THEN the system SHALL support syntax like context.tokens.anyFromIssuer("acme").hasTag("scope")
  3. WHEN policies need to count tokens from a specific issuer THEN the system SHALL support syntax like context.tokens.countFromIssuer("acme") >= 1
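The aggregation operators above can be emulated over the token collection from Requirement 5, since field names follow the {issuer}_{token_type} convention. These helper names mirror the proposed syntax but are assumptions, not existing Cedar or Cedarling functions:

```python
def tokens_from_issuer(tokens: dict[str, dict], issuer: str) -> list[dict]:
    """Select token entries whose field name carries the issuer prefix
    (field names follow {issuer}_{token_type})."""
    return [t for name, t in tokens.items() if name.startswith(issuer + "_")]

def any_from_issuer_has_tag(tokens: dict, issuer: str, tag: str) -> bool:
    """Emulates context.tokens.anyFromIssuer(issuer).hasTag(tag)."""
    return any(tag in t for t in tokens_from_issuer(tokens, issuer))

def count_from_issuer(tokens: dict, issuer: str) -> int:
    """Emulates context.tokens.countFromIssuer(issuer)."""
    return len(tokens_from_issuer(tokens, issuer))
```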

Requirement 9: Secure Field Naming and Issuer Resolution

User Story: As a security-conscious developer, I want the system to use secure field naming conventions that prevent issuer confusion and potential security vulnerabilities, so that tokens from different identity providers cannot be mistakenly accepted or confused.

Acceptance Criteria

  1. WHEN resolving issuer names THEN the system SHALL look up the issuer in trusted issuer metadata first and use the configured name field as the issuer prefix
  2. WHEN trusted issuer metadata does not contain a name field THEN the system SHALL use the hostname from the JWT iss claim, dropping protocol (https://) and path components but preserving the complete domain structure
  3. WHEN creating field names THEN the system SHALL preserve complete domain structure by converting dots to underscores and converting to lowercase without removing any domain components
  4. WHEN creating token field names THEN the system SHALL use the issuer and token type combination to create distinct field names since each issuer can only have one token of each type
  5. WHEN logging field name resolution THEN the system SHALL log the complete mapping from original issuer to final field name for security auditing
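Criteria 1-4 can be sketched end to end: prefer the configured name from trusted issuer metadata, else fall back to the full hostname of the iss claim, lowercased with dots mapped to underscores. Function names and the trusted_issuers shape are hypothetical:

```python
from urllib.parse import urlparse

def issuer_field_prefix(iss: str, trusted_issuers: dict) -> str:
    """Prefer the configured name from trusted issuer metadata
    (criterion 1); otherwise keep the complete hostname of iss with
    protocol and path dropped (criteria 2 and 3)."""
    for meta in trusted_issuers.values():
        if meta.get("issuer") == iss and meta.get("name"):
            return meta["name"].lower()
    host = urlparse(iss).netloc or iss       # drop https:// and any path
    return host.lower().replace(".", "_")    # keep every domain component

def token_field_name(iss: str, token_type: str, trusted_issuers: dict) -> str:
    """Combine issuer prefix and token type into a distinct field name
    (criterion 4)."""
    return f"{issuer_field_prefix(iss, trusted_issuers)}_{token_type}"
```

Preserving the full domain structure is what prevents issuer confusion: "https://idp.acme.com/auth" becomes idp_acme_com_access_token, which cannot collide with a token from acme.com or a look-alike subdomain on another registrable domain.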
