authorize_multi_issuer: requirements
This feature extends the Cedarling authorization engine to accept multiple JWT tokens in a single authorization request. Currently, the Cedarling accepts only one token per type (e.g., one access_token or one id_token) per request, which rules out real-world scenarios in which users or workloads need to present multiple tokens from different issuers.
The solution introduces a new authorize_multi_issuer method that accepts an array of JWT tokens, each with an explicit mapping type (like "Jans::Access_Token" or custom types like "Acme::DolphinToken"). The system validates all tokens, dynamically creates Cedar entities for each token type, and joins tokens of the same type from the same issuer for efficient policy evaluation. This enables complex authorization scenarios involving multiple token sources.
User Story: As a developer integrating with the Cedarling, I want to send multiple tokens of the same or different types in a single authorization request, so that I can support complex authorization scenarios involving multiple issuers and token types.
- WHEN a developer calls the new multi-token authorization method THEN the system SHALL accept an array of tokens with their associated types
- WHEN multiple tokens of the same type are provided THEN the system SHALL process all tokens rather than only the last one
- WHEN tokens from different issuers are provided THEN the system SHALL validate and process each token according to its issuer's configuration
- WHEN no tokens are provided in the array THEN the system SHALL return an appropriate error response
User Story: As a policy author, I want to define custom token types beyond the standard ones (access_token, id_token, etc.), so that I can create policies for domain-specific tokens like "dolphin_token" or "healthcare_consent_token".
- WHEN a custom token type is provided THEN the system SHALL map the token claims to the corresponding Cedar schema defined in the policy store
- WHEN a token type is not recognized THEN the system SHALL attempt to infer the type from token contents or return an appropriate error
- WHEN standard token types (id_token, access_token, userinfo_token, tx_token) are used THEN the system SHALL apply their base schema plus any additional claims
- WHEN token type mapping fails THEN the system SHALL provide clear error messages indicating the mapping failure
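As a sketch of what the policy-store side of this mapping could look like, a custom type such as "Acme::DolphinToken" might be declared in Cedar schema syntax roughly as follows. The namespace and entity name mirror the mapping string used elsewhere in this document, and the tag type assumes the Set-of-String default described in the entity-creation requirements below:

```cedar
// Hypothetical schema fragment for a custom token type.
// Nothing here is prescribed beyond the requirement that the
// mapping string (e.g. "Acme::DolphinToken") resolves to a
// schema-defined entity type when one exists in the policy store.
namespace Acme {
  // Claims arrive as entity tags; Set<String> matches the
  // default claim representation described in this document.
  entity DolphinToken tags Set<String>;
}
```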
User Story: As a policy author, I want to write Cedar policies that can reason about multiple tokens using ergonomic syntax, so that I can create complex authorization rules that are easy to read and maintain.
- WHEN multiple tokens are provided THEN policies SHALL have access to a flattened token collection context with predictable field names based on trusted issuer names and token types
- WHEN policies reference token claims THEN the system SHALL provide access via Cedar tag syntax like `context.tokens.acme_access_token.hasTag("scope") && context.tokens.acme_access_token.tag("scope").contains("read:profile")`
- WHEN policies need to validate token combinations THEN the system SHALL support cross-token validation using the flattened structure with consistent Set-based claim access
- WHEN policies check for token existence THEN the system SHALL support syntax like `context has tokens.acme_access_token`
- WHEN policies work with numeric claims THEN the system SHALL support Set of Long operations like `context.tokens.gov_access_token.tag("clearance_levels").containsAny([5, 6, 7])` (the sketch after this list combines these fragments into one policy)
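Combining the fragments above, a complete policy could read as follows. The action name and the issuer-derived field names (acme_access_token, gov_access_token) are illustrative; the tag operators and Set operations are exactly the ones quoted in the criteria:

```cedar
// Sketch only: the entity types, action name, and field names are
// hypothetical. The tag syntax (hasTag/tag) and the Set operators
// (contains/containsAny) are those cited in the criteria above.
permit (principal, action == Action::"ReadProfile", resource)
when {
  context has tokens.acme_access_token &&
  context.tokens.acme_access_token.hasTag("scope") &&
  context.tokens.acme_access_token.tag("scope").contains("read:profile") &&
  context has tokens.gov_access_token &&
  context.tokens.gov_access_token.hasTag("clearance_levels") &&
  context.tokens.gov_access_token.tag("clearance_levels").containsAny([5, 6, 7])
};
```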
User Story: As a developer, I want a simple and intuitive API for sending multiple tokens with explicit mapping information, so that I can easily integrate multi-token authorization into my applications.
- WHEN sending tokens THEN the API SHALL accept an array format like `[{"mapping": "Jans::Access_Token", "payload": "eyJhb...."}, {"mapping": "Acme::DolphinToken", "payload": "e3gh3...."}]` (expanded in the example after this list)
- WHEN mapping is provided THEN the system SHALL use it to create the appropriate Cedar entity type
- WHEN arbitrary mapping types are used (e.g., "Acme::DolphinToken") THEN the system SHALL dynamically create entities without requiring pre-defined schemas
- WHEN issuer information is needed THEN the system SHALL extract it from the `iss` claim in each JWT payload and resolve it against trusted issuer metadata
- WHEN issuer auto-discovery is needed THEN the system SHALL automatically fetch OpenID Connect Discovery metadata for new issuers and cache it in the trusted issuer configuration
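Expanded, the token array from the criterion above might look like the following sketch. Payloads are truncated JWTs as in the inline example, and the first two entries deliberately share a mapping type, which the joining requirements later in this document combine into a single entity:

```json
[
  { "mapping": "Jans::Access_Token", "payload": "eyJhb...." },
  { "mapping": "Jans::Access_Token", "payload": "eyJzb...." },
  { "mapping": "Acme::DolphinToken", "payload": "e3gh3...." }
]
```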
User Story: As a policy evaluation engine, I want to dynamically create Cedar entities from any token mapping type and join tokens of the same type from the same issuer, so that policies can efficiently query token information with predictable naming.
- WHEN tokens are processed THEN the system SHALL create Cedar entities dynamically based on the mapping string without requiring pre-defined entity types
- WHEN multiple tokens of the same type from the same issuer are provided THEN the system SHALL join them into a single entity with all claims combined into Sets using union operations
- WHEN joining tokens THEN the system SHALL union all claim values into Sets, preserving all values from all tokens without replacement, creating comprehensive Sets even for traditionally single-valued claims
- WHEN creating token collections THEN the system SHALL use a secure flattened naming convention like `{trusted_issuer_name}_{token_type_simplified}` based on trusted issuer metadata (e.g., "acme_access_token", "dolphin_id_token")
- WHEN storing claims THEN the system SHALL store all JWT claims as Cedar entity tags with Set of String as the default type for a consistent interface (see the joined-entity sketch after this list)
- WHEN a Cedar schema is defined THEN the system SHALL support proper data type casting (DateTime, Long, Boolean) for enhanced type safety
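As an illustration of the joining behavior above, suppose two access tokens from the same trusted issuer (configured name "acme") arrive in one request: one carries scope "read:profile" and audience "app-1", the other scopes "write:profile openid" and audience "app-2". All claim values here are invented. Joining would produce a single acme_access_token entity whose tags union every value into Sets, even for traditionally single-valued claims like iss and aud:

```json
{
  "acme_access_token": {
    "iss": ["https://auth.acme.com"],
    "aud": ["app-1", "app-2"],
    "scope": ["read:profile", "write:profile", "openid"]
  }
}
```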
User Story: As a developer, I want clear error log messages when token processing fails, so that I can quickly identify and resolve integration issues.
- WHEN token validation fails THEN the system SHALL provide specific error messages indicating which token and what validation failed
- WHEN token parsing fails THEN the system SHALL indicate the problematic token and parsing error
- WHEN issuer validation fails THEN the system SHALL specify which issuer configuration is missing or invalid
- WHEN policy evaluation encounters token-related errors THEN the system SHALL provide context about which tokens were involved
User Story: As a developer integrating with the Cedarling in a federated environment, I want to present tokens from multiple identity providers in a single request, so that I can access resources that require validation from multiple sources.
- WHEN tokens from multiple issuers are provided THEN the system SHALL validate each against its respective issuer configuration
- WHEN federation scenarios require multiple ID tokens THEN the system SHALL support validation of tokens from different identity providers
- WHEN cross-issuer validation is needed THEN policies SHALL be able to reference claims from tokens issued by different providers
- WHEN issuer trust relationships exist THEN the system SHALL respect configured trust policies
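Under the flattened naming convention, a cross-issuer rule might compare claims from two providers, as in this sketch. The field names (acme_id_token, partner_id_token) and the "email" claim are illustrative, and the equality compares Sets because claims default to Set of String:

```cedar
// Sketch: both ID tokens must be present and agree on the email
// claim. Field names follow the flattened issuer-based convention.
permit (principal, action == Action::"Transfer", resource)
when {
  context has tokens.acme_id_token &&
  context has tokens.partner_id_token &&
  context.tokens.acme_id_token.tag("email") ==
    context.tokens.partner_id_token.tag("email")
};
```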
User Story: As a system administrator, I want multi-token authorization to perform efficiently, so that it doesn't significantly impact application response times.
- WHEN processing multiple tokens THEN the system SHALL validate tokens in parallel where possible
- WHEN token caching is available THEN the system SHALL leverage caching to avoid redundant validation
- WHEN large numbers of tokens are provided THEN the system SHALL handle them efficiently without memory issues
- WHEN performance monitoring is enabled THEN the system SHALL provide metrics on multi-token processing times
User Story: As a developer integrating with the Cedarling, I want the Cedarling to follow a consistent three-phase token validation workflow for processing multiple tokens, so that all valid tokens are properly processed while invalid tokens are excluded without blocking execution.
- WHEN tokens are received THEN the system SHALL perform signature validation to verify JWT cryptographic integrity using existing Cedarling token validation capabilities
- WHEN signature validation passes THEN the system SHALL perform content validation checking standard JWT claims like `exp`, `nbf`, `iat`, and other time-based constraints
- WHEN content validation passes THEN the system SHALL perform a status check against an OAuth Status List JWT to verify the token is active and not revoked
- WHEN any validation phase fails for a token THEN the system SHALL log the specific validation failure with detailed explanation but SHALL continue processing remaining tokens
- WHEN a token fails any validation phase THEN the system SHALL NOT include the failed token in Cedar entity mapping, nor retain it in memory
- WHEN token joining occurs and some tokens fail validation THEN the system SHALL proceed with joining only the valid tokens without blocking execution
- WHEN extracting claims from valid tokens THEN the system SHALL preserve all JWT claims as raw data, without interpretation
- WHEN creating entities THEN the system SHALL convert all JWT claims to Cedar-compatible attributes in a generic Token entity structure
User Story: As a security-conscious developer, I want the system to use secure field naming conventions that prevent issuer confusion and potential security vulnerabilities, so that tokens from different identity providers cannot be mistakenly accepted or confused.
- WHEN resolving issuer names THEN the system SHALL look up the issuer in trusted issuer metadata first and use the configured `name` field as the issuer prefix
- WHEN an issuer is not found in trusted issuer metadata THEN the system SHALL use the complete issuer domain from the JWT `iss` claim without any prefix removal or shortening
- WHEN creating field names THEN the system SHALL preserve the complete domain structure by converting dots to underscores and converting to lowercase, without removing any domain components (see the derivation examples after this list)
- WHEN processing issuer domains THEN the system SHALL NOT remove common prefixes like "auth", "login", "www", or "sso" to prevent potential security vulnerabilities where different issuers could be confused
- WHEN field name collisions occur THEN the system SHALL join tokens of the same type from the same issuer rather than creating separate fields
- WHEN logging field name resolution THEN the system SHALL log the complete mapping from original issuer to final field name for security auditing
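To make these rules concrete, here are two hypothetical derivations for access tokens, assuming the first issuer is registered in trusted issuer metadata under the configured name "acme" while the second is unknown, so its complete domain is kept (including the "login" prefix), dots become underscores, and the result is lowercased:

```json
{
  "https://auth.acme.com": "acme_access_token",
  "https://login.partner.example": "login_partner_example_access_token"
}
```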