Add num_workers to minimum schemas for cluster tables as a long #1302

@neilbest-db

Description

Overwatch Version

The issue started appearing during testing for the 0.8.2.0 release when upgrading existing deployments; new deployments are unaffected.

Describe the bug

The working theory is that the type of num_workers changed upstream in the REST API responses; this is under active evaluation as of 2024-10-03 (Thu). If so, the target tables were already created with the former type, int, as received in earlier API response payloads, and the new responses cannot be merged into those tables because Spark does not down-cast such types. It only up-casts (e.g. int -> long), which is the reverse of this scenario.
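To illustrate the asymmetry described above: Spark will widen a source column's numeric type when merging into a wider target column, but it will not narrow one. The sketch below is a Spark-free illustration of that rule only; the `can_merge_into` helper and the `WIDENING_ORDER` list are hypothetical, not part of Spark or Overwatch.

```python
# Hypothetical illustration of Spark's one-way numeric widening rule.
# A source column can be merged into a target column only when the cast
# is an up-cast (e.g. int -> long); down-casts (long -> int) are rejected.

# Integral types ordered by width: a type can be up-cast to any type
# appearing later in this list.
WIDENING_ORDER = ["byte", "short", "int", "long"]

def can_merge_into(source_type: str, target_type: str) -> bool:
    """Return True if a column of source_type can be merged into target_type."""
    return WIDENING_ORDER.index(source_type) <= WIDENING_ORDER.index(target_type)

# The scenario in this issue: the target table was created with num_workers
# as int, but new API responses now carry it as long.
print(can_merge_into("int", "long"))   # up-cast: allowed
print(can_merge_into("long", "int"))   # down-cast: rejected, merge fails
```

This is why the proposed fix declares num_workers as long in the minimum schemas: a table created with the wider type can absorb both old (int) and new (long) payloads.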

Metadata

Labels

bug: Something isn't working
schema change: Requires a schema change
