
REP-1293: Allocation group resource#507

Merged
roman-castai merged 23 commits into master from REP-1293-support-allocation-groups
Jul 7, 2025

Conversation

Contributor

@roman-castai roman-castai commented Jun 16, 2025

Example usage from the current schema:

resource "castai_allocation_group" "ag_example" {
  name = "ag_example"
  cluster_ids = [
    "1a58d6b4-bc0e-4417-b9c7-31d15c313f3f",
    "d204b988-5db5-472e-a258-bf763a0f4a93"
  ]

  namespaces = [
    "namespace-a",
    "namespace-b"
  ]

  labels = {
    environment              = "production",
    team                     = "my-team",
    "app.kubernetes.io/name" = "app-name"
  }

  labels_operator = "AND"
}

@roman-castai roman-castai self-assigned this Jun 16, 2025
case http.StatusOK:
	d.SetId(*create.JSON200.Id)
	return resourceAllocationGroupRead(ctx, d, meta)
	// TODO (romank): do we have a status conflict in our response from API?
Contributor Author


Conflict or a similar status would be useful if someone wants to move their existing allocation groups into Terraform, so that we don't create duplicate allocation groups.

if len(allocationGroups) > 0 {
	for _, ag := range allocationGroups {
		if *ag.Name == *allocationGroupName {
			return diag.Errorf("allocation group already exists")

At the moment we do not require unique names for allocation groups in the console. We should either add this check to the API as well, or remove it from here.

Contributor Author


I will check whether any organization already uses the same name for differently labeled allocation groups, in which case the change would affect someone; if not, I will add this check to the API.

If we don't enforce this, it would be possible to have a Terraform configuration with two entries defining the same allocation group, e.g.:

resource "castai_allocation_group" "my_allocation_group_a" {
  name        = "my_allocation_group"
  cluster_ids = ["uuid1", "uuid2"]
  namespaces  = ["namespace1"]
}

resource "castai_allocation_group" "my_allocation_group_b" {
  name        = "my_allocation_group"
  cluster_ids = ["uuid1", "uuid2"]
  namespaces  = ["namespace1"]
}

We could probably check the whole allocation group definition instead of just the name, which would guard against most of the unintended use cases:

  • Given that the id would be implicit, it wouldn't be noticeable in the tf config that those are really two different allocation groups.

  • If migration of previously created allocation groups is done not through importing but just by copying everything into the tf files, the client would recreate all their allocation groups, which I would say we should guard against.

But I would argue that checking name uniqueness makes sense from the start, as I presume it would be confusing to have two allocation groups with the same name showing the same or even different statistics.

tl;dr: I will first check whether anyone uses non-unique allocation group names; if not, I will enforce uniqueness.
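If the check were extended from the name to the whole definition, one way to sketch it is an order-insensitive fingerprint over all fields — `allocationGroup` and `fingerprint` here are hypothetical illustrations based on the schema discussed above, not provider code:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// allocationGroup mirrors the fields from the Terraform schema above
// (illustrative subset, not the generated API type).
type allocationGroup struct {
	Name       string
	ClusterIDs []string
	Namespaces []string
}

// fingerprint builds an order-insensitive key from the whole definition,
// so two resources describing the same group collide even if their
// lists are written in a different order.
func fingerprint(ag allocationGroup) string {
	clusters := append([]string(nil), ag.ClusterIDs...)
	namespaces := append([]string(nil), ag.Namespaces...)
	sort.Strings(clusters)
	sort.Strings(namespaces)
	return ag.Name + "|" + strings.Join(clusters, ",") + "|" + strings.Join(namespaces, ",")
}

func main() {
	a := allocationGroup{Name: "my_allocation_group", ClusterIDs: []string{"uuid2", "uuid1"}, Namespaces: []string{"namespace1"}}
	b := allocationGroup{Name: "my_allocation_group", ClusterIDs: []string{"uuid1", "uuid2"}, Namespaces: []string{"namespace1"}}
	fmt.Println(fingerprint(a) == fingerprint(b)) // prints "true"
}
```

Comparing fingerprints of existing groups against the one being created would catch the copy-instead-of-import scenario above without forbidding intentionally distinct groups that happen to share a name.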

@roman-castai roman-castai marked this pull request as ready for review June 25, 2025 20:52
@roman-castai roman-castai requested a review from a team as a code owner June 25, 2025 20:52
@roman-castai roman-castai requested a review from furkhat July 3, 2025 12:41
@roman-castai roman-castai merged commit 2f27808 into master Jul 7, 2025
10 checks passed
@roman-castai roman-castai deleted the REP-1293-support-allocation-groups branch July 7, 2025 12:02