
implement participant migration procedure to enable LSU#5120

Merged
mblaze-da merged 3 commits into main from mblaze-da/participant-migration/4173
Apr 21, 2026

Conversation

@mblaze-da
Contributor

@mblaze-da mblaze-da commented Apr 20, 2026

The migration procedure is done in two steps:

  1. Import the participant's DB resources into the sv stacks and prepare them to be removed from the sv-canton stacks. To do this:
    • deploy once with synchronizerMigration.active.migrateParticipantsFromSvCantonToSv set to true
  2. Remove the participant's DB resources from the sv-canton stacks, take the participant chart down from sv-canton, and bring it up from the sv stack. Restarting the participant is also necessary because the DB user's password changes and the password cannot be migrated. To do this:
    • prepare post migration config (these changes are permanent)
      • remove synchronizerMigration.active.migrateParticipantsFromSvCantonToSv
      • set synchronizerMigration.active.enableLogicalSynchronizerDeploymentMode to true
      • make sure that synchronizerMigration.frozenMigrationId is set
    • deploy once to finish migrating
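
The two deploys above differ only in a few config values. A minimal sketch of both configurations, assuming a plain object shape (the key names come from this PR's description; the surrounding structure and the frozenMigrationId value are placeholders):

```typescript
// Hypothetical sketch: key names are from the PR description; the object
// shape and the frozenMigrationId value (0) are illustrative placeholders.
const step1Config = {
  synchronizerMigration: {
    active: {
      // deploy once with this flag to import DB resources into the sv stacks
      migrateParticipantsFromSvCantonToSv: true,
    },
  },
};

const step2Config = {
  // permanent post-migration config: migrateParticipantsFromSvCantonToSv removed
  synchronizerMigration: {
    active: {
      enableLogicalSynchronizerDeploymentMode: true,
    },
    frozenMigrationId: 0, // must be set; 0 is a placeholder value
  },
};
```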


Signed-off-by: Mateusz Błażejewski <mateusz.blazejewski@digitalasset.com>

Signed-off-by: Mateusz Błażejewski <mateusz.blazejewski@digitalasset.com>
@github-actions

[backport] Reminder

Please consider backporting to the following branches:

  • release-line-0.5.18
  • release-line-0.5.17
  • release-line-0.5.16
  • release-line-0.5.15

▶️ Please check the boxes for branches that you wish to backport to and backport PRs will
automatically be created when you merge this PR.

And your PR is currently against base branch: main.

Note: Any PR comment containing [backport] will be considered for auto-backporting upon merge,
you can always add those manually for PRs that did not get these reminders. You can also edit
this comment manually and add more branches that this should be backported to.

@mblaze-da mblaze-da marked this pull request as draft April 20, 2026 09:12
@mblaze-da mblaze-da marked this pull request as ready for review April 20, 2026 09:21
@moritzkiefer-da
Contributor

The migration procedure looks as follows:

Can you describe what happens in each step? It's a bit tricky to follow in the code. My understanding is:

  1. Step 1 sets retain on delete on the DB. I think you are also assuming that in this case the sv stack gets deployed and can import the DB from the sv-canton outputs? If so, that seems like an essential step. I think the participant still gets deployed from sv-canton?
  2. Step 2 (and step 3, afaict this is only one PR so not sure why it's 2 steps?) then actually removes the DB from the sv-canton stack, which is fine due to retainOnDelete, and moves the participant chart to be deployed from sv?

  // the following will not be needed when the MIGRATION_ID gets removed from defaults
- databaseName: `participant${migrationSuffix(migration?.id, '_')}`,
+ databaseName:
+   participant?.legacyDatabaseName ?? `participant${migrationSuffix(migration?.id, '_')}`,
Contributor

not sure I understand why we need this. Can we not rely on the frozenMigrationId field?

Contributor Author

You're right.
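
For reference, the fallback proposed in the diff above can be sketched in isolation. This is a hypothetical reconstruction: the behavior of migrationSuffix is assumed (empty when the id is undefined, separator plus id otherwise), and as resolved in this thread the legacyDatabaseName field was dropped in favor of frozenMigrationId.

```typescript
// Assumed behavior of migrationSuffix: '' when id is undefined,
// otherwise separator + id (e.g. migrationSuffix(0, '_') === '_0').
function migrationSuffix(id: number | undefined, sep: string): string {
  return id === undefined ? '' : `${sep}${id}`;
}

// The originally proposed fallback: keep a legacy name if one exists,
// otherwise derive the database name from the migration id as before.
function databaseName(
  legacyDatabaseName: string | undefined,
  migrationId: number | undefined
): string {
  return legacyDatabaseName ?? `participant${migrationSuffix(migrationId, '_')}`;
}
```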

};
migratingDatabaseInstanceName?: string;
migratingDatabaseSecretName?: string;
yieldManagement?: boolean;
Contributor

do we use yieldManagement for more than setting retainOnDelete? If not maybe just call it retainOnDelete?

Contributor Author

It is used both on retainOnDelete and disabling the Pulumi protect option. The idea behind the name was that this flag in some sense makes the current stack yield the management of these resources. It does not yield it completely as a hypothetical modification afterwards would still be applied but it stops managing deletion. Now that I had to explain this I'm starting to lean towards something more specific like retainDbResourcesOnDelete.
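
As described in the reply above, the flag drives two Pulumi resource options at once. A minimal sketch, using a local stand-in type rather than the real pulumi.CustomResourceOptions (retainOnDelete and protect are real Pulumi option names; the helper function is hypothetical):

```typescript
// Local stand-in for the relevant subset of pulumi.CustomResourceOptions.
interface DbResourceOptions {
  retainOnDelete: boolean; // keep the cloud resource when it leaves the stack
  protect: boolean;        // refuse deletion from the cloud while true
}

// When the stack "yields management", the underlying resource survives
// removal from the stack (retainOnDelete) and deletion protection is
// switched off so the resource can be dropped from this stack's state.
function dbResourceOptions(yieldManagement: boolean): DbResourceOptions {
  return {
    retainOnDelete: yieldManagement,
    protect: !yieldManagement,
  };
}
```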

existingInstanceName !== undefined
? {
import: existingInstanceName,
ignoreChanges: ['userLabels'],
Contributor

why do these change?

Contributor Author

Because in the sv stack I generally tried to avoid using migration ID if it's not necessary. For example the participant instance is just called participant instead of participant-0. This also means that the migration_id label is not present in the sv stack deployment. To get the resource import to work I have to ignore this change as resource state and definition must be identical during the import. During the subsequent deployment (step 2) this label is actually removed.
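
A self-contained sketch of the conditional options in the fragment above (import and ignoreChanges are real Pulumi resource option names and the shape follows the diff in this thread; the local type and helper name are illustrative):

```typescript
// Local stand-in for the relevant subset of Pulumi resource options.
interface ImportOptions {
  import?: string;          // adopt an existing cloud resource into state
  ignoreChanges?: string[]; // properties whose drift is ignored
}

// During step 1 the sv stack adopts the instance created by sv-canton.
// The sv stack omits the migration_id label, so label differences must be
// ignored for the import to succeed (state and definition must match).
function importOptions(existingInstanceName: string | undefined): ImportOptions {
  return existingInstanceName !== undefined
    ? {
        import: existingInstanceName,
        ignoreChanges: ['userLabels'],
      }
    : {};
}
```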

@mblaze-da
Contributor Author

The migration procedure looks as follows:

Can you describe what happens in each step? It's a bit tricky to follow in the code. My understanding is:

1. Step 1 sets retain on delete on the DB. I think you are also assuming that in this case the sv stack gets deployed and can import the DB from the sv-canton outputs? If so, that seems like an essential step. I think the participant still gets deployed from sv-canton?

2. Step 2 (and step 3, afaict this is only one PR so not sure why it's 2 steps?) then actually removes the DB from the sv-canton stack, which is fine due to retainOnDelete, and moves the participant chart to be deployed from sv?

@moritzkiefer-da your analysis is pretty much spot on. I've updated the PR description to make it clearer.

Contributor

@moritzkiefer-da moritzkiefer-da left a comment

Thanks! lgtm minus the frozen migration id change

Contributor

@nicu-da nicu-da left a comment

Thanks! Nothing else to add compared to @moritzkiefer-da's comments. Let's make sure we use the frozen migration id to avoid needing any other config changes for the migration.

Comment thread: cluster/pulumi/common/src/secrets.ts (Outdated)
password: pulumi.Input<string>,
secretName: string,
existingSecretName?: string,
yieldManagement: boolean = false
Contributor

Why the yieldManagement naming?

Contributor Author

I renamed this.

…agement


Signed-off-by: Mateusz Błażejewski <mateusz.blazejewski@digitalasset.com>
@mblaze-da mblaze-da requested a review from nicu-da April 21, 2026 12:14
@mblaze-da mblaze-da merged commit 02fb883 into main Apr 21, 2026
54 checks passed
@mblaze-da mblaze-da deleted the mblaze-da/participant-migration/4173 branch April 21, 2026 12:18
