Conversation

@Bhavna-Ramachandran Bhavna-Ramachandran commented Dec 26, 2025

Summary

This PR is branched from PR#7435 (branch: e2e-staging) to layer additional changes without modifying the original work directly.

Changes proposed

Context for reviewers

Validation steps

@doug-s-nava doug-s-nava changed the base branch from main to e2e-staging December 30, 2025 17:07

@doug-s-nava doug-s-nava left a comment


If bumping up the timeouts is all we need to do to reliably get tests passing in staging, then I'm all for it. However! Is there a way to make the timeouts dynamic based on whether we are running against staging versus local? In cases where we are running against a local server, it would be nice to keep the lower timeout to keep run times down.

Can you see if that's possible and make sure that the seed script check for existing records is in line with what is happening elsewhere?

BR: Agreed. Let's key timeouts off PLAYWRIGHT_TARGET_ENV (or PLAYWRIGHT_BASE_URL): we can keep the current higher values when running against staging and use shorter ones locally so runs stay fast. If it helps, we can add a small helper in our Playwright utils and reuse it on the slow waits (URL waits, search-result loads, select asserts); a rough sketch is below.
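
For illustration, a minimal sketch of such a helper, assuming our e2e utils are TypeScript; the file path, helper name, and timeout values are placeholders rather than the actual numbers in this PR:

// e2e/playwright/utils/timeouts.ts (hypothetical location)
// Env-aware timeouts keyed off PLAYWRIGHT_TARGET_ENV; the millisecond values are placeholders.
const isStaging = process.env.PLAYWRIGHT_TARGET_ENV === "staging";

export function slowWaitTimeout(localMs = 15_000, stagingMs = 60_000): number {
  return isStaging ? stagingMs : localMs;
}

// Example reuse on the slow waits:
// await page.waitForURL(/\/search/, { timeout: slowWaitTimeout() });
// await expect(searchResults).toBeVisible({ timeout: slowWaitTimeout() });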

About the seed script: it already checks for existing records with db_session.get(.....) and catches IntegrityError, which matches our other existing insert patterns, so it's in line with the rest of the code.

factories.OpportunityFactory.create(
    has_long_descriptions=True, opportunity_id="6a483cd8-9169-418a-8dfb-60fa6e6f51e5"
)
# Check if it already exists before creating to allow running seed multiple times

Collaborator

Good call, and an important improvement on my solution. I think @chouinar may have another way of creating a record that will work around this issue though - might be nice to make sure we're using the same pattern everywhere.

@Bhavna-Ramachandran Bhavna-Ramachandran Jan 5, 2026


I noticed there were errors in this part specifically, possibly due to parallel sessions running in different shards, so I updated this section to check if the opportunity already exists before creating it. This makes the seed script safe to run multiple times without causing duplicates or integrity errors. The has_long_descriptions=True record is still created for E2E tests if it doesn’t exist, but now it’s done safely, so concurrent runs or repeated seeds won’t fail.

We can check with @chouinar to see if his workaround helps avoid conflicts, or we could consider applying the same pattern across other seed data to keep everything consistent.

Collaborator

I don't think the rollback would work: you didn't start a transaction, so it wouldn't know what to roll back to, and once an integrity error is hit the DB session generally won't let you use it anymore. Even if you set up a guardrail properly, you'd need to do that for a dozen more such static data values elsewhere in the script. This script isn't, and can't reasonably be made, safe to run in parallel; if you want that support, we're going to need to start from scratch on this.

At best you can check if it exists and create it if it doesn't; anything more than that is not going to work in this setup.

Is it possible to run the seed script before splitting the tests into parallel batches?

Collaborator

Currently each shard sets up its own DB. Not saying that's a good thing, but in this case it should mean that we don't need to worry about the issue of records being recreated. Can we move forward with this if we replace the rollback call with an error log? If we're still seeing errors in that case, we can revisit, but I don't logically see how that would be happening.

Collaborator

If each shard sets up its own DB, then we wouldn't need an error message, as that wouldn't be an error case. It would just be a "don't create it if it already exists" sort of check that we have a few dozen of already. That'd just be:

existing_e2e_opportunity_id = "..."
existing_e2e_opportunity = db_session.get(Opportunity, existing_e2e_opportunity_id)
if existing_e2e_opportunity is None:
    OpportunityFactory.create(opportunity_id=existing_e2e_opportunity_id, ...)
