fix flaky tests because of ZTestLogger #17
Conversation
Pull Request Overview
Purpose: Replace ZTestLogger with a custom TestLogger to avoid duplicated log lines causing test flakiness, and explicitly mark one still-flaky test while adding a TODO.
- Introduces a simple concurrent TestLogger for capturing log messages.
- Replaces ZTestLogger usage in two tests with the new TestLogger.
- Marks a previously flaky test with TestAspect.flaky and adds a clarifying TODO comment.
Co-authored-by: Copilot <[email protected]>
Pull Request Overview
Copilot reviewed 1 out of 1 changed files in this pull request and generated 1 comment.
class TestLogger extends ZLogger[String, Unit] {
  val messages: ConcurrentLinkedQueue[String] = new ConcurrentLinkedQueue()
  override def apply(trace: Trace, fiberId: FiberId, logLevel: LogLevel, message: () => String, cause: Cause[Any], context: FiberRefs, spans: List[LogSpan], annotations: Map[String, String]): Unit =
    messages.add(message())
Copilot AI (Oct 16, 2025):
apply is declared to return Unit, but the body evaluates to Boolean (ConcurrentLinkedQueue.add returns Boolean), causing a type mismatch in Scala 3. Discard the Boolean result explicitly, e.g.: override def apply(...): Unit = { messages.add(message()); () }.
Suggested change:
-    messages.add(message())
+    { messages.add(message()); () }
Reply: incorrect, check again
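For context on this exchange: when a Scala method's declared result type is Unit, the compiler applies value discarding, so a body that evaluates to Boolean (such as ConcurrentLinkedQueue.add) still compiles in both Scala 2 and Scala 3; only a lint flag such as -Wvalue-discard would report it. A minimal standalone illustration, not code from this PR:

    import java.util.concurrent.ConcurrentLinkedQueue

    class Sink {
      val messages = new ConcurrentLinkedQueue[String]()
      // add returns Boolean; because the declared result type is Unit,
      // the Boolean value is silently discarded and this compiles as written
      def record(msg: String): Unit = messages.add(msg)
    }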
Pull Request Overview
Copilot reviewed 1 out of 1 changed files in this pull request and generated 3 comments.
  readResult1 <- r1.readState
Copilot AI (Oct 16, 2025):
Counting log messages directly from a ConcurrentLinkedQueue immediately after readState can race with late asynchronous log emissions, potentially reintroducing flakiness (only the first test is marked flaky). Consider synchronizing by awaiting completion of pending fibers or introducing a short drain/settle step (e.g., ZIO.sleep(...)) before snapshotting messages, or capturing log entries with a timestamp and filtering only those emitted before readState completes.
Suggested change:
+    // Allow time for any late log emissions to settle before snapshotting messages
+    _ <- ZIO.sleep(100.millis)
Reply: this is a very nice point; since I tested with nonFlaky I think we're good for now
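For reference, a minimal sketch of stress-running a test with TestAspect.nonFlaky, which repeats the test (100 times by default) and fails on the first failing run; the spec name, log line, and assertion below are placeholders, not the PR's actual test:

    import zio._
    import zio.test._

    object NonFlakyCheckSpec extends ZIOSpecDefault {
      def spec =
        test("log snapshot is stable") {
          for {
            _ <- ZIO.logInfo("memberId=MemberId(peer1) read pending heartbeat")
          } yield assertTrue(true) // placeholder assertion
        } @@ TestAspect.nonFlaky // repeat the test many times; any single failure fails it
    }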
  messages = testLogger.getMessages
  pendingHeartbeatLogCount = messages.count(_.contains("memberId=MemberId(peer1) read pending heartbeat"))
  pendingCommandLogCount = messages.count(_.contains("memberId=MemberId(peer1) read pending command"))
Copilot AI (Oct 16, 2025):
Counting log messages directly from a ConcurrentLinkedQueue immediately after readState can race with late asynchronous log emissions, potentially reintroducing flakiness (only the first test is marked flaky). Consider synchronizing by awaiting completion of pending fibers or introducing a short drain/settle step (e.g., ZIO.sleep(...)) before snapshotting messages, or capturing log entries with a timestamp and filtering only those emitted before readState completes.
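A hedged sketch of the timestamp-filtering alternative mentioned in the comment above: record a capture time with each message so the test can keep only entries emitted at or before a cutoff taken when readState completes. TimedLogger, messagesBefore, and the use of System.nanoTime are illustrative assumptions, not the PR's code:

    import java.util.concurrent.ConcurrentLinkedQueue
    import scala.jdk.CollectionConverters._
    import zio._

    class TimedLogger extends ZLogger[String, Unit] {
      // Each entry pairs a monotonic timestamp with the rendered message
      val entries: ConcurrentLinkedQueue[(Long, String)] = new ConcurrentLinkedQueue()

      override def apply(trace: Trace, fiberId: FiberId, logLevel: LogLevel, message: () => String,
                         cause: Cause[Any], context: FiberRefs, spans: List[LogSpan],
                         annotations: Map[String, String]): Unit = {
        entries.add((System.nanoTime(), message()))
        ()
      }

      // Keep only messages recorded at or before the cutoff (e.g. taken right after readState)
      def messagesBefore(cutoff: Long): List[String] =
        entries.asScala.toList.collect { case (t, msg) if t <= cutoff => msg }
    }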
class TestLogger extends ZLogger[String, Unit] {
  val messages: ConcurrentLinkedQueue[String] = new ConcurrentLinkedQueue()
  override def apply(trace: Trace, fiberId: FiberId, logLevel: LogLevel, message: () => String, cause: Cause[Any], context: FiberRefs, spans: List[LogSpan], annotations: Map[String, String]): Unit =
    messages.add(message())

  def getMessages: List[String] = messages.asScala.toList
Copilot AI (Oct 16, 2025):
The custom TestLogger discards useful metadata (logLevel, cause, spans, annotations), which can hinder future debugging or more granular assertions. Consider storing a structured case class (e.g., with level and cause) instead of only message(), or at least prefix messages with logLevel to preserve minimal context.
Suggested change:
-  class TestLogger extends ZLogger[String, Unit] {
-    val messages: ConcurrentLinkedQueue[String] = new ConcurrentLinkedQueue()
-    override def apply(trace: Trace, fiberId: FiberId, logLevel: LogLevel, message: () => String, cause: Cause[Any], context: FiberRefs, spans: List[LogSpan], annotations: Map[String, String]): Unit =
-      messages.add(message())
-    def getMessages: List[String] = messages.asScala.toList
+  case class TestLogEntry(
+    trace: Trace,
+    fiberId: FiberId,
+    logLevel: LogLevel,
+    message: String,
+    cause: Cause[Any],
+    context: FiberRefs,
+    spans: List[LogSpan],
+    annotations: Map[String, String]
+  )
+  class TestLogger extends ZLogger[String, Unit] {
+    val entries: ConcurrentLinkedQueue[TestLogEntry] = new ConcurrentLinkedQueue()
+    override def apply(
+      trace: Trace,
+      fiberId: FiberId,
+      logLevel: LogLevel,
+      message: () => String,
+      cause: Cause[Any],
+      context: FiberRefs,
+      spans: List[LogSpan],
+      annotations: Map[String, String]
+    ): Unit =
+      entries.add(
+        TestLogEntry(
+          trace,
+          fiberId,
+          logLevel,
+          message(),
+          cause,
+          context,
+          spans,
+          annotations
+        )
+      )
+    def getEntries: List[TestLogEntry] = entries.asScala.toList
+    def getMessages: List[String] = entries.asScala.toList.map(_.message)
One test was fixed by moving away from ZTestLogger, as ZTestLogger can cause duplicated log lines.
Also identified why another test is flaky and added a TODO comment plus TestAspect.flaky until we find a better way to test it.
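To make the end state concrete, here is a hedged sketch of how such a TestLogger might be installed for a single test via Runtime.addLogger; the wiring, spec name, log line, and assertion are assumptions for illustration, not the PR's actual test code:

    import java.util.concurrent.ConcurrentLinkedQueue
    import scala.jdk.CollectionConverters._
    import zio._
    import zio.test._

    object TestLoggerWiringSpec extends ZIOSpecDefault {

      // Same shape as the PR's TestLogger: capture only the rendered message text
      class TestLogger extends ZLogger[String, Unit] {
        val messages: ConcurrentLinkedQueue[String] = new ConcurrentLinkedQueue()
        override def apply(trace: Trace, fiberId: FiberId, logLevel: LogLevel, message: () => String,
                           cause: Cause[Any], context: FiberRefs, spans: List[LogSpan],
                           annotations: Map[String, String]): Unit = {
          messages.add(message())
          ()
        }
        def getMessages: List[String] = messages.asScala.toList
      }

      def spec =
        test("each log line is captured exactly once") {
          val testLogger = new TestLogger
          val effect =
            for {
              _       <- ZIO.logInfo("memberId=MemberId(peer1) read pending heartbeat")
              messages = testLogger.getMessages
            } yield assertTrue(messages.count(_.contains("read pending heartbeat")) == 1)
          // Assumption: install the logger locally by swapping out the default loggers;
          // the PR may wire its logger differently (e.g. via a shared layer or bootstrap)
          effect.provideLayer(Runtime.removeDefaultLoggers >>> Runtime.addLogger(testLogger))
        }
    }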