
Conversation

@kolipakakondal (Collaborator) commented Jul 18, 2025

Description

This PR adds the capability to capture and write OpenOCD debug output to a file, providing better debugging and logging capabilities.

Fixes IEP-1536

Type of change

  • Bug fix (non-breaking change which fixes an issue)

How has this been tested?

  • Open the Debug configuration
  • Navigate to the Common tab and choose an output file
  • Debug the application
  • Verify that the output is written to the selected file

Scenarios verified

  • Tested with existing directory paths
  • Tested with non-existent directory paths (auto-creation)
  • Tested with explicit file paths
  • Tested append

Test Configuration:

  • ESP-IDF Version: master
  • OS (Windows, Linux, or macOS): macOS

Dependent components impacted by this PR:

  • Debugging

Checklist

  • PR Self Reviewed
  • Applied Code formatting
  • Added Documentation
  • Added Unit Test
  • Verified on all platforms (Windows, Linux, and macOS)

Summary by CodeRabbit

  • New Features

    • Added the ability to optionally log process output streams to a file, with support for specifying the file name and append mode.
    • Introduced centralized management for log file writers to improve file handling and concurrency.
    • Enhanced documentation clarifying how directory paths for debug log files are handled, including automatic filename appending.
  • Bug Fixes

    • Improved error handling when retrieving output capture settings to prevent potential failures during process launch.

coderabbitai bot commented Jul 18, 2025

Walkthrough

The changes introduce the ability to optionally capture process output to a file, with support for appending or overwriting. This involves updating constructors and logic in IdfRuntimeProcess, StreamsProxy, and StreamListener to handle output file paths and append flags, as well as robust error handling and file path resolution. Documentation was also updated to clarify handling of directory paths for log files. Additionally, a new utility class LogFileWriterManager was added to manage shared PrintWriter instances with thread-safe access and lifecycle management, accompanied by comprehensive unit tests.
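The LogFileWriterManager described above can be sketched roughly as follows. This is a minimal illustration based only on this summary, not the PR's actual code; the class name matches the PR, but method signatures and the null-writer fallback are assumptions:

```java
import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;
import java.io.Writer;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of a shared, thread-safe log-writer cache in the spirit of LogFileWriterManager.
public class LogWriterCache {
	private static final Map<String, PrintWriter> WRITERS = new ConcurrentHashMap<>();

	// Returns a shared writer for the path, creating it at most once per path.
	public static PrintWriter getWriter(String path, boolean append) {
		return WRITERS.computeIfAbsent(path, p -> {
			try {
				return new PrintWriter(new FileWriter(p, append), true); // auto-flush each println
			} catch (IOException e) {
				// Fall back to a writer that discards output rather than failing the launch.
				return new PrintWriter(Writer.nullWriter());
			}
		});
	}

	// Closes and forgets the writer for one path.
	public static void closeWriter(String path) {
		PrintWriter w = WRITERS.remove(path);
		if (w != null) {
			w.close();
		}
	}
}
```

The key point is `computeIfAbsent`: concurrent callers asking for the same path get the same `PrintWriter` instance, so only one OS-level file handle is open per log file.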

Changes

File(s) Change Summary
.../process/IdfRuntimeProcess.java Refactored output capture logic in createStreamsProxy; added generic helper for safe attribute retrieval; now passes output file name and append flag to StreamsProxy. Added new private method and functional interface.
.../process/StreamListener.java Enhanced to support logging output to a file; updated constructor to accept output file and append flag; added file path resolution and error handling; updated stream processing to write to file if enabled.
.../process/monitors/StreamsProxy.java Updated constructor to accept output file and append flag; passes these to StreamListener. Added disposal call in kill().
docs/en/openocddebugging.rst Added note clarifying that directory paths for log files have "openocd.log" appended automatically.
.../core/logging/LogFileWriterManager.java Added new utility class to manage PrintWriter instances for log files with thread-safe creation, reuse, and disposal.
.../core/util/test/LogFileWriterManagerTest.java Added comprehensive JUnit tests for LogFileWriterManager covering writing, appending, concurrency, and lifecycle scenarios.

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant IdfRuntimeProcess
    participant StreamsProxy
    participant StreamListener
    participant FileSystem

    User->>IdfRuntimeProcess: Launch process with config
    IdfRuntimeProcess->>IdfRuntimeProcess: Retrieve outputFile and append attributes
    IdfRuntimeProcess->>StreamsProxy: Create StreamsProxy(process, charset, label, outputFile, append)
    StreamsProxy->>StreamListener: Create StreamListener(..., outputFile, append)
    StreamListener->>FileSystem: Resolve and open outputFile (if provided)
    StreamListener->>StreamListener: Process stream, write output to file (if enabled)

Estimated code review effort

2 (~25 minutes)

Suggested labels

macos, arm64

Suggested reviewers

  • kolipakakondal
  • AndriiFilippov

Poem

In the warren of code, a new path appears,
Streams now can write where the bunny steers.
With files to append or to start anew,
Output hops safely, with logging in view.
A nibble of logic, a byte of delight—
Debugging grows brighter, from morning to night! 🐇✨

coderabbitai bot left a comment

Actionable comments posted: 2

🧹 Nitpick comments (1)
bundles/com.espressif.idf.debug.gdbjtag.openocd/src/com/espressif/idf/debug/gdbjtag/openocd/dsf/process/StreamListener.java (1)

225-227: Consider making the default filename configurable.

The hardcoded "openocd.log" filename is reasonable but could be made configurable through a constant or system property for better flexibility.

+	private static final String DEFAULT_OUTPUT_FILENAME = "openocd.log";
+	
 	private static String resolveOutputFilePath(String outputFile) {
 		// ... existing code ...
 		if (file.isDirectory() || (!file.exists() && expanded.endsWith(File.separator))) {
 			// If it's a directory or ends with a separator, append openocd.log
-			file = new File(file, "openocd.log");
+			file = new File(file, DEFAULT_OUTPUT_FILENAME);
 		}
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between af57f2d and 755a916.

📒 Files selected for processing (3)
  • bundles/com.espressif.idf.debug.gdbjtag.openocd/src/com/espressif/idf/debug/gdbjtag/openocd/dsf/process/IdfRuntimeProcess.java (2 hunks)
  • bundles/com.espressif.idf.debug.gdbjtag.openocd/src/com/espressif/idf/debug/gdbjtag/openocd/dsf/process/StreamListener.java (4 hunks)
  • bundles/com.espressif.idf.debug.gdbjtag.openocd/src/com/espressif/idf/debug/gdbjtag/openocd/dsf/process/monitors/StreamsProxy.java (1 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (1)
bundles/com.espressif.idf.debug.gdbjtag.openocd/src/com/espressif/idf/debug/gdbjtag/openocd/dsf/process/StreamListener.java (1)
bundles/com.espressif.idf.core/src/com/espressif/idf/core/logging/Logger.java (1)
  • Logger (21-137)
🔇 Additional comments (15)
bundles/com.espressif.idf.debug.gdbjtag.openocd/src/com/espressif/idf/debug/gdbjtag/openocd/dsf/process/monitors/StreamsProxy.java (3)

65-65: LGTM: Constructor signature properly updated.

The constructor signature correctly adds the new outputFile and append parameters to support file output functionality.


72-72: LGTM: Parameters properly passed to StreamListener.

The new parameters are correctly passed to the StreamListener constructor, maintaining the proper flow of configuration data.


65-72: LGTM: Clean parameter delegation.

The constructor signature update properly delegates the new outputFile and append parameters to the StreamListener constructor. The implementation is clean and maintains the existing functionality while extending it.

bundles/com.espressif.idf.debug.gdbjtag.openocd/src/com/espressif/idf/debug/gdbjtag/openocd/dsf/process/IdfRuntimeProcess.java (5)

64-68: LGTM: Safe attribute retrieval with proper defaults.

The implementation correctly uses the getAttributeSafe helper method to retrieve configuration attributes with appropriate default values ("" for file path, false for append mode).


73-84: LGTM: Well-designed generic error handling.

The getAttributeSafe method with the functional interface provides a clean, reusable pattern for safely retrieving launch configuration attributes with proper exception handling and logging.


69-69: LGTM: Constructor call properly updated.

The StreamsProxy constructor is correctly updated to include the new outputFileName and append parameters, maintaining consistency with the updated class signature.


64-68: LGTM: Safe attribute retrieval with proper defaults.

The implementation correctly uses Eclipse Common tab attributes (IDebugUIConstants.ATTR_CAPTURE_IN_FILE and IDebugUIConstants.ATTR_APPEND_TO_FILE) with appropriate default values (empty string for file, false for append). This integrates well with the Eclipse debugging framework.


73-90: LGTM: Excellent generic helper method design.

The getAttributeSafe method with the AttributeGetter functional interface is a clean, reusable pattern for safely retrieving launch configuration attributes. The generic type parameter and proper exception handling with logging make this a robust solution.

bundles/com.espressif.idf.debug.gdbjtag.openocd/src/com/espressif/idf/debug/gdbjtag/openocd/dsf/process/StreamListener.java (7)

59-73: LGTM: Constructor properly handles file initialization.

The constructor correctly initializes the file writer with proper error handling. The approach of resolving the path first and then creating the writer with append mode is sound.


131-133: LGTM: File writing integrated correctly.

The file writing logic is properly integrated into the stream processing with appropriate null checks. Using println ensures proper line termination.


208-212: LGTM: Proper resource cleanup.

The dispose method correctly closes the file writer and sets it to null, ensuring proper resource management and preventing resource leaks.


225-228: Default log filename “openocd.log” is safe to use
No other occurrences of “openocd.log” were found in the codebase, so there’s no risk of unexpected overrides or conflicts. This default choice aligns with common OpenOCD logging conventions and can be approved as-is.


59-73: LGTM: Proper file initialization with error handling.

The constructor correctly initializes the file writer with proper error handling. The file path resolution, writer creation with append mode, and exception logging are all handled appropriately.


131-133: LGTM: Clean file writing integration.

The file writing logic is properly integrated into the existing stream processing flow. The null checks ensure safe operation and the PrintWriter with auto-flush provides reliable output.


208-211: LGTM: Proper resource cleanup.

The dispose method correctly closes and nullifies the file writer, ensuring proper resource cleanup and preventing resource leaks.

Comment on lines 217 to 241
private static String resolveOutputFilePath(String outputFile) {
	if (outputFile == null || outputFile.isEmpty()) {
		return null;
	}
	try {
		// Expand Eclipse variables (e.g., ${workspace_loc:...})
		String expanded = VariablesPlugin.getDefault().getStringVariableManager().performStringSubstitution(outputFile);
		File file = new File(expanded);
		if (file.isDirectory() || (!file.exists() && expanded.endsWith(File.separator))) {
			// If it's a directory or ends with a separator, append openocd.log
			file = new File(file, "openocd.log");
		}
		File parent = file.getParentFile();
		if (parent != null && !parent.exists()) {
			parent.mkdirs();
		}
		if (!file.exists()) {
			file.createNewFile();
		}
		return file.getAbsolutePath();
	} catch (Exception e) {
		Logger.log(e);
		return null;
	}
}

🛠️ Refactor suggestion

Review file path resolution for security and robustness.

The path resolution logic handles variable expansion and directory creation, but there are some security and robustness concerns:

  1. Directory traversal vulnerability: The method doesn't validate against directory traversal attacks (e.g., ../../../etc/passwd).
  2. Broad exception handling: Catching all Exception types may hide specific issues.
  3. File creation without validation: No checks for file permissions or disk space.

Consider adding security validation:

 private static String resolveOutputFilePath(String outputFile) {
 	if (outputFile == null || outputFile.isEmpty()) {
 		return null;
 	}
 	try {
 		// Expand Eclipse variables (e.g., ${workspace_loc:...})
 		String expanded = VariablesPlugin.getDefault().getStringVariableManager().performStringSubstitution(outputFile);
+		
+		// Validate path to prevent directory traversal
+		File canonicalFile = new File(expanded).getCanonicalFile();
+		String canonicalPath = canonicalFile.getCanonicalPath();
+		
 		File file = new File(expanded);
 		if (file.isDirectory() || (!file.exists() && expanded.endsWith(File.separator))) {
 			// If it's a directory or ends with a separator, append openocd.log
 			file = new File(file, "openocd.log");
 		}
+		
+		// Validate the final path is within expected bounds
+		File finalCanonical = file.getCanonicalFile();
+		
 		File parent = file.getParentFile();
 		if (parent != null && !parent.exists()) {
 			parent.mkdirs();
 		}
 		if (!file.exists()) {
 			file.createNewFile();
 		}
 		return file.getAbsolutePath();
-	} catch (Exception e) {
+	} catch (IOException | SecurityException e) {
 		Logger.log(e);
 		return null;
 	}
 }
🤖 Prompt for AI Agents
In
bundles/com.espressif.idf.debug.gdbjtag.openocd/src/com/espressif/idf/debug/gdbjtag/openocd/dsf/process/StreamListener.java
between lines 217 and 241, enhance the resolveOutputFilePath method by adding
validation to prevent directory traversal attacks, such as normalizing the path
and ensuring it stays within an allowed base directory. Replace the broad
Exception catch with more specific exceptions to handle expected errors
explicitly. Before creating the file, add checks for write permissions and
available disk space to avoid failures or security issues.
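The containment check the prompt asks for could look roughly like this. It is a sketch under assumptions: the PR does not define an allowed base directory, so the choice of base (e.g. the workspace root) and the `isWithin` helper name are illustrative:

```java
import java.io.File;
import java.io.IOException;

public class PathGuard {
	// Resolve both paths to canonical form (symlinks and ".." removed) and
	// reject candidates that escape the base directory.
	public static boolean isWithin(File base, File candidate) throws IOException {
		String basePath = base.getCanonicalPath() + File.separator;
		String candidatePath = candidate.getCanonicalPath();
		return candidatePath.startsWith(basePath);
	}
}
```

The trailing separator on the base path matters: without it, a sibling directory such as `/workspace-evil` would incorrectly pass a check against `/workspace`.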

💡 Verification agent

🧩 Analysis chain

Consider thread safety for concurrent file operations.

The resolveOutputFilePath method handles variable expansion, directory creation, and file creation well. However, consider potential race conditions if multiple processes attempt to create the same directory or file simultaneously.

Additionally, the method could benefit from more specific exception handling:

-		} catch (Exception e) {
+		} catch (IOException e) {
+			Logger.log(e);
+			return null;
+		} catch (CoreException e) {
+			Logger.log(e);
+			return null;
+		} catch (Exception e) {
 			Logger.log(e);
 			return null;
 		}

🏁 Script executed:

#!/bin/bash
# Description: Check if there are multiple instances of StreamListener that could create race conditions
# Expected: Find usage patterns to assess concurrent access risks

echo "=== Searching for StreamListener instantiation patterns ==="
rg -A 10 -B 5 "new StreamListener" 

echo "=== Checking for concurrent process creation ==="
rg -A 5 -B 5 "StreamsProxy.*new.*StreamListener"

echo "=== Verifying if launch configurations can be shared ==="
rg -A 10 "ILaunch.*getAttribute.*CAPTURE_IN_FILE"

Length of output: 3363


Refine exception handling and review concurrency in resolveOutputFilePath

  • In bundles/com.espressif.idf.debug.gdbjtag.openocd/src/com/espressif/idf/debug/gdbjtag/openocd/dsf/process/StreamListener.java (lines 217–241), replace the broad catch block with more specific handlers:

    	private static String resolveOutputFilePath(String outputFile) {
    		…
    		try {
    			…
    		} catch (IOException e) {
    			Logger.log(e);
    			return null;
    		} catch (CoreException e) {
    			Logger.log(e);
    			return null;
    		} catch (Exception e) {
    			Logger.log(e);
    			return null;
    		}
    	}
  • Although OS‐level mkdirs() and createNewFile() are generally safe to call concurrently, if resolveOutputFilePath can be invoked in parallel (e.g., by multiple StreamListener instances in StreamsProxy.java), review whether you need stronger guarantees:

    • Use Files.createDirectories(parent.toPath()) and Files.createFile(path) for atomic directory/file creation.
    • Synchronize access to this method or employ a file lock when targeting the same output path across threads.
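The NIO-based alternative in the second bullet could be sketched as follows; the `ensureLogFile` name is illustrative, not from the PR. Both calls are safe to race: `createDirectories` is a no-op for an existing tree, and a concurrent `createFile` loser simply catches the benign exception:

```java
import java.io.IOException;
import java.nio.file.FileAlreadyExistsException;
import java.nio.file.Files;
import java.nio.file.Path;

public class LogFileCreator {
	// Creates the parent directories and the log file atomically via NIO,
	// tolerating concurrent creation by another thread or process.
	public static Path ensureLogFile(Path path) throws IOException {
		Path parent = path.getParent();
		if (parent != null) {
			Files.createDirectories(parent); // no-op if the directories already exist
		}
		try {
			Files.createFile(path); // atomic: throws if the file already exists
		} catch (FileAlreadyExistsException e) {
			// Another thread/process created it first — fine for logging purposes.
		}
		return path.toAbsolutePath();
	}
}
```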
🤖 Prompt for AI Agents
In
bundles/com.espressif.idf.debug.gdbjtag.openocd/src/com/espressif/idf/debug/gdbjtag/openocd/dsf/process/StreamListener.java
around lines 217 to 241, replace the broad Exception catch with specific
exception types to improve error handling clarity. Change directory and file
creation calls to use Files.createDirectories(parent.toPath()) and
Files.createFile(path) for atomic operations. Additionally, if this method can
be called concurrently for the same output path, add synchronization or file
locking to prevent race conditions during directory and file creation.

@sigmaaa (Collaborator) left a comment

Hi @kolipakakondal, thanks for the PR — LGTM overall! I've added a few suggestions on how we can make the StreamListener changes a bit more optimized by avoiding some conditional checks and using AutoCloseable to improve maintainability.

@sigmaaa (Collaborator) commented Jul 21, 2025

Pushed a commit with minor improvements to the StreamListener class: made fileWriter AutoCloseable, avoided conditional checks by using nullWriter, and applied formatting.
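The nullWriter idea can be illustrated like this; the class and method names are hypothetical, not the actual StreamListener code. Instead of a nullable writer guarded by `if (fileWriter != null)` at every write site, the listener always holds a non-null writer that may simply discard output:

```java
import java.io.PrintWriter;
import java.io.Writer;

public class NullWriterDemo {
	// Always non-null: either the configured file writer or a discarding one.
	private final PrintWriter fileWriter;

	public NullWriterDemo(PrintWriter configured) {
		this.fileWriter = (configured != null) ? configured : new PrintWriter(Writer.nullWriter());
	}

	public void onLine(String line) {
		fileWriter.println(line); // safe whether or not a file was configured
	}
}
```

`Writer.nullWriter()` (available since Java 11) returns a writer whose write methods do nothing, which removes the per-line branching from the hot stream-processing path.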

@AndriiFilippov (Collaborator) commented Jul 21, 2025

@kolipakakondal

Tested under:
OS - Windows 11 / Mac arm64 / Linux Ubuntu 24.04
ESP-IDF: v5.4.2
Boards: H2 / S3

Windows 11: everything works fine, I only have a few follow-up questions:

  1. If I want to delete the file after the debug process has stopped, I can't, because a pop-up says the file is still in use by an external process. Something might be wrong here: I checked Task Manager for active OpenOCD or GDB processes and don't see any that could be keeping the file locked after the debug session finishes.

Logs from running and stopping a debug session and then trying to delete the log file: log.txt

  2. I like the logic for the case when the user selects the workspace to store the log file: the generated file then gets a .log extension. But when the location is chosen via File System..., the log file has just a name and Type: File, and there is no option to select a type. Wouldn't it be clearer to add a .log extension to any selected file, so both cases share the same structure?

Append option works as expected ✅

Mac arm64 / Linux Ubuntu 24.04

  1. The last line of the debug session output is the only one being written to the log file.
    Logs from opening the openocd.log file after the debug session is stopped: log_mac_arm64.txt

but with the Append option selected, the openocd.log file contains the full debug logs and everything works as expected ✅

The documentation is clear enough. ✅

@sigmaaa (Collaborator) commented Jul 23, 2025

(Quoting @AndriiFilippov's test report above.)

The first issue is fixed.
The second one can be fixed, but it would require rewriting a lot of the source code, and I don't think it's worth it, since it's the general Eclipse behavior—it works the same way in other configurations that use the Common tab.

The last line of the debug session output is the only one being written to the log file.
Here are the logs for the case when I open the openocd.log file after the debug session is stopped:
log_mac_arm64.txt

I'm currently working on this issue. After taking a second look at the code, I noticed it's not optimized—we're recreating the FileWriter on each stream update.
Moreover, the dispose and kill methods are not being triggered after the process is terminated.

@sigmaaa (Collaborator) commented Jul 23, 2025

Hi @kolipakakondal, @alirana01, @AndriiFilippov,
I’ve added a commit that optimizes the logic to avoid creating multiple FileWriter instances and ensures correct behavior when appending is disabled. Please take a look

coderabbitai bot left a comment

Actionable comments posted: 5

🧹 Nitpick comments (8)
bundles/com.espressif.idf.core/src/com/espressif/idf/core/logging/LogFileWriterManager.java (6)

19-19: Consider using ConcurrentHashMap with bounded size.

The static map could potentially grow indefinitely if many different file paths are used without proper cleanup, leading to memory leaks.

Consider implementing a bounded cache or adding monitoring for map size:

-private static final Map<String, PrintWriter> writers = new ConcurrentHashMap<>();
+private static final Map<String, PrintWriter> writers = new ConcurrentHashMap<>();
+// Consider adding: private static final int MAX_WRITERS = 100;

41-44: Remove redundant file existence check.

The file.exists() check is unnecessary since createNewFile() is atomic and returns false if the file already exists.

-if (!file.exists())
-{
-    file.createNewFile();
-}
+file.createNewFile(); // Returns false if file already exists, which is fine

47-51: Consider adding debug logging for fallback to null writer.

While the exception handling is correct, it might be helpful to log when falling back to null writer for debugging purposes.

catch (IOException e)
{
    Logger.log(e);
+   // Consider adding debug log: "Falling back to null writer for path: " + p
    return new PrintWriter(Writer.nullWriter());
}

62-62: Consider making session end message localizable.

The session end message is hardcoded in English. Consider using a localizable message for better internationalization support.

-writer.println("=== Session ended at " + LocalDateTime.now() + " ==="); //$NON-NLS-1$ //$NON-NLS-2$
+// Consider using Messages.getString() or similar localization mechanism
+writer.println("=== Session ended at " + LocalDateTime.now() + " ==="); //$NON-NLS-1$ //$NON-NLS-2$

67-74: Consider consistency with session end timestamps.

The closeAll() method doesn't add session end timestamps like closeWriter() does. This inconsistency might be intentional for cleanup scenarios, but should be documented or made consistent.

public static void closeAll()
{
    for (PrintWriter writer : writers.values())
    {
+       // Consider adding session end timestamp for consistency:
+       // writer.println("=== Session ended at " + LocalDateTime.now() + " ===");
        writer.close();
    }
    writers.clear();
}

67-75: Potential resource leak in closeAll method.

While the method closes all writers, it doesn't write session end messages like closeWriter does. This inconsistency could make logs harder to analyze.

Consider adding session end messages for consistency:

public static void closeAll()
{
    for (PrintWriter writer : writers.values())
    {
+       writer.println("=== Session ended at " + LocalDateTime.now() + " ===");
        writer.close();
    }
    writers.clear();
}
tests/com.espressif.idf.core.test/src/com/espressif/idf/core/util/test/LogFileWriterManagerTest.java (2)

152-155: Good use of synchronization, but consider more comprehensive verification.

The synchronized block correctly ensures thread-safe writes to the shared PrintWriter instance. However, the test could be more thorough in verifying concurrent behavior.

Consider adding verification for the total number of lines written:

String content = Files.readString(logFile);
assertTrue(content.contains("Line 0")); //$NON-NLS-1$
assertTrue(content.contains("Line 49")); //$NON-NLS-1$
+// Verify total lines written (5 tasks × 50 lines each = 250 lines)
+long lineCount = content.lines().count();
+assertTrue(lineCount >= 250, "Expected at least 250 lines, got " + lineCount);

142-177: Consider adding tests for additional edge cases.

The current test coverage is good, but consider adding tests for:

  1. Invalid file paths (e.g., paths with invalid characters)
  2. Directory creation scenarios (verify parent directories are created)
  3. Session end timestamp verification (check that closeWriter adds the timestamp)

Example test for directory creation:

@Test
void testDirectoryCreation() throws IOException {
    Path nestedPath = tempDir.resolve("nested/dir/test.log");
    PrintWriter writer = LogFileWriterManager.getWriter(nestedPath.toString(), false);
    writer.println("Test content");
    writer.flush();
    
    assertTrue(Files.exists(nestedPath));
    assertTrue(Files.exists(nestedPath.getParent()));
}
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 9cd8d24 and 39609fc.

📒 Files selected for processing (5)
  • bundles/com.espressif.idf.core/src/com/espressif/idf/core/logging/LogFileWriterManager.java (1 hunks)
  • bundles/com.espressif.idf.debug.gdbjtag.openocd/src/com/espressif/idf/debug/gdbjtag/openocd/dsf/process/IdfRuntimeProcess.java (2 hunks)
  • bundles/com.espressif.idf.debug.gdbjtag.openocd/src/com/espressif/idf/debug/gdbjtag/openocd/dsf/process/StreamListener.java (8 hunks)
  • bundles/com.espressif.idf.debug.gdbjtag.openocd/src/com/espressif/idf/debug/gdbjtag/openocd/dsf/process/monitors/StreamsProxy.java (3 hunks)
  • tests/com.espressif.idf.core.test/src/com/espressif/idf/core/util/test/LogFileWriterManagerTest.java (1 hunks)
🧬 Code Graph Analysis (1)
bundles/com.espressif.idf.core/src/com/espressif/idf/core/logging/LogFileWriterManager.java (1)
bundles/com.espressif.idf.core/src/com/espressif/idf/core/logging/Logger.java (1)
  • Logger (21-137)
🚧 Files skipped from review as they are similar to previous changes (3)
  • bundles/com.espressif.idf.debug.gdbjtag.openocd/src/com/espressif/idf/debug/gdbjtag/openocd/dsf/process/monitors/StreamsProxy.java
  • bundles/com.espressif.idf.debug.gdbjtag.openocd/src/com/espressif/idf/debug/gdbjtag/openocd/dsf/process/IdfRuntimeProcess.java
  • bundles/com.espressif.idf.debug.gdbjtag.openocd/src/com/espressif/idf/debug/gdbjtag/openocd/dsf/process/StreamListener.java
🧰 Additional context used
🧬 Code Graph Analysis (1)
bundles/com.espressif.idf.core/src/com/espressif/idf/core/logging/LogFileWriterManager.java (1)
bundles/com.espressif.idf.core/src/com/espressif/idf/core/logging/Logger.java (1)
  • Logger (21-137)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: build_macos
  • GitHub Check: build
🔇 Additional comments (7)
tests/com.espressif.idf.core.test/src/com/espressif/idf/core/util/test/LogFileWriterManagerTest.java (5)

25-43: Well-structured test class with proper lifecycle management.

The test class follows good practices with temporary directory management, proper setup/teardown, and comprehensive test coverage.


25-43: Well-structured test class with proper setup.

Good use of JUnit 5 features like @TempDir for test isolation and proper cleanup in tearDown() method to prevent test interference.


106-112: Test exposes the append parameter caching issue.

This test verifies that the same writer instance is returned for the same path, but it doesn't consider that both calls use append=true. This test would fail to catch the issue I identified in the main class where different append values for the same path return the same writer.

Add a test to verify append parameter behavior:

@Test
void testDifferentAppendModesForSamePath()
{
    PrintWriter writer1 = LogFileWriterManager.getWriter(logFile.toString(), true);
    PrintWriter writer2 = LogFileWriterManager.getWriter(logFile.toString(), false);
    
    // These should be different instances since append modes differ
    assertNotSame(writer1, writer2, "Writers with different append modes should be different instances");
}

142-178: Excellent thread safety test with proper synchronization.

The concurrent access test is well-designed and properly uses synchronization on the writer object to prevent race conditions during writing. The test validates that concurrent access works correctly.
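The synchronization pattern that test relies on can be reduced to a standalone sketch (thread and line counts here are arbitrary; `StringWriter` stands in for the log file):

```java
import java.io.PrintWriter;
import java.io.StringWriter;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class SynchronizedWriterDemo {
    public static void main(String[] args) throws InterruptedException {
        StringWriter sink = new StringWriter();
        PrintWriter writer = new PrintWriter(sink, true);
        ExecutorService pool = Executors.newFixedThreadPool(5);
        for (int t = 0; t < 5; t++) {
            final int task = t;
            pool.submit(() -> {
                for (int i = 0; i < 50; i++) {
                    synchronized (writer) { // one println at a time, so lines never interleave
                        writer.println("task-" + task + " line-" + i);
                    }
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        long lines = sink.toString().lines().count();
        System.out.println(lines); // prints 250 (5 tasks x 50 lines)
    }
}
```

Because every `println` is guarded by the same monitor, the line count is deterministic even though the ordering of lines is not.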


93-104: Good coverage of null and empty path handling.

The test properly verifies that null and empty paths return non-null writers that don't throw exceptions when used, which matches the nullWriter behavior expected.

bundles/com.espressif.idf.core/src/com/espressif/idf/core/logging/LogFileWriterManager.java (2)

17-24: Well-designed utility class with proper encapsulation.

The class follows the utility pattern correctly with a private constructor to prevent instantiation and uses a thread-safe ConcurrentHashMap for managing writers.


55-65: Excellent session logging in closeWriter method.

The addition of a session end timestamp when closing writers is a thoughtful touch that will help with debugging and log analysis.

Comment on lines +25 to +53
public static PrintWriter getWriter(String path, boolean append)
{
    if (path == null || path.isEmpty())
    {
        return new PrintWriter(Writer.nullWriter());
    }

    return writers.computeIfAbsent(path, p -> {
        try
        {
            File file = new File(p);
            File parent = file.getParentFile();
            if (parent != null && !parent.exists())
            {
                parent.mkdirs();
            }
            if (!file.exists())
            {
                file.createNewFile();
            }
            return new PrintWriter(new BufferedWriter(new FileWriter(file, append)), true);
        }
        catch (IOException e)
        {
            Logger.log(e);
            return new PrintWriter(Writer.nullWriter());
        }
    });
}

⚠️ Potential issue

Fix append mode handling for existing writers.

The current implementation ignores the append parameter for existing writers. If a writer already exists for a path, the append mode from the first call is used for all subsequent calls, which could lead to unexpected behavior.

Consider this scenario:

  1. First call: getWriter("file.log", false) - creates writer in overwrite mode
  2. Second call: getWriter("file.log", true) - returns same writer, still in overwrite mode

Apply this refactor to include append mode in the key:

-return writers.computeIfAbsent(path, p -> {
+String key = path + ":" + append;
+return writers.computeIfAbsent(key, k -> {

Also update the closeWriter method accordingly:

-public static void closeWriter(String path)
+public static void closeWriter(String path, boolean append)
{
-    if (path == null || path.isEmpty())
-        return;
-    PrintWriter writer = writers.remove(path);
+    if (path == null || path.isEmpty())
+        return;
+    String key = path + ":" + append;
+    PrintWriter writer = writers.remove(key);

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In
bundles/com.espressif.idf.core/src/com/espressif/idf/core/logging/LogFileWriterManager.java
between lines 25 and 53, the getWriter method currently uses only the file path
as the key in the writers map, ignoring the append parameter for existing
writers. To fix this, modify the key to combine both the path and append mode
(e.g., a composite key or a string concatenation) so that writers with different
append modes are stored separately. Also update the closeWriter method to accept
and use this new key format to correctly close the intended writer.

⚠️ Potential issue

Critical issue: Append parameter is ignored for existing writers.

The computeIfAbsent method only creates a new writer if the path doesn't exist in the map. However, the append parameter is only considered during writer creation, not when retrieving an existing writer. This means:

  1. If a writer is requested with append=false, it's created and cached
  2. Subsequent requests with append=true for the same path will return the same writer (which was created with append=false)

This could lead to unexpected behavior where the append mode doesn't match the caller's intention.

Consider this approach to ensure append behavior is consistent:

-public static PrintWriter getWriter(String path, boolean append)
+public static PrintWriter getWriter(String path, boolean append)
{
    if (path == null || path.isEmpty())
    {
        return new PrintWriter(Writer.nullWriter());
    }

-   return writers.computeIfAbsent(path, p -> {
+   // Create a composite key that includes append mode
+   String key = path + "|append=" + append;
+   return writers.computeIfAbsent(key, k -> {
        try
        {
-           File file = new File(p);
+           File file = new File(path);
            File parent = file.getParentFile();
            if (parent != null && !parent.exists())
            {
                parent.mkdirs();
            }
            if (!file.exists())
            {
                file.createNewFile();
            }
            return new PrintWriter(new BufferedWriter(new FileWriter(file, append)), true);
        }
        catch (IOException e)
        {
            Logger.log(e);
            return new PrintWriter(Writer.nullWriter());
        }
    });
}

And update the closeWriter method accordingly:

public static void closeWriter(String path)
{
    if (path == null || path.isEmpty())
        return;
-   PrintWriter writer = writers.remove(path);
+   // Remove both append modes for the path
+   PrintWriter writer1 = writers.remove(path + "|append=true");
+   PrintWriter writer2 = writers.remove(path + "|append=false");
+   PrintWriter writer = writer1 != null ? writer1 : writer2;
    if (writer != null)
    {
        writer.println("=== Session ended at " + LocalDateTime.now() + " ===");
        writer.close();
    }
}

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In
bundles/com.espressif.idf.core/src/com/espressif/idf/core/logging/LogFileWriterManager.java
between lines 25 and 53, the getWriter method ignores the append parameter for
existing writers because it caches writers by path only. To fix this, modify the
caching key to include both path and append mode (e.g., use a composite key or a
map keyed by path and append flag) so that writers with different append modes
are stored separately. Update the closeWriter method to remove the writer from
the cache using the same composite key to ensure proper cleanup.
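To make the composite-key idea above concrete, here is a minimal sketch of per-path-and-mode caching. Class and method names are hypothetical, and `StringWriter` stands in for the real file writer; only the keying logic mirrors the suggestion:

```java
import java.io.PrintWriter;
import java.io.StringWriter;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class CompositeKeyWriterCache {
    // Writers are keyed by path AND append mode, so the two modes never collide
    private static final Map<String, PrintWriter> writers = new ConcurrentHashMap<>();

    static String key(String path, boolean append) {
        return path + "|append=" + append;
    }

    static PrintWriter getWriter(String path, boolean append) {
        return writers.computeIfAbsent(key(path, append),
                k -> new PrintWriter(new StringWriter(), true));
    }

    static void closeWriter(String path) {
        // Remove and close both append variants for the path
        for (boolean append : new boolean[] { true, false }) {
            PrintWriter w = writers.remove(key(path, append));
            if (w != null) {
                w.close();
            }
        }
    }

    public static void main(String[] args) {
        PrintWriter a = getWriter("debug.log", true);
        PrintWriter b = getWriter("debug.log", true);
        PrintWriter c = getWriter("debug.log", false);
        System.out.println(a == b); // prints true  (same key -> same cached instance)
        System.out.println(a == c); // prints false (different append mode -> different instance)
    }
}
```

With this keying, a later `getWriter(path, true)` can no longer silently receive a writer that was opened in overwrite mode.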

File parent = file.getParentFile();
if (parent != null && !parent.exists())
{
parent.mkdirs();

🛠️ Refactor suggestion

Potential security issue: Directory creation without validation.

The mkdirs() call could create directories in unintended locations if the path is malicious or contains path traversal sequences (e.g., ../../../etc/passwd).

Add path validation before creating directories:

if (parent != null && !parent.exists())
{
+   // Validate parent path to prevent directory traversal attacks
+   String canonicalParent = parent.getCanonicalPath();
+   String expectedParent = parent.getAbsolutePath();
+   if (!canonicalParent.equals(expectedParent))
+   {
+       throw new IOException("Invalid path detected: " + path);
+   }
    parent.mkdirs();
}

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In
bundles/com.espressif.idf.core/src/com/espressif/idf/core/logging/LogFileWriterManager.java
at line 39, the call to parent.mkdirs() creates directories without validating
the path, which can lead to security risks like path traversal. Before calling
mkdirs(), add validation to ensure the directory path is within an allowed base
directory and does not contain any traversal sequences or invalid characters.
Reject or sanitize paths that do not meet these criteria to prevent creating
directories in unintended locations.
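One way to implement the containment check that prompt describes is to normalize the resolved path and verify it stays under a known base directory. This is a sketch under the assumption that log files are confined to such a base; `isWithinBase` is a hypothetical helper, not part of the plugin:

```java
import java.nio.file.Path;
import java.nio.file.Paths;

public class LogPathValidator {
    // Returns true only if candidate, resolved against baseDir, stays inside baseDir
    static boolean isWithinBase(Path baseDir, String candidate) {
        Path base = baseDir.toAbsolutePath().normalize();
        Path target = base.resolve(candidate).normalize();
        return target.startsWith(base);
    }

    public static void main(String[] args) {
        Path base = Paths.get("logs");
        System.out.println(isWithinBase(base, "openocd/debug.log")); // prints true
        System.out.println(isWithinBase(base, "../../etc/passwd"));  // prints false
    }
}
```

Normalizing before the `startsWith` comparison is what defeats `..` traversal segments; comparing raw strings would not.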

Comment on lines +72 to +90
@Test
void testAppendModeFalseCreatesNewFile() throws IOException
{
    PrintWriter writer1 = LogFileWriterManager.getWriter(logFile.toString(), false);
    writer1.println("Initial Line"); //$NON-NLS-1$
    writer1.flush();

    // Manually close and remove
    LogFileWriterManager.closeWriter(logFile.toString());

    PrintWriter writer2 = LogFileWriterManager.getWriter(logFile.toString(), false);
    writer2.println("New Line"); //$NON-NLS-1$
    writer2.flush();

    String content = Files.readString(logFile);
    assertTrue(content.contains("Initial Line") || content.contains("New Line")); //$NON-NLS-1$ //$NON-NLS-2$
    // NOTE: This test doesn't guarantee truncation unless we reimplement logic
    // to forcibly truncate when append=false
}

⚠️ Potential issue

Test reveals the append mode issue in LogFileWriterManager.

This test acknowledges that truncation behavior isn't guaranteed when append=false, which confirms the issue I identified in the main class where append mode is only applied during writer creation.

The weak assertion on line 87 masks the actual problem:

-assertTrue(content.contains("Initial Line") || content.contains("New Line")); //$NON-NLS-1$ //$NON-NLS-2$
-// NOTE: This test doesn't guarantee truncation unless we reimplement logic
-// to forcibly truncate when append=false
+// This test should verify proper truncation behavior
+assertTrue(content.contains("New Line")); //$NON-NLS-1$
+assertFalse(content.contains("Initial Line")); //$NON-NLS-1$

However, this test will fail until the append mode handling is fixed in LogFileWriterManager.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
@Test
void testAppendModeFalseCreatesNewFile() throws IOException
{
    PrintWriter writer1 = LogFileWriterManager.getWriter(logFile.toString(), false);
    writer1.println("Initial Line"); //$NON-NLS-1$
    writer1.flush();
    // Manually close and remove
    LogFileWriterManager.closeWriter(logFile.toString());
    PrintWriter writer2 = LogFileWriterManager.getWriter(logFile.toString(), false);
    writer2.println("New Line"); //$NON-NLS-1$
    writer2.flush();
    String content = Files.readString(logFile);
    assertTrue(content.contains("Initial Line") || content.contains("New Line")); //$NON-NLS-1$ //$NON-NLS-2$
    // NOTE: This test doesn't guarantee truncation unless we reimplement logic
    // to forcibly truncate when append=false
}
@Test
void testAppendModeFalseCreatesNewFile() throws IOException
{
    PrintWriter writer1 = LogFileWriterManager.getWriter(logFile.toString(), false);
    writer1.println("Initial Line"); //$NON-NLS-1$
    writer1.flush();
    // Manually close and remove
    LogFileWriterManager.closeWriter(logFile.toString());
    PrintWriter writer2 = LogFileWriterManager.getWriter(logFile.toString(), false);
    writer2.println("New Line"); //$NON-NLS-1$
    writer2.flush();
    String content = Files.readString(logFile);
    // This test should verify proper truncation behavior
    assertTrue(content.contains("New Line")); //$NON-NLS-1$
    assertFalse(content.contains("Initial Line")); //$NON-NLS-1$
}
🤖 Prompt for AI Agents
In
tests/com.espressif.idf.core.test/src/com/espressif/idf/core/util/test/LogFileWriterManagerTest.java
around lines 72 to 90, the test does not guarantee that the file is truncated
when append mode is false, masking the issue in LogFileWriterManager where
append mode is only applied during writer creation. To fix this, update
LogFileWriterManager to ensure that when append is false, the file is properly
truncated or overwritten before creating the writer, so the test can reliably
verify that old content is removed and only new content is written.

⚠️ Potential issue

Test logic issue: Append mode behavior not properly tested.

The test testAppendModeFalseCreatesNewFile has a misleading comment and doesn't actually test what it claims. The test:

  1. Creates a writer with append=false
  2. Closes the writer manually
  3. Creates a new writer with append=false
  4. Checks if content contains either line (not guaranteeing overwrite behavior)

This doesn't test the actual append=false behavior because the same writer instance is being reused due to the caching mechanism.

The test should verify that append=false actually overwrites the file:

@Test
-void testAppendModeFalseCreatesNewFile() throws IOException
+void testAppendModeFalseOverwritesFile() throws IOException
{
-   PrintWriter writer1 = LogFileWriterManager.getWriter(logFile.toString(), false);
+   // First, write some content with append=true
+   PrintWriter writer1 = LogFileWriterManager.getWriter(logFile.toString(), true);
    writer1.println("Initial Line");
    writer1.flush();
+   LogFileWriterManager.closeWriter(logFile.toString());

-   // Manually close and remove
-   LogFileWriterManager.closeWriter(logFile.toString());
-
-   PrintWriter writer2 = LogFileWriterManager.getWriter(logFile.toString(), false);
+   // Now create a writer with append=false - should overwrite
+   PrintWriter writer2 = LogFileWriterManager.getWriter(logFile.toString(), false);
    writer2.println("New Line");
    writer2.flush();

    String content = Files.readString(logFile);
-   assertTrue(content.contains("Initial Line") || content.contains("New Line"));
-   // NOTE: This test doesn't guarantee truncation unless we reimplement logic
-   // to forcibly truncate when append=false
+   assertTrue(content.contains("New Line"));
+   assertFalse(content.contains("Initial Line"), "File should be overwritten, not appended to");
}
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
@Test
void testAppendModeFalseCreatesNewFile() throws IOException
{
    PrintWriter writer1 = LogFileWriterManager.getWriter(logFile.toString(), false);
    writer1.println("Initial Line"); //$NON-NLS-1$
    writer1.flush();
    // Manually close and remove
    LogFileWriterManager.closeWriter(logFile.toString());
    PrintWriter writer2 = LogFileWriterManager.getWriter(logFile.toString(), false);
    writer2.println("New Line"); //$NON-NLS-1$
    writer2.flush();
    String content = Files.readString(logFile);
    assertTrue(content.contains("Initial Line") || content.contains("New Line")); //$NON-NLS-1$ //$NON-NLS-2$
    // NOTE: This test doesn't guarantee truncation unless we reimplement logic
    // to forcibly truncate when append=false
}
@Test
void testAppendModeFalseOverwritesFile() throws IOException
{
    // First, write some content with append=true
    PrintWriter writer1 = LogFileWriterManager.getWriter(logFile.toString(), true);
    writer1.println("Initial Line");
    writer1.flush();
    LogFileWriterManager.closeWriter(logFile.toString());
    // Now create a writer with append=false - should overwrite
    PrintWriter writer2 = LogFileWriterManager.getWriter(logFile.toString(), false);
    writer2.println("New Line");
    writer2.flush();
    String content = Files.readString(logFile);
    assertTrue(content.contains("New Line"));
    assertFalse(content.contains("Initial Line"), "File should be overwritten, not appended to");
}
🤖 Prompt for AI Agents
In
tests/com.espressif.idf.core.test/src/com/espressif/idf/core/util/test/LogFileWriterManagerTest.java
around lines 72 to 90, the testAppendModeFalseCreatesNewFile test does not
properly verify that append=false overwrites the file because it reuses the
cached writer instance. To fix this, modify the test to ensure the file is
truncated by either clearing the cache or deleting the file before creating the
second writer with append=false, then assert that the file content only contains
the new line and not the initial line, confirming overwrite behavior.
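Independent of the caching question, plain `java.io.FileWriter` does truncate when opened with `append=false`, so the overwrite semantics themselves are sound once a fresh writer is actually created. A small self-contained check of that behavior (no plugin classes involved):

```java
import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;
import java.nio.file.Files;
import java.nio.file.Path;

public class TruncateDemo {
    public static void main(String[] args) throws IOException {
        Path p = Files.createTempFile("trunc", ".log");
        try (PrintWriter w = new PrintWriter(new FileWriter(p.toFile(), true))) {
            w.println("Initial Line"); // appended
        }
        try (PrintWriter w = new PrintWriter(new FileWriter(p.toFile(), false))) {
            w.println("New Line"); // append=false truncates the file first
        }
        String content = Files.readString(p);
        System.out.println(content.contains("New Line"));     // prints true
        System.out.println(content.contains("Initial Line")); // prints false
        Files.delete(p);
    }
}
```

This is why fixing the cache key is sufficient: once each (path, append) pair gets its own writer, the underlying `FileWriter` already provides the truncation the test wants to assert.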

@AndriiFilippov
Copy link
Collaborator

@sigmaaa hi !

Tested under:
OS - Windows 11 / Mac arm64
ESP-IDF: v5.4.2
Board: H2

able to delete .log file ✅
logs have been written to log file with Append option disabled ✅

LGTM 👍

@AndriiFilippov
Copy link
Collaborator

@kolipakakondal @alirana01 hi!
Please, review

Collaborator Author

@kolipakakondal kolipakakondal left a comment


Thanks @sigmaaa for improving this PR and addressing the PR comments. LGTM

@kolipakakondal kolipakakondal added this to the v3.5.1 milestone Jul 24, 2025
@kolipakakondal kolipakakondal merged commit ae42e7d into master Jul 24, 2025
3 of 6 checks passed
@kolipakakondal kolipakakondal deleted the IEP-1536 branch July 24, 2025 08:02

4 participants