fix: write openocd debug output to a file #1274
Conversation
Walkthrough: The changes introduce the ability to optionally capture process output to a file, with support for appending or overwriting. This involves updating constructors and logic in `IdfRuntimeProcess`, `StreamsProxy`, and `StreamListener`.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant IdfRuntimeProcess
    participant StreamsProxy
    participant StreamListener
    participant FileSystem
    User->>IdfRuntimeProcess: Launch process with config
    IdfRuntimeProcess->>IdfRuntimeProcess: Retrieve outputFile and append attributes
    IdfRuntimeProcess->>StreamsProxy: Create StreamsProxy(process, charset, label, outputFile, append)
    StreamsProxy->>StreamListener: Create StreamListener(..., outputFile, append)
    StreamListener->>FileSystem: Resolve and open outputFile (if provided)
    StreamListener->>StreamListener: Process stream, write output to file (if enabled)
```
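The flow above can be sketched in plain Java. The class and method names here are illustrative stand-ins, not the plugin's actual API: a listener tees each chunk of process output to an optional capture file while still forwarding it to the console.

```java
import java.io.BufferedWriter;
import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;

// Hypothetical sketch of the capture flow: tee stream text to an optional
// file, honoring the append flag from the launch configuration.
class CapturingListener implements AutoCloseable {
    private PrintWriter fileWriter; // null when capture is disabled or the open failed

    CapturingListener(String outputFile, boolean append) {
        if (outputFile != null && !outputFile.isEmpty()) {
            try {
                // auto-flush so output survives abrupt process termination
                fileWriter = new PrintWriter(new BufferedWriter(new FileWriter(outputFile, append)), true);
            } catch (IOException e) {
                System.err.println("Could not open capture file: " + e.getMessage());
            }
        }
    }

    void streamAppended(String text) {
        System.out.print(text); // console output continues as before
        if (fileWriter != null) {
            fileWriter.println(text); // mirror to the capture file
        }
    }

    @Override
    public void close() {
        if (fileWriter != null) {
            fileWriter.close();
        }
    }
}
```

When no output file is configured, the listener degrades to plain console forwarding, which matches the optional nature of the feature.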
Estimated code review effort: 2 (~25 minutes)
Actionable comments posted: 2
🧹 Nitpick comments (1)
bundles/com.espressif.idf.debug.gdbjtag.openocd/src/com/espressif/idf/debug/gdbjtag/openocd/dsf/process/StreamListener.java (1)
225-227: Consider making the default filename configurable. The hardcoded "openocd.log" filename is reasonable but could be made configurable through a constant or system property for better flexibility.
```diff
+private static final String DEFAULT_OUTPUT_FILENAME = "openocd.log";
+
 private static String resolveOutputFilePath(String outputFile) {
 	// ... existing code ...
 	if (file.isDirectory() || (!file.exists() && expanded.endsWith(File.separator))) {
 		// If it's a directory or ends with a separator, append openocd.log
-		file = new File(file, "openocd.log");
+		file = new File(file, DEFAULT_OUTPUT_FILENAME);
 	}
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- bundles/com.espressif.idf.debug.gdbjtag.openocd/src/com/espressif/idf/debug/gdbjtag/openocd/dsf/process/IdfRuntimeProcess.java (2 hunks)
- bundles/com.espressif.idf.debug.gdbjtag.openocd/src/com/espressif/idf/debug/gdbjtag/openocd/dsf/process/StreamListener.java (4 hunks)
- bundles/com.espressif.idf.debug.gdbjtag.openocd/src/com/espressif/idf/debug/gdbjtag/openocd/dsf/process/monitors/StreamsProxy.java (1 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (1)
bundles/com.espressif.idf.debug.gdbjtag.openocd/src/com/espressif/idf/debug/gdbjtag/openocd/dsf/process/StreamListener.java (1)
bundles/com.espressif.idf.core/src/com/espressif/idf/core/logging/Logger.java (1)
Logger(21-137)
🔇 Additional comments (15)
bundles/com.espressif.idf.debug.gdbjtag.openocd/src/com/espressif/idf/debug/gdbjtag/openocd/dsf/process/monitors/StreamsProxy.java (3)
65-65: LGTM: Constructor signature properly updated. The constructor signature correctly adds the new `outputFile` and `append` parameters to support file output functionality.

72-72: LGTM: Parameters properly passed to StreamListener. The new parameters are correctly passed to the StreamListener constructor, maintaining the proper flow of configuration data.

65-72: LGTM: Clean parameter delegation. The constructor signature update properly delegates the new `outputFile` and `append` parameters to the `StreamListener` constructor. The implementation is clean and maintains the existing functionality while extending it.

bundles/com.espressif.idf.debug.gdbjtag.openocd/src/com/espressif/idf/debug/gdbjtag/openocd/dsf/process/IdfRuntimeProcess.java (5)

64-68: LGTM: Safe attribute retrieval with proper defaults. The implementation correctly uses the `getAttributeSafe` helper method to retrieve configuration attributes with appropriate default values (`""` for the file path, `false` for append mode).

73-84: LGTM: Well-designed generic error handling. The `getAttributeSafe` method with the functional interface provides a clean, reusable pattern for safely retrieving launch configuration attributes with proper exception handling and logging.
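The pattern praised here can be sketched as a minimal, standalone helper. The names `AttributeUtil` and `AttributeGetter` follow the review's description, but this is an illustration only; the real method reads from an Eclipse `ILaunchConfiguration` and logs via `Logger.log`.

```java
// Sketch of the getAttributeSafe pattern: a functional interface whose getter
// may throw, wrapped so failures fall back to a caller-supplied default.
final class AttributeUtil {
    @FunctionalInterface
    interface AttributeGetter<T> {
        T get() throws Exception;
    }

    static <T> T getAttributeSafe(AttributeGetter<T> getter, T defaultValue) {
        try {
            return getter.get();
        } catch (Exception e) {
            // In the plugin this exception would be logged (Logger.log(e)).
            return defaultValue;
        }
    }
}
```

The generic parameter lets one helper serve both the `String` file-path attribute and the `boolean` append attribute.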
69-69: LGTM: Constructor call properly updated. The StreamsProxy constructor is correctly updated to include the new `outputFileName` and `append` parameters, maintaining consistency with the updated class signature.

64-68: LGTM: Safe attribute retrieval with proper defaults. The implementation correctly uses Eclipse Common tab attributes (`IDebugUIConstants.ATTR_CAPTURE_IN_FILE` and `IDebugUIConstants.ATTR_APPEND_TO_FILE`) with appropriate default values (empty string for file, false for append). This integrates well with the Eclipse debugging framework.

73-90: LGTM: Excellent generic helper method design. The `getAttributeSafe` method with the `AttributeGetter` functional interface is a clean, reusable pattern for safely retrieving launch configuration attributes. The generic type parameter and proper exception handling with logging make this a robust solution.

bundles/com.espressif.idf.debug.gdbjtag.openocd/src/com/espressif/idf/debug/gdbjtag/openocd/dsf/process/StreamListener.java (7)
59-73: LGTM: Constructor properly handles file initialization. The constructor correctly initializes the file writer with proper error handling. The approach of resolving the path first and then creating the writer with append mode is sound.

131-133: LGTM: File writing integrated correctly. The file writing logic is properly integrated into the stream processing with appropriate null checks. Using `println` ensures proper line termination.

208-212: LGTM: Proper resource cleanup. The dispose method correctly closes the file writer and sets it to null, ensuring proper resource management and preventing resource leaks.

225-228: Default log filename "openocd.log" is safe to use. No other occurrences of "openocd.log" were found in the codebase, so there's no risk of unexpected overrides or conflicts. This default choice aligns with common OpenOCD logging conventions and can be approved as-is.

59-73: LGTM: Proper file initialization with error handling. The constructor correctly initializes the file writer with proper error handling. The file path resolution, writer creation with append mode, and exception logging are all handled appropriately.

131-133: LGTM: Clean file writing integration. The file writing logic is properly integrated into the existing stream processing flow. The null checks ensure safe operation and the `PrintWriter` with auto-flush provides reliable output.

208-211: LGTM: Proper resource cleanup. The dispose method correctly closes and nullifies the file writer, ensuring proper resource cleanup and preventing resource leaks.
```java
private static String resolveOutputFilePath(String outputFile) {
	if (outputFile == null || outputFile.isEmpty()) {
		return null;
	}
	try {
		// Expand Eclipse variables (e.g., ${workspace_loc:...})
		String expanded = VariablesPlugin.getDefault().getStringVariableManager().performStringSubstitution(outputFile);
		File file = new File(expanded);
		if (file.isDirectory() || (!file.exists() && expanded.endsWith(File.separator))) {
			// If it's a directory or ends with a separator, append openocd.log
			file = new File(file, "openocd.log");
		}
		File parent = file.getParentFile();
		if (parent != null && !parent.exists()) {
			parent.mkdirs();
		}
		if (!file.exists()) {
			file.createNewFile();
		}
		return file.getAbsolutePath();
	} catch (Exception e) {
		Logger.log(e);
		return null;
	}
}
```
🛠️ Refactor suggestion
Review file path resolution for security and robustness.
The path resolution logic handles variable expansion and directory creation, but there are some security and robustness concerns:
- Directory traversal vulnerability: the method doesn't validate against directory traversal attacks (e.g., `../../../etc/passwd`).
- Broad exception handling: catching all `Exception` types may hide specific issues.
- File creation without validation: no checks for file permissions or disk space.
Consider adding security validation:
```diff
 private static String resolveOutputFilePath(String outputFile) {
     if (outputFile == null || outputFile.isEmpty()) {
         return null;
     }
     try {
         // Expand Eclipse variables (e.g., ${workspace_loc:...})
         String expanded = VariablesPlugin.getDefault().getStringVariableManager().performStringSubstitution(outputFile);
+
+        // Validate path to prevent directory traversal
+        File canonicalFile = new File(expanded).getCanonicalFile();
+        String canonicalPath = canonicalFile.getCanonicalPath();
+
         File file = new File(expanded);
         if (file.isDirectory() || (!file.exists() && expanded.endsWith(File.separator))) {
             // If it's a directory or ends with a separator, append openocd.log
             file = new File(file, "openocd.log");
         }
+
+        // Validate the final path is within expected bounds
+        File finalCanonical = file.getCanonicalFile();
+
         File parent = file.getParentFile();
         if (parent != null && !parent.exists()) {
             parent.mkdirs();
         }
         if (!file.exists()) {
             file.createNewFile();
         }
         return file.getAbsolutePath();
-    } catch (Exception e) {
+    } catch (IOException | SecurityException e) {
         Logger.log(e);
         return null;
     }
 }
```
🤖 Prompt for AI Agents
In
bundles/com.espressif.idf.debug.gdbjtag.openocd/src/com/espressif/idf/debug/gdbjtag/openocd/dsf/process/StreamListener.java
between lines 217 and 241, enhance the resolveOutputFilePath method by adding
validation to prevent directory traversal attacks, such as normalizing the path
and ensuring it stays within an allowed base directory. Replace the broad
Exception catch with more specific exceptions to handle expected errors
explicitly. Before creating the file, add checks for write permissions and
available disk space to avoid failures or security issues.
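The containment check the review asks for can be sketched as a standalone helper. `PathGuard` is hypothetical; the real code would choose an appropriate base directory, e.g. the workspace root, as the bound.

```java
import java.io.File;
import java.io.IOException;

// Sketch of a traversal guard: canonicalize both paths and require the
// candidate to live strictly under the allowed base directory.
final class PathGuard {
    static boolean isWithin(File base, File candidate) {
        try {
            String basePath = base.getCanonicalPath() + File.separator;
            return candidate.getCanonicalPath().startsWith(basePath);
        } catch (IOException e) {
            return false; // unresolvable paths are treated as unsafe
        }
    }
}
```

Canonicalization resolves `..` segments and symlinks, so a prefix check on the canonical strings rejects paths that escape the base.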
💡 Verification agent
🧩 Analysis chain
Consider thread safety for concurrent file operations.
The resolveOutputFilePath method handles variable expansion, directory creation, and file creation well. However, consider potential race conditions if multiple processes attempt to create the same directory or file simultaneously.
Additionally, the method could benefit from more specific exception handling:
```diff
-    } catch (Exception e) {
+    } catch (IOException e) {
+        Logger.log(e);
+        return null;
+    } catch (CoreException e) {
+        Logger.log(e);
+        return null;
+    } catch (Exception e) {
         Logger.log(e);
         return null;
     }
```

🏁 Script executed:
```shell
#!/bin/bash
# Description: Check if there are multiple instances of StreamListener that could create race conditions
# Expected: Find usage patterns to assess concurrent access risks
echo "=== Searching for StreamListener instantiation patterns ==="
rg -A 10 -B 5 "new StreamListener"
echo "=== Checking for concurrent process creation ==="
rg -A 5 -B 5 "StreamsProxy.*new.*StreamListener"
echo "=== Verifying if launch configurations can be shared ==="
rg -A 10 "ILaunch.*getAttribute.*CAPTURE_IN_FILE"
```

Length of output: 3363
Refine exception handling and review concurrency in resolveOutputFilePath
- In `bundles/com.espressif.idf.debug.gdbjtag.openocd/src/com/espressif/idf/debug/gdbjtag/openocd/dsf/process/StreamListener.java` (lines 217–241), replace the broad catch block with more specific handlers:

```java
private static String resolveOutputFilePath(String outputFile)
{
	…
	try
	{
		…
	}
	catch (IOException e)
	{
		Logger.log(e);
		return null;
	}
	catch (CoreException e)
	{
		Logger.log(e);
		return null;
	}
	catch (Exception e)
	{
		Logger.log(e);
		return null;
	}
}
```

- Although OS-level `mkdirs()` and `createNewFile()` are generally safe to call concurrently, if `resolveOutputFilePath` can be invoked in parallel (e.g., by multiple `StreamListener` instances in `StreamsProxy.java`), review whether you need stronger guarantees:
  - Use `Files.createDirectories(parent.toPath())` and `Files.createFile(path)` for atomic directory/file creation.
  - Synchronize access to this method or employ a file lock when targeting the same output path across threads.
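The NIO-based alternative suggested above can be sketched as follows. `SafeCreate` is an illustrative helper, not part of the plugin; the point is that `Files.createDirectories` is idempotent and `Files.createFile` fails atomically when another racer wins.

```java
import java.io.IOException;
import java.nio.file.FileAlreadyExistsException;
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch: atomic directory and file creation that tolerates concurrent
// callers racing on the same path.
final class SafeCreate {
    static Path ensureFile(Path path) {
        try {
            Path parent = path.getParent();
            if (parent != null) {
                Files.createDirectories(parent); // no-op if it already exists
            }
            try {
                Files.createFile(path); // atomic: exactly one racer succeeds
            } catch (FileAlreadyExistsException ok) {
                // created by a concurrent caller; nothing to do
            }
        } catch (IOException e) {
            System.err.println("ensureFile failed: " + e.getMessage());
        }
        return path.toAbsolutePath();
    }
}
```

Unlike the `exists()`-then-`createNewFile()` pattern, there is no check-then-act window between the existence test and the creation.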
🤖 Prompt for AI Agents
In
bundles/com.espressif.idf.debug.gdbjtag.openocd/src/com/espressif/idf/debug/gdbjtag/openocd/dsf/process/StreamListener.java
around lines 217 to 241, replace the broad Exception catch with specific
exception types to improve error handling clarity. Change directory and file
creation calls to use Files.createDirectories(parent.toPath()) and
Files.createFile(path) for atomic operations. Additionally, if this method can
be called concurrently for the same output path, add synchronization or file
locking to prevent race conditions during directory and file creation.
sigmaaa left a comment
Hi @kolipakakondal, thanks for the PR; LGTM overall! I've added a few suggestions on how we can make the StreamListener changes a bit more optimized by avoiding some conditional checks and using AutoCloseable to improve maintainability.
....gdbjtag.openocd/src/com/espressif/idf/debug/gdbjtag/openocd/dsf/process/StreamListener.java
Pushed a commit with minor improvements to the StreamListener class: made the fileWriter AutoCloseable, avoided conditional checks by using nullWriter, and applied formatting.
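The nullWriter refactor described in this commit can be illustrated with a minimal sketch. `NullSafeListener` is a hypothetical stand-in for the real StreamListener: when capture is disabled, a no-op writer is substituted so callers never need a null check.

```java
import java.io.PrintWriter;
import java.io.Writer;

// Sketch: substitute Writer.nullWriter() when no capture file is configured,
// so every write path is unconditional.
class NullSafeListener {
    private final PrintWriter fileWriter;

    NullSafeListener(PrintWriter fileWriter) {
        this.fileWriter = (fileWriter != null) ? fileWriter : new PrintWriter(Writer.nullWriter());
    }

    void onText(String text) {
        fileWriter.println(text); // safe whether or not capture is enabled
    }
}
```

This is the Null Object pattern: it trades a per-write branch for one substitution at construction time.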
Tested under Windows 11: everything works fine; I only have a few follow-up questions:
the logs for the case when I run and stop the debug process and then try to delete the log file:
but in case of
Mac arm64 / Linux Ubuntu 24.04
but with
The documentation is clear enough. ✅
The first issue is fixed.
I'm currently working on this issue. After taking a second look at the code, I noticed it's not optimized: we're recreating the FileWriter on each stream update.
Hi @kolipakakondal, @alirana01, @AndriiFilippov, |
Actionable comments posted: 5
🧹 Nitpick comments (8)
bundles/com.espressif.idf.core/src/com/espressif/idf/core/logging/LogFileWriterManager.java (6)
19-19: Consider using ConcurrentHashMap with bounded size. The static map could potentially grow indefinitely if many different file paths are used without proper cleanup, leading to memory leaks. Consider implementing a bounded cache or adding monitoring for map size:

```diff
 private static final Map<String, PrintWriter> writers = new ConcurrentHashMap<>();
+// Consider adding: private static final int MAX_WRITERS = 100;
```

41-44: Remove redundant file existence check. The `file.exists()` check is unnecessary since `createNewFile()` is atomic and returns `false` if the file already exists.

```diff
-if (!file.exists())
-{
-	file.createNewFile();
-}
+file.createNewFile(); // Returns false if file already exists, which is fine
```

47-51: Consider adding debug logging for fallback to null writer. While the exception handling is correct, it might be helpful to log when falling back to the null writer for debugging purposes.

```diff
 catch (IOException e)
 {
 	Logger.log(e);
+	// Consider adding debug log: "Falling back to null writer for path: " + p
 	return new PrintWriter(Writer.nullWriter());
 }
```

62-62: Consider making the session end message localizable. The session end message is hardcoded in English. Consider using a localizable message (e.g., `Messages.getString()` or a similar localization mechanism) for better internationalization support:

```diff
-writer.println("=== Session ended at " + LocalDateTime.now() + " ==="); //$NON-NLS-1$ //$NON-NLS-2$
+// Consider using Messages.getString() or similar localization mechanism
+writer.println("=== Session ended at " + LocalDateTime.now() + " ==="); //$NON-NLS-1$ //$NON-NLS-2$
```

67-74: Consider consistency with session end timestamps. The `closeAll()` method doesn't add session end timestamps like `closeWriter()` does. This inconsistency might be intentional for cleanup scenarios, but should be documented or made consistent.

67-75: Potential resource leak in closeAll method. While the method closes all writers, it doesn't write session end messages like `closeWriter` does. This inconsistency could make logs harder to analyze. Consider adding session end messages for consistency:

```diff
 public static void closeAll()
 {
 	for (PrintWriter writer : writers.values())
 	{
+		writer.println("=== Session ended at " + LocalDateTime.now() + " ===");
 		writer.close();
 	}
 	writers.clear();
 }
```

tests/com.espressif.idf.core.test/src/com/espressif/idf/core/util/test/LogFileWriterManagerTest.java (2)
152-155: Good use of synchronization, but consider more comprehensive verification. The synchronized block correctly ensures thread-safe writes to the shared PrintWriter instance. However, the test could be more thorough in verifying concurrent behavior. Consider adding verification for the total number of lines written:

```diff
 String content = Files.readString(logFile);
 assertTrue(content.contains("Line 0")); //$NON-NLS-1$
 assertTrue(content.contains("Line 49")); //$NON-NLS-1$
+// Verify total lines written (5 tasks × 50 lines each = 250 lines)
+long lineCount = content.lines().count();
+assertTrue(lineCount >= 250, "Expected at least 250 lines, got " + lineCount);
```

142-177: Consider adding tests for additional edge cases. The current test coverage is good, but consider adding tests for:

- Invalid file paths (e.g., paths with invalid characters)
- Directory creation scenarios (verify parent directories are created)
- Session end timestamp verification (check that closeWriter adds the timestamp)

Example test for directory creation:

```java
@Test
void testDirectoryCreation() throws IOException
{
	Path nestedPath = tempDir.resolve("nested/dir/test.log");
	PrintWriter writer = LogFileWriterManager.getWriter(nestedPath.toString(), false);
	writer.println("Test content");
	writer.flush();
	assertTrue(Files.exists(nestedPath));
	assertTrue(Files.exists(nestedPath.getParent()));
}
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
- bundles/com.espressif.idf.core/src/com/espressif/idf/core/logging/LogFileWriterManager.java (1 hunks)
- bundles/com.espressif.idf.debug.gdbjtag.openocd/src/com/espressif/idf/debug/gdbjtag/openocd/dsf/process/IdfRuntimeProcess.java (2 hunks)
- bundles/com.espressif.idf.debug.gdbjtag.openocd/src/com/espressif/idf/debug/gdbjtag/openocd/dsf/process/StreamListener.java (8 hunks)
- bundles/com.espressif.idf.debug.gdbjtag.openocd/src/com/espressif/idf/debug/gdbjtag/openocd/dsf/process/monitors/StreamsProxy.java (3 hunks)
- tests/com.espressif.idf.core.test/src/com/espressif/idf/core/util/test/LogFileWriterManagerTest.java (1 hunks)
🧬 Code Graph Analysis (1)
bundles/com.espressif.idf.core/src/com/espressif/idf/core/logging/LogFileWriterManager.java (1)
bundles/com.espressif.idf.core/src/com/espressif/idf/core/logging/Logger.java (1)
Logger(21-137)
🚧 Files skipped from review as they are similar to previous changes (3)
- bundles/com.espressif.idf.debug.gdbjtag.openocd/src/com/espressif/idf/debug/gdbjtag/openocd/dsf/process/monitors/StreamsProxy.java
- bundles/com.espressif.idf.debug.gdbjtag.openocd/src/com/espressif/idf/debug/gdbjtag/openocd/dsf/process/IdfRuntimeProcess.java
- bundles/com.espressif.idf.debug.gdbjtag.openocd/src/com/espressif/idf/debug/gdbjtag/openocd/dsf/process/StreamListener.java
🧰 Additional context used
🧬 Code Graph Analysis (1)
bundles/com.espressif.idf.core/src/com/espressif/idf/core/logging/LogFileWriterManager.java (1)
bundles/com.espressif.idf.core/src/com/espressif/idf/core/logging/Logger.java (1)
Logger(21-137)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: build_macos
- GitHub Check: build
🔇 Additional comments (7)
tests/com.espressif.idf.core.test/src/com/espressif/idf/core/util/test/LogFileWriterManagerTest.java (5)
25-43: Well-structured test class with proper lifecycle management. The test class follows good practices with temporary directory management, proper setup/teardown, and comprehensive test coverage.

25-43: Well-structured test class with proper setup. Good use of JUnit 5 features like `@TempDir` for test isolation and proper cleanup in the `tearDown()` method to prevent test interference.

106-112: Test exposes the append parameter caching issue. This test verifies that the same writer instance is returned for the same path, but it doesn't consider that both calls use `append=true`. This test would fail to catch the issue I identified in the main class where different append values for the same path return the same writer. Add a test to verify append parameter behavior:

```java
@Test
void testDifferentAppendModesForSamePath()
{
	PrintWriter writer1 = LogFileWriterManager.getWriter(logFile.toString(), true);
	PrintWriter writer2 = LogFileWriterManager.getWriter(logFile.toString(), false);
	// These should be different instances since append modes differ
	assertNotSame(writer1, writer2, "Writers with different append modes should be different instances");
}
```

142-178: Excellent thread safety test with proper synchronization. The concurrent access test is well-designed and properly uses synchronization on the writer object to prevent race conditions during writing. The test validates that concurrent access works correctly.

93-104: Good coverage of null and empty path handling. The test properly verifies that null and empty paths return non-null writers that don't throw exceptions when used, which matches the expected nullWriter behavior.

bundles/com.espressif.idf.core/src/com/espressif/idf/core/logging/LogFileWriterManager.java (2)

17-24: Well-designed utility class with proper encapsulation. The class follows the utility pattern correctly with a private constructor to prevent instantiation and uses a thread-safe ConcurrentHashMap for managing writers.

55-65: Excellent session logging in closeWriter method. The addition of a session end timestamp when closing writers is a thoughtful touch that will help with debugging and log analysis.
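The session-end stamping described here can be sketched independently of the plugin. `SessionLog` is an illustrative stand-in for LogFileWriterManager: remove the cached writer, stamp a session-end marker, then close it.

```java
import java.io.PrintWriter;
import java.time.LocalDateTime;
import java.util.Map;

// Sketch of closeWriter: the end-of-session marker makes log boundaries
// visible when sessions are appended to the same file.
final class SessionLog {
    static void closeWriter(Map<String, PrintWriter> writers, String path) {
        if (path == null || path.isEmpty()) {
            return;
        }
        PrintWriter writer = writers.remove(path);
        if (writer != null) {
            writer.println("=== Session ended at " + LocalDateTime.now() + " ===");
            writer.close();
        }
    }
}
```

Removing the entry before closing ensures no other caller can obtain an already-closed writer from the cache.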
```java
public static PrintWriter getWriter(String path, boolean append)
{
	if (path == null || path.isEmpty())
	{
		return new PrintWriter(Writer.nullWriter());
	}

	return writers.computeIfAbsent(path, p -> {
		try
		{
			File file = new File(p);
			File parent = file.getParentFile();
			if (parent != null && !parent.exists())
			{
				parent.mkdirs();
			}
			if (!file.exists())
			{
				file.createNewFile();
			}
			return new PrintWriter(new BufferedWriter(new FileWriter(file, append)), true);
		}
		catch (IOException e)
		{
			Logger.log(e);
			return new PrintWriter(Writer.nullWriter());
		}
	});
}
```
Fix append mode handling for existing writers.
The current implementation ignores the append parameter for existing writers. If a writer already exists for a path, the append mode from the first call is used for all subsequent calls, which could lead to unexpected behavior.
Consider this scenario:

- First call: `getWriter("file.log", false)` creates the writer in overwrite mode
- Second call: `getWriter("file.log", true)` returns the same writer, still in overwrite mode

Apply this refactor to include append mode in the key:

```diff
-return writers.computeIfAbsent(path, p -> {
+String key = path + ":" + append;
+return writers.computeIfAbsent(key, k -> {
```

Also update the closeWriter method accordingly:

```diff
-public static void closeWriter(String path)
+public static void closeWriter(String path, boolean append)
 {
-	if (path == null || path.isEmpty())
-		return;
-	PrintWriter writer = writers.remove(path);
+	if (path == null || path.isEmpty())
+		return;
+	String key = path + ":" + append;
+	PrintWriter writer = writers.remove(key);
```
+ PrintWriter writer = writers.remove(key);Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In
bundles/com.espressif.idf.core/src/com/espressif/idf/core/logging/LogFileWriterManager.java
between lines 25 and 53, the getWriter method currently uses only the file path
as the key in the writers map, ignoring the append parameter for existing
writers. To fix this, modify the key to combine both the path and append mode
(e.g., a composite key or a string concatenation) so that writers with different
append modes are stored separately. Also update the closeWriter method to accept
and use this new key format to correctly close the intended writer.
Critical issue: Append parameter is ignored for existing writers.
The computeIfAbsent method only creates a new writer if the path doesn't exist in the map. However, the append parameter is only considered during writer creation, not when retrieving an existing writer. This means:

- If a writer is requested with `append=false`, it's created and cached
- Subsequent requests with `append=true` for the same path will return the same writer (which was created with `append=false`)

This could lead to unexpected behavior where the append mode doesn't match the caller's intention.
Consider this approach to ensure append behavior is consistent:
-public static PrintWriter getWriter(String path, boolean append)
+public static PrintWriter getWriter(String path, boolean append)
{
if (path == null || path.isEmpty())
{
return new PrintWriter(Writer.nullWriter());
}
- return writers.computeIfAbsent(path, p -> {
+ // Create a composite key that includes append mode
+ String key = path + "|append=" + append;
+ return writers.computeIfAbsent(key, k -> {
try
{
- File file = new File(p);
+ File file = new File(path);
File parent = file.getParentFile();
if (parent != null && !parent.exists())
{
parent.mkdirs();
}
if (!file.exists())
{
file.createNewFile();
}
return new PrintWriter(new BufferedWriter(new FileWriter(file, append)), true);
}
catch (IOException e)
{
Logger.log(e);
return new PrintWriter(Writer.nullWriter());
}
});
});

And update the closeWriter method accordingly:
public static void closeWriter(String path)
{
if (path == null || path.isEmpty())
return;
- PrintWriter writer = writers.remove(path);
+ // Remove both append modes for the path
+ PrintWriter writer1 = writers.remove(path + "|append=true");
+ PrintWriter writer2 = writers.remove(path + "|append=false");
+ PrintWriter writer = writer1 != null ? writer1 : writer2;
if (writer != null)
{
writer.println("=== Session ended at " + LocalDateTime.now() + " ===");
writer.close();
}
}

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In
bundles/com.espressif.idf.core/src/com/espressif/idf/core/logging/LogFileWriterManager.java
between lines 25 and 53, the getWriter method ignores the append parameter for
existing writers because it caches writers by path only. To fix this, modify the
caching key to include both path and append mode (e.g., use a composite key or a
map keyed by path and append flag) so that writers with different append modes
are stored separately. Update the closeWriter method to remove the writer from
the cache using the same composite key to ensure proper cleanup.
File parent = file.getParentFile();
if (parent != null && !parent.exists())
{
	parent.mkdirs();
🛠️ Refactor suggestion
Potential security issue: Directory creation without validation.
The mkdirs() call could create directories in unintended locations if the path is malicious or contains path traversal sequences (e.g., ../../../etc/passwd).
Add path validation before creating directories:
if (parent != null && !parent.exists())
{
+ // Validate parent path to prevent directory traversal attacks
+ String canonicalParent = parent.getCanonicalPath();
+ String expectedParent = parent.getAbsolutePath();
+ if (!canonicalParent.equals(expectedParent))
+ {
+ throw new IOException("Invalid path detected: " + path);
+ }
parent.mkdirs();
}

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In
bundles/com.espressif.idf.core/src/com/espressif/idf/core/logging/LogFileWriterManager.java
at line 39, the call to parent.mkdirs() creates directories without validating
the path, which can lead to security risks like path traversal. Before calling
mkdirs(), add validation to ensure the directory path is within an allowed base
directory and does not contain any traversal sequences or invalid characters.
Reject or sanitize paths that do not meet these criteria to prevent creating
directories in unintended locations.
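As a rough illustration of the kind of guard meant here, a containment check against a base directory can be sketched as follows (the `PathValidationDemo` class and `isWithinBase` helper are hypothetical, not part of the plugin):

```java
import java.io.File;
import java.io.IOException;

public class PathValidationDemo {
    // Hypothetical guard: only accept files whose canonical location
    // resolves to somewhere inside the given base directory.
    static boolean isWithinBase(File base, File candidate) {
        try {
            String basePath = base.getCanonicalPath() + File.separator;
            // getCanonicalPath() resolves ".." segments and symlinks,
            // so traversal sequences cannot escape the prefix check
            return candidate.getCanonicalPath().startsWith(basePath);
        } catch (IOException e) {
            return false; // unresolvable paths are rejected
        }
    }

    public static void main(String[] args) {
        File base = new File("logs");
        System.out.println(isWithinBase(base, new File("logs/openocd.log")));
        System.out.println(isWithinBase(base, new File("logs/../etc/passwd")));
    }
}
```

Whether the plugin should enforce a base directory at all (as opposed to trusting the launch-configuration path) is a design decision; the sketch only shows the canonicalization technique.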
@Test
void testAppendModeFalseCreatesNewFile() throws IOException
{
	PrintWriter writer1 = LogFileWriterManager.getWriter(logFile.toString(), false);
	writer1.println("Initial Line"); //$NON-NLS-1$
	writer1.flush();

	// Manually close and remove
	LogFileWriterManager.closeWriter(logFile.toString());

	PrintWriter writer2 = LogFileWriterManager.getWriter(logFile.toString(), false);
	writer2.println("New Line"); //$NON-NLS-1$
	writer2.flush();

	String content = Files.readString(logFile);
	assertTrue(content.contains("Initial Line") || content.contains("New Line")); //$NON-NLS-1$ //$NON-NLS-2$
	// NOTE: This test doesn't guarantee truncation unless we reimplement logic
	// to forcibly truncate when append=false
}
Test reveals the append mode issue in LogFileWriterManager.
This test acknowledges that truncation behavior isn't guaranteed when append=false, which confirms the issue I identified in the main class where append mode is only applied during writer creation.
The weak assertion on line 87 masks the actual problem:
-assertTrue(content.contains("Initial Line") || content.contains("New Line")); //$NON-NLS-1$ //$NON-NLS-2$
-// NOTE: This test doesn't guarantee truncation unless we reimplement logic
-// to forcibly truncate when append=false
+// This test should verify proper truncation behavior
+assertTrue(content.contains("New Line")); //$NON-NLS-1$
+assertFalse(content.contains("Initial Line")); //$NON-NLS-1$

However, this test will fail until the append mode handling is fixed in LogFileWriterManager.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Current:

@Test
void testAppendModeFalseCreatesNewFile() throws IOException
{
	PrintWriter writer1 = LogFileWriterManager.getWriter(logFile.toString(), false);
	writer1.println("Initial Line"); //$NON-NLS-1$
	writer1.flush();
	// Manually close and remove
	LogFileWriterManager.closeWriter(logFile.toString());
	PrintWriter writer2 = LogFileWriterManager.getWriter(logFile.toString(), false);
	writer2.println("New Line"); //$NON-NLS-1$
	writer2.flush();
	String content = Files.readString(logFile);
	assertTrue(content.contains("Initial Line") || content.contains("New Line")); //$NON-NLS-1$ //$NON-NLS-2$
	// NOTE: This test doesn't guarantee truncation unless we reimplement logic
	// to forcibly truncate when append=false
}

Suggested:

@Test
void testAppendModeFalseCreatesNewFile() throws IOException
{
	PrintWriter writer1 = LogFileWriterManager.getWriter(logFile.toString(), false);
	writer1.println("Initial Line"); //$NON-NLS-1$
	writer1.flush();
	// Manually close and remove
	LogFileWriterManager.closeWriter(logFile.toString());
	PrintWriter writer2 = LogFileWriterManager.getWriter(logFile.toString(), false);
	writer2.println("New Line"); //$NON-NLS-1$
	writer2.flush();
	String content = Files.readString(logFile);
	// This test should verify proper truncation behavior
	assertTrue(content.contains("New Line")); //$NON-NLS-1$
	assertFalse(content.contains("Initial Line")); //$NON-NLS-1$
}
🤖 Prompt for AI Agents
In
tests/com.espressif.idf.core.test/src/com/espressif/idf/core/util/test/LogFileWriterManagerTest.java
around lines 72 to 90, the test does not guarantee that the file is truncated
when append mode is false, masking the issue in LogFileWriterManager where
append mode is only applied during writer creation. To fix this, update
LogFileWriterManager to ensure that when append is false, the file is properly
truncated or overwritten before creating the writer, so the test can reliably
verify that old content is removed and only new content is written.
Test logic issue: Append mode behavior not properly tested.
The test testAppendModeFalseCreatesNewFile has a misleading comment and doesn't actually test what it claims. The test:
- Creates a writer with `append=false`
- Closes the writer manually
- Creates a new writer with `append=false`
- Checks if content contains either line (not guaranteeing overwrite behavior)
This doesn't test the actual append=false behavior because the same writer instance is being reused due to the caching mechanism.
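For reference, plain `java.io.FileWriter` does truncate when opened with `append=false`; it is the writer caching that prevents the test from ever exercising that path. A standalone sketch (class and method names are illustrative):

```java
import java.io.FileWriter;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class TruncateDemo {
    // Writes a line in append mode, then reopens the same file with
    // append=false; the second open truncates, so only the second
    // line survives.
    static String rewriteLog(Path log) throws IOException {
        try (FileWriter w = new FileWriter(log.toFile(), true)) {
            w.write("Initial Line\n");
        }
        try (FileWriter w = new FileWriter(log.toFile(), false)) {
            w.write("New Line\n");
        }
        return Files.readString(log);
    }

    public static void main(String[] args) throws IOException {
        Path log = Files.createTempFile("truncate-demo", ".log");
        String content = rewriteLog(log);
        System.out.println(content.contains("Initial Line")); // prints "false"
        System.out.println(content.contains("New Line"));     // prints "true"
        Files.delete(log);
    }
}
```

So once the cache hands out a fresh writer per append mode, the underlying `FileWriter` semantics already give the expected overwrite behavior.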
The test should verify that append=false actually overwrites the file:
@Test
-void testAppendModeFalseCreatesNewFile() throws IOException
+void testAppendModeFalseOverwritesFile() throws IOException
{
- PrintWriter writer1 = LogFileWriterManager.getWriter(logFile.toString(), false);
+ // First, write some content with append=true
+ PrintWriter writer1 = LogFileWriterManager.getWriter(logFile.toString(), true);
writer1.println("Initial Line");
writer1.flush();
+ LogFileWriterManager.closeWriter(logFile.toString());
- // Manually close and remove
- LogFileWriterManager.closeWriter(logFile.toString());
-
- PrintWriter writer2 = LogFileWriterManager.getWriter(logFile.toString(), false);
+ // Now create a writer with append=false - should overwrite
+ PrintWriter writer2 = LogFileWriterManager.getWriter(logFile.toString(), false);
writer2.println("New Line");
writer2.flush();
String content = Files.readString(logFile);
- assertTrue(content.contains("Initial Line") || content.contains("New Line"));
- // NOTE: This test doesn't guarantee truncation unless we reimplement logic
- // to forcibly truncate when append=false
+ assertTrue(content.contains("New Line"));
+ assertFalse(content.contains("Initial Line"), "File should be overwritten, not appended to");
}

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Current:

@Test
void testAppendModeFalseCreatesNewFile() throws IOException
{
	PrintWriter writer1 = LogFileWriterManager.getWriter(logFile.toString(), false);
	writer1.println("Initial Line"); //$NON-NLS-1$
	writer1.flush();
	// Manually close and remove
	LogFileWriterManager.closeWriter(logFile.toString());
	PrintWriter writer2 = LogFileWriterManager.getWriter(logFile.toString(), false);
	writer2.println("New Line"); //$NON-NLS-1$
	writer2.flush();
	String content = Files.readString(logFile);
	assertTrue(content.contains("Initial Line") || content.contains("New Line")); //$NON-NLS-1$ //$NON-NLS-2$
	// NOTE: This test doesn't guarantee truncation unless we reimplement logic
	// to forcibly truncate when append=false
}

Suggested:

@Test
void testAppendModeFalseOverwritesFile() throws IOException
{
	// First, write some content with append=true
	PrintWriter writer1 = LogFileWriterManager.getWriter(logFile.toString(), true);
	writer1.println("Initial Line");
	writer1.flush();
	LogFileWriterManager.closeWriter(logFile.toString());
	// Now create a writer with append=false - should overwrite
	PrintWriter writer2 = LogFileWriterManager.getWriter(logFile.toString(), false);
	writer2.println("New Line");
	writer2.flush();
	String content = Files.readString(logFile);
	assertTrue(content.contains("New Line"));
	assertFalse(content.contains("Initial Line"), "File should be overwritten, not appended to");
}
🤖 Prompt for AI Agents
In
tests/com.espressif.idf.core.test/src/com/espressif/idf/core/util/test/LogFileWriterManagerTest.java
around lines 72 to 90, the testAppendModeFalseCreatesNewFile test does not
properly verify that append=false overwrites the file because it reuses the
cached writer instance. To fix this, modify the test to ensure the file is
truncated by either clearing the cache or deleting the file before creating the
second writer with append=false, then assert that the file content only contains
the new line and not the initial line, confirming overwrite behavior.
@sigmaaa hi ! Tested under: able to delete .log file ✅ LGTM 👍
@kolipakakondal @alirana01 hi! |
Thanks @sigmaaa for improving this PR and addressing the PR comments. LGTM
Description
This PR adds the capability to capture and write OpenOCD debug output to a file, providing better debugging and logging capabilities
Fixes # (IEP-1536)
Type of change
How has this been tested?
Scenarios verified
Test Configuration:
Dependent components impacted by this PR:
Checklist
Summary by CodeRabbit
New Features
Bug Fixes