fix(process): Dispatch heap analysis events to main process listeners (#1789) #2812
Open
faraz152 wants to merge 2 commits into square:main from
Conversation
When heap analysis runs in a separate :leakcanary process via RemoteHeapAnalyzerWorker, the HeapAnalysisDone event is only dispatched to listeners in the background process. Custom OnHeapAnalyzedListener callbacks registered in the main process never receive the result. This fix serializes the analysis result to a shared file in the worker, then passes the file path as WorkManager output data. The main process observes work completion via WorkInfo LiveData, reads the event back, and re-dispatches it to all configured event listeners. Fixes square#1789
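The serialize-to-file round trip described above can be sketched in plain Kotlin. This is a standalone illustration, not the PR's actual code: `AnalysisDoneEvent`, `writeEventFile`, and `readEventFile` are hypothetical stand-ins for LeakCanary's `HeapAnalysisDone` event and worker internals.

```kotlin
import java.io.ByteArrayInputStream
import java.io.ByteArrayOutputStream
import java.io.File
import java.io.ObjectInputStream
import java.io.ObjectOutputStream
import java.io.Serializable
import java.nio.file.Files

// Hypothetical stand-in for the HeapAnalysisDone event payload.
data class AnalysisDoneEvent(val uniqueId: String, val leakCount: Int) : Serializable

// Worker side (background process): serialize the event to a file in a
// directory both processes can reach, and return the path so it can be
// handed over as WorkManager output data.
fun writeEventFile(dir: File, event: AnalysisDoneEvent): String {
  val bytes = ByteArrayOutputStream().use { bos ->
    ObjectOutputStream(bos).use { it.writeObject(event) }
    bos.toByteArray()
  }
  val eventFile = File(dir, "heap-analysis-${event.uniqueId}.event")
  eventFile.writeBytes(bytes)
  return eventFile.absolutePath
}

// Main process side: read the event back from the path found in the
// worker's output data, deleting the file whether or not reading succeeds.
fun readEventFile(path: String): AnalysisDoneEvent? {
  val eventFile = File(path)
  if (!eventFile.exists()) return null
  return try {
    ObjectInputStream(ByteArrayInputStream(eventFile.readBytes())).use {
      it.readObject() as AnalysisDoneEvent
    }
  } finally {
    eventFile.delete()
  }
}

fun main() {
  val dir = Files.createTempDirectory("events").toFile()
  val path = writeEventFile(dir, AnalysisDoneEvent("42", 3))
  println(readEventFile(path)) // AnalysisDoneEvent(uniqueId=42, leakCount=3)
  println(File(path).exists()) // false: cleaned up after reading
}
```

The delete-in-`finally` mirrors the PR's approach: the file is consumed exactly once and removed even if deserialization throws.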
pyricau approved these changes on Mar 30, 2026

pyricau (Member) left a comment:
Thank you! The issue makes sense, and I like the approach. Left some feedback and a few open questions.
    try {
      val doneEvent = Serializables.fromByteArray<HeapAnalysisDone<*>>(eventFile.readBytes())
      if (doneEvent != null) {
        SharkLog.d { "Dispatching remote heap analysis result to main process listeners" }
Member
Probably don't need this log in the success case, we'll see an event anyway. Remove?
    } catch (e: Throwable) {
      SharkLog.d(e) { "Error reading remote worker event file" }
    } finally {
      eventFile.delete()
Member
If the main process dies after scheduling the RemoteHeapAnalyzerWorker, then it'll never listen to the result (=> no update) and the result file will never be deleted. Any way we could avoid that? Maybe re-observing on startup? Maybe cleanup on startup? idk
    // Observe the remote worker's completion from the main process so we can
    // re-dispatch the HeapAnalysisDone event to listeners running here.
    val workInfoLiveData = workManager.getWorkInfoByIdLiveData(heapAnalysisRequest.id)
    Handler(Looper.getMainLooper()).post {
Member
Use mainHandler from Handlers.kt instead
Author

Pushed fixes for all 3 comments:

- Remove success log in dispatchEventFromRemoteWorker (redundant, the event dispatch itself is visible)
- Use mainHandler from Handlers.kt instead of Handler(Looper.getMainLooper())
- Add dispatchAndCleanupOrphanedEventFiles() to handle the case where the main process dies after scheduling RemoteHeapAnalyzerWorker: on startup, scan filesDir for leftover event files, dispatch any recoverable events, and delete all orphaned files
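The dispatchAndCleanupOrphanedEventFiles() idea could look roughly like this. It is a sketch with hypothetical signatures, not the PR's actual code: `readEvent` and `dispatch` stand in for LeakCanary internals, and the `.event` file suffix is an assumed naming convention.

```kotlin
import java.io.File

// Sketch: on startup, sweep filesDir for event files left behind by a main
// process that died after scheduling the remote worker. Readable events are
// dispatched; every matching file is deleted regardless, so nothing leaks.
fun dispatchAndCleanupOrphanedEventFiles(
  filesDir: File,
  readEvent: (File) -> Any?,  // returns null when the file can't be parsed
  dispatch: (Any) -> Unit
) {
  val orphans = filesDir.listFiles { file -> file.name.endsWith(".event") } ?: return
  for (eventFile in orphans) {
    try {
      readEvent(eventFile)?.let(dispatch)
    } finally {
      eventFile.delete()
    }
  }
}

fun main() {
  val dir = java.nio.file.Files.createTempDirectory("filesDir").toFile()
  File(dir, "analysis-1.event").writeText("recoverable payload")
  File(dir, "unrelated.txt").writeText("untouched")
  val dispatched = mutableListOf<Any>()
  dispatchAndCleanupOrphanedEventFiles(dir, { it.readText() }, dispatched::add)
  println(dispatched)                        // [recoverable payload]
  println(dir.listFiles()!!.map { it.name }) // [unrelated.txt]
}
```

Filtering by suffix keeps the sweep from touching unrelated files in filesDir, and deleting in `finally` covers corrupt files too.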
Summary
When heap analysis runs in a separate :leakcanary process via RemoteHeapAnalyzerWorker, the HeapAnalysisDone event is dispatched only to listeners in the background process. Custom EventListener callbacks configured in the main process never receive the analysis result. This PR bridges the gap by serializing the done event in the background process and making it available to the main process through WorkManager's output data mechanism.
Related Issue
Fixes #1789
Changes
RemoteHeapAnalyzerWorker (background process):
- Serializes the HeapAnalysisDone event to a file in the app's filesDir (shared across processes)
- Passes the file path back as Result.success(outputData)
- Still calls InternalLeakCanary.sendEvent() in the background process for backward compatibility

RemoteWorkManagerHeapAnalyzer (main process):
- Observes the worker's WorkInfo via LiveData
- Reads the event file and re-dispatches the HeapAnalysisDone event to main process listeners

Design Decisions
- WorkManager's Data has a 10 KB limit. Heap analysis events can exceed this due to the serialized HeapAnalysis object. Writing to a file in filesDir (accessible by both processes of the same app) avoids this limit.
- The worker still calls sendEvent(), so existing setups that configure listeners in the background process continue to work.

Testing
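A standalone sketch (not the PR's code) of why the file-path indirection is needed: `MAX_DATA_BYTES` mirrors the 10 KB (10240-byte) cap of androidx.work's Data.MAX_DATA_BYTES, and the payload below is a simulated analysis result, not a real HeapAnalysis.

```kotlin
import java.io.ByteArrayOutputStream
import java.io.File
import java.io.ObjectOutputStream
import java.io.Serializable

// WorkManager rejects Data payloads above 10240 bytes (Data.MAX_DATA_BYTES),
// so a serialized heap analysis can't ride through outputData directly.
const val MAX_DATA_BYTES = 10_240

fun serialize(value: Serializable): ByteArray =
  ByteArrayOutputStream().use { bos ->
    ObjectOutputStream(bos).use { it.writeObject(value) }
    bos.toByteArray()
  }

fun main() {
  // Simulate a large analysis result with many leak trace strings.
  val bigResult = ArrayList(List(2_000) { "leak trace frame $it" })
  val bytes = serialize(bigResult)
  println(bytes.size > MAX_DATA_BYTES) // true: too big for outputData
  // Workaround: write the bytes to a shared file and pass only the path,
  // which is tiny and always fits in Data.
  val eventFile = File.createTempFile("heap-analysis", ".event")
  eventFile.writeBytes(bytes)
  println(eventFile.absolutePath.length < MAX_DATA_BYTES) // true
  eventFile.delete()
}
```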
- Compiled with ./gradlew :leakcanary:leakcanary-android-core:compileDebugKotlin
- Follows the existing Serializables pattern used for WorkManager input data
- The events are Serializable (by design, per the EventListener.Event class doc)

Checklist