Face detection and position tracking #268
base: main
Conversation
Walkthrough

This pull request introduces comprehensive real-time face detection capabilities to the capacitor-camera-preview plugin. It adds native implementations for both Android (using ML Kit) and iOS (using Vision framework), along with coordinate transformation, face alignment validation, and lighting quality detection utilities. An example app demonstrates the feature with interactive UI components for face overlays, guidance, and live demo pages.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant JS as JavaScript/App
    participant Plugin as Camera Plugin
    participant Detector as FaceDetectionManager
    participant MLKit as ML Kit / Vision
    participant Listener as JS Event Listener
    JS->>Plugin: startFaceDetection(options)
    activate Plugin
    Plugin->>Detector: initialize with options
    Plugin->>Detector: start()
    deactivate Plugin
    loop Frame Processing
        Detector->>MLKit: processFrame(ImageProxy)
        activate MLKit
        MLKit-->>Detector: VNFaceObservation[]
        deactivate MLKit
        Detector->>Detector: transform bounds & landmarks
        Detector->>Detector: apply throttling/motion detection
        Detector->>Plugin: onFaceDetectionResult(JSONObject)
    end
    Plugin->>Listener: emit faceDetection event
    Listener->>JS: callback(FaceDetectionResult)
    JS->>Plugin: stopFaceDetection()
    activate Plugin
    Plugin->>Detector: stop()
    Plugin->>MLKit: release detector
    deactivate Plugin
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

Areas requiring extra attention:
Possibly related PRs
Poem
Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)

📜 Recent review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
Actionable comments posted: 17
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
ios/Sources/CapgoCameraPreviewPlugin/Plugin.swift (1)
996-1014: Call `faceDetectionManager.stop()` when camera stops to release Vision framework resources.
The `CameraController.cleanup()` method should call `faceDetectionManager?.stop()` before cleaning up camera resources. The FaceDetectionManager has proper cleanup logic (waiting for in-flight Vision requests, resetting tracking state), but it's never invoked during the camera stop flow, leaving Vision framework resources and background dispatch queues potentially active.
ios/Sources/CapgoCameraPreviewPlugin/CameraController.swift (1)
439-485: Frame-rate tuning is only partially effective and currently applies even without face detection.
Two things to consider in this block:

- `configureSessionPreset(for:)` is called before `configureDeviceInputs(...)` in `prepare(...)`, so on first run `currentCameraPosition` and the front/rear devices are still nil. `configureFrameRateForFaceDetection()` will just early-return, meaning the 20-25 FPS optimization only kicks in on later calls (e.g. `updateAspectRatio`). If you want consistent behavior, consider invoking `configureFrameRateForFaceDetection()` again after inputs are configured (or move the call behind a point where `currentCameraPosition` and the active device are known).
- `configureSessionPreset(for:)` unconditionally calls `configureFrameRateForFaceDetection()`. That means even apps that never use face detection will get their active format and FPS clamped around 20-25 FPS, which may be surprising for high-FPS video or non-Vision use cases. Gating this behind a flag or `faceDetectionManager?.isRunning() == true` (or wiring it from the plugin's `startFaceDetection` path) would avoid changing baseline behavior for non-face-detection consumers.

Additionally, the new log lines use escaped interpolation:

```swift
print("[CameraController] Configured frame rate: \\(targetFPS) FPS")
print("[CameraController] Failed to configure frame rate: \\(error)")
```

Those will literally print `\(targetFPS)` / `\(error)` instead of the values. Dropping one backslash fixes it:

```diff
- print("[CameraController] Configured frame rate: \\(targetFPS) FPS")
+ print("[CameraController] Configured frame rate: \(targetFPS) FPS")
- print("[CameraController] Failed to configure frame rate: \\(error)")
+ print("[CameraController] Failed to configure frame rate: \(error)")
```

Also applies to: 539-589
README.md (1)
353-367: Face-detection docs mostly align, but some field optionality and options are out of sync with code.
The new README sections do a good job exposing the face-detection API surface (listener, start/stop, running check, and the core types), but there are a couple of mismatches with the actual typings/implementation:

- In `DetectedFace`, the README tables show `trackingId`, `rollAngle`, `yawAngle`, `pitchAngle`, and `landmarks` as required. In `src/definitions.ts`, these fields are defined as optional (`?`) and can legitimately be absent (e.g., tracking disabled, platform doesn't provide a given angle, landmarks off). The same applies to the individual fields in `FaceLandmarks`. Marking them as optional in the doc tables would better reflect real behavior.
- `FaceDetectionOptions` in the README documents `performanceMode`, `trackingEnabled`, `detectLandmarks`, `maxFaces`, and `minFaceSize`, but the TS/examples also pass a `detectClassifications` flag. If classifications are meant to be public API, it should be included here (and wired through the native implementations); otherwise, consider omitting it from examples to avoid confusion.

Once you update the JSDoc on the TS definitions, re-running docgen should bring this section into full alignment with the actual API.
Also applies to: 919-937, 1092-1135, 1302-1393
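Given the optionality described above, consumers are safer reading these fields defensively. A minimal sketch (the `DetectedFaceLike` shape and `describeFace` helper are illustrative, not part of the plugin API):

```typescript
// Illustrative shape mirroring the optional fields discussed above; the real
// interface is DetectedFace in src/definitions.ts.
interface DetectedFaceLike {
  trackingId?: number;
  rollAngle?: number;
  yawAngle?: number;
  pitchAngle?: number;
}

// Treat tracking/angle fields as possibly absent rather than required.
function describeFace(face: DetectedFaceLike): string {
  const id = face.trackingId ?? -1; // -1 when tracking is disabled
  const roll = face.rollAngle ?? 0; // platform may omit the angle entirely
  return `face ${id}: roll=${roll}`;
}
```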
🧹 Nitpick comments (37)
CapgoCameraPreview.podspec (1)
16-18: Remove extra blank line before block end.
RuboCop flags an extra empty line before the block closing. This is a minor style issue that can be resolved by removing the blank line.
Apply this diff to align with Ruby style conventions:
```diff
   s.swift_version = '5.1'
-
 end
```

example-app/ios/App/App.xcworkspace/xcshareddata/IDEWorkspaceChecks.plist (1)
1-8: Add this file to `.gitignore` instead of committing it.
`IDEWorkspaceChecks.plist` is a machine-generated Xcode workspace configuration file that should not be committed to version control. Different developers or Xcode versions may generate different contents, leading to unnecessary diffs and merge conflicts.
Move this file to `.gitignore` to maintain a clean repository and avoid IDE-specific configuration churn.
example-app/ios/App/App.xcodeproj/xcshareddata/xcschemes/App.xcscheme (1)
4-4: LGTM! Auto-generated Xcode version bump.
This version attribute change is automatically managed by Xcode and is harmless. It's incidental to the face detection features being added and could optionally be omitted from the PR to keep it focused on the core functionality.
example-app/src/app/components/media-viewer/media-viewer.component.ts (3)
1-6: Imports and DI usage are solid; consider tightening ElementRef typing.
Using `inject(...)` for `ModalController`, `GestureController`, and `ElementRef` in a standalone component is idiomatic and keeps the constructor clean. For a bit more type safety you could declare `protected readonly el = inject<ElementRef<HTMLElement>>(ElementRef);` so the host element type is explicit, but this is purely optional.
Also applies to: 24-28
43-77: Swipe-down gesture works; consider preserving any pre-existing transforms.
The swipe behavior and opacity feedback are clear and the thresholded close vs. snap-back logic looks good. One subtle improvement: if `ion-content` ever has a non-empty `transform` from CSS/animations, resetting it with `content.style.transform = ''` on snap-back will drop that. You could capture the initial `transform` in `onStart` and restore it in the snap-back branch to be fully non-destructive.
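The capture-and-restore idea can be sketched like this (plain TypeScript with a minimal element shape standing in for `ion-content`; names are illustrative):

```typescript
// Minimal stand-in for the element whose transform the gesture manipulates.
interface TransformHost {
  style: { transform: string };
}

let initialTransform = '';

// On gesture start, remember whatever transform was already applied.
function onStart(content: TransformHost): void {
  initialTransform = content.style.transform;
}

// On snap-back, restore the captured transform instead of clearing it,
// so a pre-existing CSS/animation transform survives the gesture.
function onSnapBack(content: TransformHost): void {
  content.style.transform = initialTransform;
}
```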
79-89: Backdrop click handling works; consider moving to a template/HostListener pattern.
Using `addEventListener` on `.media-container` and checking `event.target === container` correctly limits closing to backdrop clicks. Since Angular will destroy the element, this won't practically leak, but for consistency with the rest of the app you might later move this to a `(click)` binding or `@HostListener('click')` so Angular manages subscription/teardown for you.
example-app/ios/App/Podfile (2)
1-1: Bun-specific `pods_helpers` path is fine but tightly couples to Capacitor/Bun versions.
The `require_relative` into `.bun/@[email protected]+…` matches Bun's store layout and should correctly expose `assertDeploymentTarget`, but it hard-codes both the Capacitor version and Bun's hash into the Podfile. That's OK if this file is regenerated as part of your Bun/Capacitor upgrade flow, but otherwise any version bump will require a manual edit here.
If that's intentional, consider adding a short comment explaining that this path is Bun-generated and must be updated when upgrading Capacitor, so future maintainers don't "fix" it to a different layout by mistake.
12-19: Pod paths look consistent with Bun layout; consider DRY-ing Capacitor paths and verify CapgoCameraPreview intent.
The pod entries all point into Bun's `.bun` store and look consistent with a Bun-managed Capacitor 7 setup, including the additional CapacitorApp/Haptics/Keyboard/StatusBar and CapgoHomeIndicator pods. Two small maintainability points:

- You repeat the long Capacitor iOS path twice for `Capacitor` and `CapacitorCordova`. You can slightly simplify future upgrades by binding it to a local variable:

```diff
 def capacitor_pods
-  pod 'Capacitor', :path => '../../node_modules/.bun/@[email protected]+1d43a6a1837411c8/node_modules/@capacitor/ios'
-  pod 'CapacitorCordova', :path => '../../node_modules/.bun/@[email protected]+1d43a6a1837411c8/node_modules/@capacitor/ios'
+  capacitor_ios_path = '../../node_modules/.bun/@[email protected]+1d43a6a1837411c8/node_modules/@capacitor/ios'
+  pod 'Capacitor', :path => capacitor_ios_path
+  pod 'CapacitorCordova', :path => capacitor_ios_path
   pod 'CapacitorApp', :path => '../../node_modules/.bun/@[email protected]+1d43a6a1837411c8/node_modules/@capacitor/app'
   pod 'CapacitorHaptics', :path => '../../node_modules/.bun/@[email protected]+1d43a6a1837411c8/node_modules/@capacitor/haptics'
   pod 'CapacitorKeyboard', :path => '../../node_modules/.bun/@[email protected]+1d43a6a1837411c8/node_modules/@capacitor/keyboard'
   pod 'CapacitorStatusBar', :path => '../../node_modules/.bun/@[email protected]+1d43a6a1837411c8/node_modules/@capacitor/status-bar'
   pod 'CapgoCameraPreview', :path => '../../node_modules/.bun/@capgo+camera-preview@file+../node_modules/@capgo/camera-preview'
   pod 'CapgoHomeIndicator', :path => '../../node_modules/.bun/@[email protected]+1d43a6a1837411c8/node_modules/@capgo/home-indicator'
 end
```

- For `CapgoCameraPreview`, the Bun `file+../node_modules/@capgo/camera-preview` path is a bit opaque to someone not familiar with Bun's store layout. It would be good to double-check that `pod install` picks up the intended podspec from this location and, if so, maybe add a short comment like "# Bun store path for local @capgo/camera-preview dependency" for clarity.

If your goal is for the example app to always use the version from `bun.lockb`, this setup is reasonable; if instead you want it to track the plugin source in this monorepo, you might consider switching `:path` to a direct relative path to the plugin project instead.
src/web.ts (1)
1260-1263: Use proper `FaceDetectionOptions` type instead of `any` for face detection config.
The `FaceDetectionOptions` type is defined in `definitions.ts` and used in the interface at line 1179, but the web.ts implementation uses `any`. Update the parameter type for consistency with the rest of the codebase and to provide better IDE support:

```diff
- async startFaceDetection(_config?: any): Promise<void> {
+ async startFaceDetection(_config?: FaceDetectionOptions): Promise<void> {
```

Also note: the JSDoc parameter is documented as `@param _options` but the actual parameter name is `_config`; align these.
example-app/src/app/components/face-guidance-overlay/face-guidance-overlay.component.ts (2)
1-3: CommonModule import may be unnecessary.
With Angular 17+ control flow syntax (`@if`, `@else`) being used in the template, `CommonModule` is not required unless you're using other directives like `NgClass` or `NgStyle`. The `[class.aligned]` binding is native template syntax and doesn't require CommonModule.

```diff
-import { Component, Input, signal } from '@angular/core';
-import { CommonModule } from '@angular/common';
+import { Component, Input, signal } from '@angular/core';

 @Component({
   selector: 'app-face-guidance-overlay',
   standalone: true,
-  imports: [CommonModule],
+  imports: [],
```
114-155: Unused CSS: countdown timer styles are not rendered.
The `.countdown-timer`, `.countdown-number`, `.countdown-text` classes and `@keyframes countdown-pulse` animation are defined but not used in the template. If this is planned for future implementation, consider adding a TODO comment or removing the dead code.
182-193: Hardcoded thresholds in `recommendedExposureCompensation` don't use configurable values.
The exposure compensation calculation uses hardcoded values `0.4` and `0.7`, while the class allows configurable `minBrightness` (default 0.25) and `maxBrightness` (default 0.85) thresholds. This inconsistency could lead to unexpected behavior when custom thresholds are provided.
The Android counterpart uses the same hardcoded values, so this may be intentional for cross-platform consistency.
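For reference, the hardcoded mapping behaves like the following TypeScript transliteration (a sketch for illustration only; `optimalMin`/`optimalMax` stand in for the thresholds under discussion):

```typescript
// Sketch of the clamped linear exposure-compensation mapping described above.
// brightness is a normalized 0..1 level; the result is an EV bias in [-2, +2].
function recommendedExposureCompensation(
  brightness: number,
  optimalMin = 0.4,
  optimalMax = 0.7,
): number {
  if (brightness < optimalMin) {
    // Dark scene: raise exposure, clamped to +2 EV
    return Math.min(2.0, (optimalMin - brightness) * 3.0);
  }
  if (brightness > optimalMax) {
    // Bright scene: lower exposure, clamped to -2 EV
    return Math.max(-2.0, (optimalMax - brightness) * 3.0);
  }
  return 0.0; // within the optimal band, no adjustment
}
```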
Consider using the configured thresholds or dedicated compensation thresholds:
```diff
 public var recommendedExposureCompensation: Float {
     if isTooDark {
         // Increase exposure for dark scenes
-        return Float(min(2.0, (0.4 - brightnessLevel) * 3.0))
+        return Float(min(2.0, (optimalMin - brightnessLevel) * 3.0))
     } else if isTooBright {
         // Decrease exposure for bright scenes
-        return Float(max(-2.0, (0.7 - brightnessLevel) * 3.0))
+        return Float(max(-2.0, (optimalMax - brightnessLevel) * 3.0))
     }
     return 0.0 // No adjustment needed
 }
```

Note: This would require passing the thresholds to `LightingResult` or making it a method on the detector class.
example-app/README.md (1)
126-144: Add language specifier to fenced code block.
The project structure code block is missing a language specifier. While directory trees don't have a standard language, you can use `text` or `plaintext` to satisfy linters.

````diff
-```
+```text
 example-app/
 ├── src/
````

example-app/src/app/services/face-alignment.service.ts (1)
43-43: Remove empty constructor.
The empty constructor is unnecessary and can be removed. Angular's DI will work without it.

```diff
-  constructor() {}
-
   /**
    * Update validation configuration
    */
```
48-48: Avoid `any` type for the listener reference.
Using `any` loses type safety. The listener returned by `addListener` should be typed as `PluginListenerHandle`.

```diff
- private faceDetectionListener: any;
+ private faceDetectionListener: import('@capacitor/core').PluginListenerHandle | null = null;
```
143-149: Await async methods to prevent race conditions.
`toggleFaceDetection` calls async methods without awaiting, which could cause race conditions if the user toggles rapidly.

```diff
-  toggleFaceDetection() {
+  async toggleFaceDetection() {
     if (this.isFaceDetectionActive()) {
-      this.stopFaceDetection();
+      await this.stopFaceDetection();
     } else {
-      this.startFaceDetection();
+      await this.startFaceDetection();
     }
   }
```
160-202: Consider encapsulating `AlignmentResult` fields.
Using public fields directly is unconventional in Java. While acceptable for simple DTOs, consider using private fields with getters for better encapsulation and consistency with Java conventions.
Alternatively, this pattern is acceptable for internal utility classes where simplicity is preferred over strict encapsulation.
193-200: Consider importing `ArrayList` at the top of the file.
Using the fully qualified name `java.util.ArrayList` inline works but importing it at the top would be cleaner.

```diff
 package app.capgo.capacitor.camera.preview;
+
+import java.util.ArrayList;
```

Then use `ArrayList<String>` directly on line 194.
ios/Sources/CapgoCameraPreviewPlugin/FaceCoordinateTransformer.swift (1)
40-45: Remove or guard the debug print statement.
The `print()` statement in the initializer will log to the console in production builds. Consider removing it or using a conditional check / proper logging framework.

```diff
-        print("""
-            [FaceCoordTransformer] Created: frame=\(Int(frameWidth))x\(Int(frameHeight)), \
-            preview=\(Int(previewWidth))x\(Int(previewHeight)), \
-            gravity=\(videoGravity.rawValue), orientation=\(orientation.rawValue)
-            """)
+        #if DEBUG
+        print("""
+            [FaceCoordTransformer] Created: frame=\(Int(frameWidth))x\(Int(frameHeight)), \
+            preview=\(Int(previewWidth))x\(Int(previewHeight)), \
+            gravity=\(videoGravity.rawValue), orientation=\(orientation.rawValue)
+            """)
+        #endif
```

ios/Tests/CameraPreviewPluginTests/CameraPreviewPluginTests.swift (1)
4-9: Test coverage is minimal for the new face detection feature.
This test only verifies the plugin instantiates. Consider adding tests for the new face detection functionality (start/stop detection, result parsing, error handling).
Would you like me to generate unit test stubs for the face detection features?
example-app/src/app/pages/face-detection-demo/face-detection-demo.page.html (1)
54-67: Consider using more semantically appropriate icons for the control chips.
The current icons (`flash`, `flash-off`, `camera`) don't clearly represent their functions (Guidance, Debug, Landmarks). Consider:

- Guidance: `compass-outline` or `navigate-outline`
- Debug: `bug-outline` or `code-outline`
- Landmarks: `location-outline` or `pin-outline`

```diff
-  <ion-chip (click)="toggleGuidanceOverlay()" [color]="showGuidanceOverlay() ? 'primary' : 'medium'">
-    <ion-icon name="flash"></ion-icon>
+  <ion-chip (click)="toggleGuidanceOverlay()" [color]="showGuidanceOverlay() ? 'primary' : 'medium'">
+    <ion-icon name="compass-outline"></ion-icon>
     Guidance
   </ion-chip>
-  <ion-chip (click)="toggleDebugOverlay()" [color]="showDebugOverlay() ? 'primary' : 'medium'">
-    <ion-icon name="flash-off"></ion-icon>
+  <ion-chip (click)="toggleDebugOverlay()" [color]="showDebugOverlay() ? 'primary' : 'medium'">
+    <ion-icon name="bug-outline"></ion-icon>
     Debug
   </ion-chip>
-  <ion-chip (click)="toggleLandmarks()" [color]="showLandmarks() ? 'primary' : 'medium'">
-    <ion-icon name="camera"></ion-icon>
+  <ion-chip (click)="toggleLandmarks()" [color]="showLandmarks() ? 'primary' : 'medium'">
+    <ion-icon name="location-outline"></ion-icon>
     Landmarks
   </ion-chip>
```
3549-3555: Minor thread safety consideration.
The analyzer callback runs on `cameraExecutor` while `faceDetectionManager` can be set to null from the main thread in `disableFaceDetection()`. The null check protects against NPE, but consider making `faceDetectionManager` volatile for visibility guarantees across threads.

```diff
- private FaceDetectionManager faceDetectionManager;
+ private volatile FaceDetectionManager faceDetectionManager;
```
53-69: Consider `will-change: transform` instead of `will-change: contents`.
The `will-change: contents` value has limited browser support and may not provide the intended optimization. For GPU-accelerated overlays, `will-change: transform` or `will-change: opacity` are more reliable.

```diff
 .face-overlay {
   position: fixed;
   top: 0;
   left: 0;
   width: 100%;
   height: 100%;
   pointer-events: none;
   z-index: 9000;
-  // Performance: Enable GPU acceleration and prevent layout shifts
-  will-change: contents;
+  // Performance: Enable GPU acceleration
+  will-change: transform;
   transform: translateZ(0);
```
178-198: Extremely high z-index may conflict with other UI layers.
The `z-index: 100000` on `.stats-sheet` is very high and could unexpectedly overlap system UI or modal dialogs. Consider using a more conservative value or a CSS custom property for coordinated z-index management.

```diff
 .stats-sheet {
   position: fixed;
   left: 50%;
   transform: translateX(-50%);
   top: 0%; /* Default starting position, overridden by inline style */
   width: min(560px, calc(100% - 24px));
   max-height: calc(95vh - var(--ion-safe-area-top, 0px));
-  z-index: 100000; /* Above all UI including header */
+  z-index: 10001; /* Above header (10000) */
```
192-192: Comment contradicts the actual transition value.
The comment says "Smooth dragging without transition delay" but `transition: top 0ms` means no transition at all. If smooth dragging is intended, a small duration like `16ms` or `32ms` would be appropriate. If instant updates are intended, the comment should reflect that.

```diff
-  transition: top 0ms; /* Smooth dragging without transition delay */
+  transition: top 0ms; /* Instant position updates for responsive dragging */
```
243-245: Remove empty lines between rule blocks.
There are redundant empty lines that could be cleaned up for consistency.
android/src/main/java/app/capgo/capacitor/camera/preview/CameraPreview.java (2)
2046-2051: Face detection errors are silently logged without notifying JS.
The `onFaceDetectionError` callback only logs the error. Consider emitting an error event to JS so the application can handle detection failures gracefully.

```diff
 @Override
 public void onFaceDetectionError(String error) {
     Log.e(TAG, "Face detection error: " + error);
+    JSObject errorData = new JSObject();
+    errorData.put("error", error);
+    notifyListeners("faceDetectionError", errorData);
 }
```
103-111: FaceDetectionManager is not cleaned up in handleOnDestroy.
When the plugin is destroyed, `faceDetectionManager` should be stopped and nullified to prevent resource leaks.

```diff
 @Override
 protected void handleOnDestroy() {
     super.handleOnDestroy();
+    if (faceDetectionManager != null) {
+        faceDetectionManager.stopDetection();
+        faceDetectionManager = null;
+    }
     if (cameraXView != null) {
         cameraXView.stopSession();
         cameraXView = null;
     }
     lastSessionConfig = null;
 }
```
2415-2417: Face detection errors are silently logged without notifying JS.
Similar to Android, the `faceDetectionDidFail` delegate method only logs the error. Consider emitting an error event to JS for consistent error handling across platforms.

```diff
 func faceDetectionDidFail(error: Error) {
     print("[FaceDetection] Detection failed: \(error.localizedDescription)")
+    notifyListeners("faceDetectionError", data: ["error": error.localizedDescription])
 }
```
410-421: Multiple listener registrations will orphan previous listeners.
If `addFaceDetectionListener` is called multiple times without first calling `removeFaceDetectionListener`, the previous listener handle is overwritten and the old listener becomes orphaned (still active but unreferenceable).

```diff
 async addFaceDetectionListener(
   callback: (result: FaceDetectionResult) => void,
 ): Promise<void> {
+  // Remove existing listener before adding new one
+  await this.removeFaceDetectionListener();
   this.faceDetectionListener = await this.#cameraView.addListener(
     'faceDetection',
     (result: FaceDetectionResult) => {
       this.ngZone.run(() => {
         callback(result);
       });
     },
   );
 }
```
385-392: Consider removing face detection listener when stopping detection.
For cleaner resource management, `stopFaceDetection` could automatically remove the listener to prevent stale callbacks.

```diff
 async stopFaceDetection(): Promise<void> {
   await this.#cameraView.stopFaceDetection();
+  await this.removeFaceDetectionListener();
 }
```
1-427: Face-detection demo page is well-structured and lifecycle-aware.
This page is nicely put together:

- Uses signals/computed + `ChangeDetectionStrategy.OnPush` appropriately for a high-frequency UI.
- Starts the camera and face detection once in `ngOnInit`, and cleans up correctly in `ngOnDestroy` by stopping detection, removing the listener, stopping the camera, cancelling `requestAnimationFrame`, and unregistering the resize listener.
- Overlays for face boxes and landmarks use normalized coordinates and cached `screenWidth`/`screenHeight`, which matches the full-screen preview (x=0, y=0, width/height=window.inner*) used in `startCameraWithFaceDetection()`.
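The normalized-to-pixel mapping those overlays rely on boils down to the following sketch (assumed shapes; `faceBoxPixels` is illustrative, not the page's actual helper):

```typescript
// Face bounds normalized to the 0..1 range, as emitted by the plugin.
interface NormalizedBounds {
  x: number;
  y: number;
  width: number;
  height: number;
}

// Scale normalized bounds by the cached screen size to get CSS pixel geometry.
function faceBoxPixels(b: NormalizedBounds, screenW: number, screenH: number) {
  return {
    left: b.x * screenW,
    top: b.y * screenH,
    width: b.width * screenW,
    height: b.height * screenH,
  };
}
```

This only holds while the preview fills the screen; a non-full-screen preview would need the container's own origin and size instead.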
getFaceBoxStyle/getLandmarkStyleto derive dimensions fromframeWidth/frameHeightplus the actual preview container size, but that’s an optional enhancement for now.android/src/main/java/app/capgo/capacitor/camera/preview/LightingQualityDetector.java (2)
85-90: Consider using `@OptIn` annotation for experimental API usage.
The `@ExperimentalGetImage` annotation on Line 86 is applied locally, but this experimental API usage should ideally be acknowledged at the method level for clarity.

```diff
+@androidx.annotation.OptIn(markerClass = androidx.camera.core.ExperimentalGetImage.class)
 private double calculateBrightness(ImageProxy image) {
-    @androidx.camera.core.ExperimentalGetImage
     android.media.Image mediaImage = image.getImage();
```
129-135: Consider using a builder pattern or constructor for `LightingResult`.
Public mutable fields are convenient but allow inconsistent states. Since this is an internal utility, this is acceptable, but consider making fields `final` and using a constructor for immutability.
ios/Sources/CapgoCameraPreviewPlugin/FaceDetectionManager.swift (3)
282-282: Redundant nil initialization.
Per SwiftLint hint, initializing an optional variable with `nil` is redundant since optionals default to `nil`.

```diff
 // Extract landmarks if requested
-var landmarks: FaceLandmarks? = nil
+var landmarks: FaceLandmarks?
```
407-424: Motion detection implementation doesn't actually detect motion.
The current implementation only checks if frame timestamps differ, which is always true for consecutive camera frames. The comment on Line 415 acknowledges this: "More sophisticated: compare pixel data or use optical flow."
Since this will always return `true` for any two distinct frames, `motionDetectionEnabled` effectively does nothing useful. Consider either implementing actual motion detection or removing this feature to avoid misleading API surface.
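One lightweight alternative is a mean-absolute-difference check over downsampled luma samples from consecutive frames, sketched here in TypeScript for illustration (the native code would operate on pixel buffers; all names are hypothetical):

```typescript
// Returns a motion score in [0, 1]: 0 = identical frames, 1 = maximal change.
function frameMotionScore(prev: Uint8Array, curr: Uint8Array): number {
  if (prev.length !== curr.length || prev.length === 0) {
    return 1; // size mismatch or no data: conservatively treat as motion
  }
  let sum = 0;
  for (let i = 0; i < prev.length; i++) {
    sum += Math.abs(curr[i] - prev[i]);
  }
  return sum / (prev.length * 255);
}

function hasSignificantMotion(
  prev: Uint8Array,
  curr: Uint8Array,
  threshold = 0.02,
): boolean {
  return frameMotionScore(prev, curr) > threshold;
}
```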
477-484: `FaceDetectionOptions` should conform to `Codable` for consistency.
All other data models (`FaceDetectionResult`, `DetectedFace`, `FaceBounds`, `FaceLandmarks`, `Point`) conform to `Codable`, but `FaceDetectionOptions` does not. If options need to be serialized from JavaScript plugin calls, this could cause issues.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (2)

- `bun.lock` is excluded by `!**/*.lock`
- `example-app/bun.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (49)
- .github/workflows/bump_version.yml (0 hunks)
- .github/workflows/github-releases-to-discord.yml (0 hunks)
- .github/workflows/test.yml (0 hunks)
- .vscode/settings.json (0 hunks)
- CapgoCameraPreview.podspec (1 hunks)
- README.md (6 hunks)
- android/build.gradle (1 hunks)
- android/src/main/java/app/capgo/capacitor/camera/preview/CameraPreview.java (3 hunks)
- android/src/main/java/app/capgo/capacitor/camera/preview/CameraXView.java (4 hunks)
- android/src/main/java/app/capgo/capacitor/camera/preview/FaceAlignmentValidator.java (1 hunks)
- android/src/main/java/app/capgo/capacitor/camera/preview/FaceCoordinateTransformer.java (1 hunks)
- android/src/main/java/app/capgo/capacitor/camera/preview/FaceDetectionManager.java (1 hunks)
- android/src/main/java/app/capgo/capacitor/camera/preview/LightingQualityDetector.java (1 hunks)
- example-app/.browserslistrc (0 hunks)
- example-app/.editorconfig (0 hunks)
- example-app/.eslintrc.json (0 hunks)
- example-app/.vscode/extensions.json (0 hunks)
- example-app/.vscode/settings.json (0 hunks)
- example-app/README.md (1 hunks)
- example-app/android/capacitor.settings.gradle (1 hunks)
- example-app/ios/App/App.xcodeproj/project.pbxproj (2 hunks)
- example-app/ios/App/App.xcodeproj/xcshareddata/xcschemes/App.xcscheme (1 hunks)
- example-app/ios/App/App.xcworkspace/xcshareddata/IDEWorkspaceChecks.plist (1 hunks)
- example-app/ios/App/Podfile (2 hunks)
- example-app/package.json (2 hunks)
- example-app/src/app/app.routes.ts (1 hunks)
- example-app/src/app/components/face-filter-demo/face-filter-demo.component.html (1 hunks)
- example-app/src/app/components/face-filter-demo/face-filter-demo.component.scss (1 hunks)
- example-app/src/app/components/face-filter-demo/face-filter-demo.component.ts (1 hunks)
- example-app/src/app/components/face-guidance-overlay/face-guidance-overlay.component.ts (1 hunks)
- example-app/src/app/components/media-viewer/media-viewer.component.ts (2 hunks)
- example-app/src/app/core/capacitor-camera-preview.service.ts (2 hunks)
- example-app/src/app/pages/camera-view/camera-view.page.html (1 hunks)
- example-app/src/app/pages/camera-view/camera-view.page.ts (3 hunks)
- example-app/src/app/pages/camera-view/keep (1 hunks)
- example-app/src/app/pages/face-detection-demo/face-detection-demo.page.html (1 hunks)
- example-app/src/app/pages/face-detection-demo/face-detection-demo.page.scss (1 hunks)
- example-app/src/app/pages/face-detection-demo/face-detection-demo.page.ts (1 hunks)
- example-app/src/app/services/face-alignment.service.ts (1 hunks)
- example-app/src/global.scss (1 hunks)
- ios/Sources/CapgoCameraPreviewPlugin/CameraController.swift (5 hunks)
- ios/Sources/CapgoCameraPreviewPlugin/FaceAlignmentValidator.swift (1 hunks)
- ios/Sources/CapgoCameraPreviewPlugin/FaceCoordinateTransformer.swift (1 hunks)
- ios/Sources/CapgoCameraPreviewPlugin/FaceDetectionManager.swift (1 hunks)
- ios/Sources/CapgoCameraPreviewPlugin/LightingQualityDetector.swift (1 hunks)
- ios/Sources/CapgoCameraPreviewPlugin/Plugin.swift (6 hunks)
- ios/Tests/CameraPreviewPluginTests/CameraPreviewPluginTests.swift (1 hunks)
- src/definitions.ts (3 hunks)
- src/web.ts (4 hunks)
💤 Files with no reviewable changes (9)
- .github/workflows/test.yml
- example-app/.vscode/extensions.json
- .vscode/settings.json
- .github/workflows/github-releases-to-discord.yml
- example-app/.browserslistrc
- example-app/.editorconfig
- example-app/.vscode/settings.json
- example-app/.eslintrc.json
- .github/workflows/bump_version.yml
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-08-29T15:37:50.891Z
Learnt from: Dalanir
Repo: Cap-go/capacitor-camera-preview PR: 217
File: ios/Sources/CapgoCameraPreviewPlugin/CameraController.swift:1242-1246
Timestamp: 2025-08-29T15:37:50.891Z
Learning: In the iOS CameraController.swift handleTap method, the team wants to maintain the current behavior that forces exposureMode to .autoExpose and resets exposureTargetBias to 0.0 when tapping to focus, even though it overrides the user's current exposure mode setting.
Applied to files:
- src/web.ts
- ios/Sources/CapgoCameraPreviewPlugin/CameraController.swift
- ios/Sources/CapgoCameraPreviewPlugin/Plugin.swift
🧬 Code graph analysis (14)
example-app/src/app/components/face-guidance-overlay/face-guidance-overlay.component.ts (1)
example-app/src/app/pages/face-detection-demo/face-detection-demo.page.ts (1)
`Component` (10-427)
example-app/src/app/services/face-alignment.service.ts (2)
android/src/main/java/app/capgo/capacitor/camera/preview/FaceAlignmentValidator.java (1)
`AlignmentResult` (160-202)
src/definitions.ts (1)
`DetectedFace` (456-503)
android/src/main/java/app/capgo/capacitor/camera/preview/FaceDetectionManager.java (1)
ios/Sources/CapgoCameraPreviewPlugin/FaceDetectionManager.swift (9)
`isRunning` (103-105), `hasSignificantMotion` (407-424), `getTrackingId` (340-350), `extractLandmarks` (306-334), `onAppBackground` (429-432), `onAppForeground` (436-440), `enableThermalThrottling` (445-448), `disableThermalThrottling` (451-454), `isThermalThrottlingActive` (459-461)
src/web.ts (1)
src/definitions.ts (1)
ExposureMode(341-341)
android/src/main/java/app/capgo/capacitor/camera/preview/FaceAlignmentValidator.java (2)
ios/Sources/CapgoCameraPreviewPlugin/FaceAlignmentValidator.swift (1)
validate(73-148)example-app/src/app/services/face-alignment.service.ts (1)
AlignmentResult(16-25)
example-app/src/app/core/capacitor-camera-preview.service.ts (1)
src/definitions.ts (2)
FaceDetectionOptions(384-424)FaceDetectionResult(430-450)
ios/Tests/CameraPreviewPluginTests/CameraPreviewPluginTests.swift (1)
src/index.ts (1)
CameraPreview(10-10)
ios/Sources/CapgoCameraPreviewPlugin/LightingQualityDetector.swift (1)
android/src/main/java/app/capgo/capacitor/camera/preview/LightingQualityDetector.java (1)
LightingResult(129-151)
ios/Sources/CapgoCameraPreviewPlugin/Plugin.swift (1)
ios/Sources/CapgoCameraPreviewPlugin/FaceDetectionManager.swift (5)
`onAppForeground` (436-440), `onAppBackground` (429-432), `isRunning` (103-105), `start` (75-80), `stop` (84-98)
android/src/main/java/app/capgo/capacitor/camera/preview/LightingQualityDetector.java (1)
ios/Sources/CapgoCameraPreviewPlugin/LightingQualityDetector.swift (2)
analyzeLighting(45-78)calculateBrightness(82-109)
example-app/src/app/pages/face-detection-demo/face-detection-demo.page.ts (2)
example-app/src/app/components/face-guidance-overlay/face-guidance-overlay.component.ts (2)
Component(10-203)FaceGuidanceState(4-8)src/definitions.ts (2)
DetectedFace(456-503)FaceDetectionResult(430-450)
android/src/main/java/app/capgo/capacitor/camera/preview/FaceCoordinateTransformer.java (1)
ios/Sources/CapgoCameraPreviewPlugin/FaceCoordinateTransformer.swift (2)
transformBounds(55-122)transformPoint(128-133)
example-app/src/app/pages/camera-view/camera-view.page.ts (3)
src/definitions.ts (2)
DetectedFace(456-503)FaceDetectionResult(430-450)example-app/src/app/core/capacitor-camera-preview.service.ts (1)
isRunning(77-79)ios/Sources/CapgoCameraPreviewPlugin/FaceDetectionManager.swift (1)
isRunning(103-105)
example-app/src/app/components/face-filter-demo/face-filter-demo.component.ts (2)
src/definitions.ts (2)
DetectedFace(456-503)FaceDetectionResult(430-450)src/index.ts (1)
CameraPreview(10-10)
🪛 markdownlint-cli2 (0.18.1)
example-app/README.md
126-126: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🪛 RuboCop (1.81.7)
CapgoCameraPreview.podspec
[convention] 17-18: Extra empty line detected at block body end.
(Layout/EmptyLinesAroundBlockBody)
🪛 SwiftLint (0.57.0)
ios/Sources/CapgoCameraPreviewPlugin/FaceDetectionManager.swift
[Warning] 282-282: Initializing an optional variable with nil is redundant
(redundant_optional_initialization)
🔇 Additional comments (48)
example-app/package.json (2)
9-9: Verify build script alignment and platform constraints. The build script now explicitly invokes `bun run cap sync ios`, which may cause issues on non-iOS platforms or in CI/CD environments that don't have iOS tooling available. Ensure this change works across all intended build environments (local development, CI/CD for Android/Web, etc.).
71-71: Clarify the TypeScript downgrade from 5.9.0 to 5.8.0. The TypeScript version has been downgraded, which is atypical unless there's a compatibility constraint. Verify that this downgrade is intentional and compatible with Angular 19.2.6 and the face detection implementation introduced in this PR.
Regarding Angular 19.2 compatibility with TypeScript 5.8, the downgrade is technically compatible with the Angular version in use. However, the downgrade from ~5.9.0 to ~5.8.0 is atypical and prevents access to TypeScript 5.9.x bugfix updates. Please document why this downgrade was necessary.
example-app/src/app/components/media-viewer/media-viewer.component.ts (1)
30-37: Gesture lifecycle wiring via Angular hooks looks correct. Initializing the gesture in `ngAfterViewInit` and destroying it in `ngOnDestroy` is the right pattern to avoid dangling listeners. No functional issues here; just ensure this component is not re-used outside a modal context where `dismiss()` might behave differently.
example-app/android/capacitor.settings.gradle (1)
1-21: LGTM - Auto-generated file updated correctly. These path updates reflect the regenerated bun package hashes after running `capacitor update`. The changes are consistent across all Capacitor modules and align with the expected tooling behavior.
example-app/src/app/components/face-filter-demo/face-filter-demo.component.html (2)
16-24: LGTM - Good track expression with fallback. The `track face.trackingId || $index` pattern provides a stable identity when `trackingId` is available while falling back to the index when it's not. The face overlay structure with transform and size bindings looks well-designed for the AR filter use case.
95-111: LGTM - Face details section is well-guarded. Proper null checks using `@if` blocks for optional properties (`rollAngle`, `yawAngle`, `smilingProbability`). The `as face` alias on line 98 is a clean pattern for accessing the first detected face.
src/web.ts (2)
1269-1272: Good API surface design for an unsupported platform. The asymmetric behavior is appropriate: `stopFaceDetection()` throws because attempting an unsupported operation should fail explicitly, while `isFaceDetectionRunning()` returns `false` as a safe query that allows consumers to check availability without try/catch.
Also applies to: 1279-1281
539-539: Clean pattern for suppressing unused parameter warnings. Using `void _param` is an idiomatic way to acknowledge intentionally unused parameters without triggering linter warnings.
Also applies to: 1222-1224, 1239-1241
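A minimal TypeScript sketch of this idiom (the interface and class names here are hypothetical, not the plugin's actual code):

```typescript
// Hypothetical web stub illustrating the `void _param` idiom: the parameter
// stays in the signature to satisfy the shared interface, and `void` marks
// it as intentionally unused so no-unused-vars linters stay quiet.
interface FaceDetectionQuery {
  isFaceDetectionRunning(options?: { verbose?: boolean }): boolean;
}

class WebStub implements FaceDetectionQuery {
  isFaceDetectionRunning(_options?: { verbose?: boolean }): boolean {
    void _options; // acknowledged but deliberately ignored on web
    return false;  // safe query: the web platform never runs native detection
  }
}
```

The `void expr` expression evaluates its operand and discards the result, which is exactly the "used but ignored" signal linters look for.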
example-app/src/app/components/face-guidance-overlay/face-guidance-overlay.component.ts (1)
192-203: LGTM! The component implementation correctly uses Angular signals with an Input setter pattern. The protected access modifier appropriately exposes the signal to the template while preventing external access.
ios/Sources/CapgoCameraPreviewPlugin/LightingQualityDetector.swift (2)
80-108: LGTM! The brightness calculation handles multiple pixel formats correctly with proper memory management (`defer` for unlock). The sampling strategy (every 8th pixel) is a good performance optimization, and the ITU-R BT.601 luminance coefficients in the BGRA path are correct.
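To make the approach concrete, here is a small TypeScript sketch (an illustration of the technique, not the plugin's actual Swift code) of BT.601 luma averaging over a BGRA buffer with the same every-8th-pixel sampling idea:

```typescript
// Average brightness of a tightly packed BGRA buffer, normalized to 0..1.
// Samples every `sampleStep`-th pixel (the review notes every 8th) and uses
// ITU-R BT.601 luma weights: Y = 0.299 R + 0.587 G + 0.114 B.
function averageBrightnessBGRA(pixels: Uint8Array, sampleStep = 8): number {
  let sum = 0;
  let count = 0;
  for (let i = 0; i + 2 < pixels.length; i += 4 * sampleStep) {
    const b = pixels[i];
    const g = pixels[i + 1];
    const r = pixels[i + 2]; // BGRA layout: byte 3 is alpha, ignored
    sum += 0.299 * r + 0.587 * g + 0.114 * b;
    count++;
  }
  return count === 0 ? 0 : sum / count / 255;
}
```

Sampling a fixed stride trades a little accuracy for a large reduction in per-frame work, which matters when this runs on every analyzed frame.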
45-78: LGTM! The lighting assessment logic is well-structured with clear thresholds and appropriate user feedback. The four-tier classification (too dark, too bright, acceptable, optimal) provides good granularity for guiding users.
src/definitions.ts (5)
376-424: LGTM! The `FaceDetectionOptions` interface is well-documented with sensible defaults, platform annotations, and clear descriptions. The configuration options cover the key aspects of face detection performance tuning.
452-607: LGTM! The face detection types are comprehensive and well-documented. Platform-specific differences (e.g., `pitchAngle` behavior, Android-only landmarks) are clearly annotated, which will help developers handle cross-platform differences appropriately.
609-684: Consider aligning `FaceAlignmentFeedback` with the example app's `AlignmentResult`. These types are defined in the plugin definitions, but the example app's `FaceAlignmentService` defines its own `AlignmentResult` interface with a slightly different structure. The types could be consolidated to avoid potential confusion. That said, having these types available in the plugin definitions is useful for consumers who want to implement their own face alignment validation.
1037-1064: LGTM! The `faceDetection` event listener is well-documented with a practical usage example. The API follows the established pattern for other listeners in the plugin.
1144-1203: LGTM! The face detection methods (`startFaceDetection`, `stopFaceDetection`, `isFaceDetectionRunning`) are well-documented with clear examples and follow the existing plugin API patterns. The JSDoc includes helpful platform annotations and error conditions.
example-app/README.md (2)
165-171: Verify the iOS version requirement. The documentation states "iOS 13.0 or higher" for face detection, but the PR description mentions "iOS 11+" for the Vision framework. Please verify the actual minimum iOS version required for the face detection features implemented.
1-29: LGTM! The README provides comprehensive documentation with clear prerequisites, installation steps, and feature descriptions. The structure is well-organized and will be helpful for developers setting up the example app.
example-app/src/app/services/face-alignment.service.ts (2)
115-133: Centering validation reports only one axis issue at a time. The `else if` chain means that if the face is off-center both horizontally and vertically, only the horizontal issue will be reported. This appears intentional (it matches the Android implementation) to give users one clear instruction at a time, which is reasonable UX. Just noting this design decision.
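A hypothetical TypeScript sketch of that one-instruction-at-a-time behavior (the function name, directions, and the 0.15 tolerance are assumptions for illustration, not the service's actual values):

```typescript
// Because of the else-if, a face that is off-center on BOTH axes only gets
// the horizontal instruction; the vertical one surfaces on a later pass
// once the horizontal offset is corrected.
function centeringFeedback(
  centerX: number, // normalized face center, 0..1
  centerY: number,
  tolerance = 0.15 // assumed threshold for illustration
): string | null {
  if (Math.abs(centerX - 0.5) > tolerance) {
    return centerX < 0.5 ? 'Move right' : 'Move left';
  } else if (Math.abs(centerY - 0.5) > tolerance) {
    return centerY < 0.5 ? 'Move down' : 'Move up';
  }
  return null; // centered on both axes
}
```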
55-145: LGTM! The `validateFace` method is well-structured with clear validation logic for each aspect of face alignment. The aggregation of individual validations into `isValid` and the feedback collection pattern align well with the Android counterpart (`FaceAlignmentValidator.java`), ensuring cross-platform consistency.
ios/Sources/CapgoCameraPreviewPlugin/FaceAlignmentValidator.swift (1)
1-189: Clean implementation of face alignment validation. The validator is well-structured with sensible defaults and clear feedback messages. The logic correctly validates roll, pitch, and yaw angles along with face size and centering. The `AlignmentResult` struct provides useful computed properties for accessing feedback.
example-app/src/app/components/face-filter-demo/face-filter-demo.component.ts (1)
155-169: Clarify the transform positioning approach. The `translate()` CSS function uses percentages relative to the element's own dimensions, not the container. For face overlay positioning, using the `left` and `top` CSS properties (as percentages of the container) would be more intuitive. However, this may work correctly depending on how the template applies these transforms.
example-app/src/app/components/face-filter-demo/face-filter-demo.component.scss (1)
1-142: Well-structured styling for the face filter overlay. The SCSS is clean with appropriate layering (`z-index`), smooth transitions for face tracking, and responsive adjustments. The use of `pointer-events: none` on overlays ensures camera interaction isn't blocked.
android/src/main/java/app/capgo/capacitor/camera/preview/FaceAlignmentValidator.java (1)
9-18: Cross-platform consistency confirmed. The default threshold values match the iOS implementation exactly, ensuring consistent behavior across platforms.
ios/Sources/CapgoCameraPreviewPlugin/FaceCoordinateTransformer.swift (2)
55-122: Coordinate transformation logic is well-implemented. The `transformBounds` method correctly handles aspect ratio differences and all three video gravity modes (resizeAspect, resizeAspectFill, resize). The orientation-based dimension swapping at lines 65-67 is a good detail.
142-186: Orientation transformations are correctly implemented. The rotation formulas for each orientation case are mathematically sound. The method properly preserves width/height dimensions while rotating the coordinate origin.
example-app/src/global.scss (1)
37-64: Face detection overlay styles look good.The use of
!importantis justified here for overriding Ionic's default opaque backgrounds. The::part()selectors correctly target Ionic's shadow DOM parts to ensure the camera feed shows through.example-app/src/app/app.routes.ts (1)
30-36: Route addition follows existing patterns.The lazy-loaded route is correctly placed before the wildcard redirect and follows the established naming convention.
example-app/src/app/pages/camera-view/camera-view.page.html (1)
592-694: Face Detection UI card is well-structured. The controls provide comprehensive options for configuring face detection (performance mode, landmarks, tracking, max faces). The conditional rendering with `@if` blocks appropriately shows dependent controls only when detection is enabled.
example-app/src/app/pages/face-detection-demo/face-detection-demo.page.html (1)
71-114: Stats sheet implementation looks comprehensive. The draggable stats sheet has a good structure with performance metrics and per-face details. The dual touch/pointer event binding provides cross-platform support. Ensure the component code handles potential duplicate events on touch devices that support both event types.
android/src/main/java/app/capgo/capacitor/camera/preview/CameraXView.java (2)
149-151: Face detection fields appropriately added. The fields are correctly scoped and placed with other camera-related state.
3540-3560: No action required. The `processImageProxy` method in `FaceDetectionManager` properly closes the `ImageProxy` in all execution paths, including early returns for detection disabled, app in background, thermal throttling, frame skipping, and throttling conditions, as well as after processing completes (both success and error paths at lines 219 and 229). Likely an incorrect or invalid review comment.
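The invariant being checked, that the frame handle must be released on every path, can be sketched in a few lines (TypeScript with hypothetical names; the real code is Java around `ImageProxy`):

```typescript
interface FrameHandle {
  closed: boolean;
  close(): void;
}

// Every exit path must close the frame: early returns for disabled or
// throttled states as well as normal and error completion. Otherwise the
// camera pipeline stalls waiting for the buffer to be returned.
function processFrame(frame: FrameHandle, enabled: boolean, throttled: boolean): void {
  if (!enabled || throttled) {
    frame.close(); // early-return paths still release the frame
    return;
  }
  try {
    // ... run detection on the frame ...
  } finally {
    frame.close(); // success and error paths both release it
  }
}
```

A `try`/`finally` (or Java's try-with-resources) is the usual way to make the "close on every path" property hold by construction rather than by auditing each return.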
android/build.gradle (1)
65-67: ML Kit Face Detection dependency is correctly configured. Version 16.1.7 is the latest stable release for the bundled model approach. The dependency placement alongside the CameraX dependencies is appropriate.
android/src/main/java/app/capgo/capacitor/camera/preview/CameraPreview.java (3)
71-76: LGTM! Proper null-check before pausing face detection when the app goes to background.
87-92: LGTM! Proper null-check before resuming face detection when the app comes to foreground.
1998-2061: LGTM! The `startFaceDetection` method properly validates camera state, parses options, initializes the manager lazily, and wires up the listener for results. Good use of null-safe option retrieval.
android/src/main/java/app/capgo/capacitor/camera/preview/FaceCoordinateTransformer.java (2)
58-130: LGTM! The coordinate transformation logic correctly handles rotation and aspect ratio differences. The step-by-step approach (normalize → rotate → scale/offset) is clear and maintainable.
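A simplified TypeScript sketch of that three-step flow, for the 90°-clockwise case only (the real implementation handles all rotations plus aspect-fill offsets; the names here are illustrative):

```typescript
// normalize → rotate → scale: map a point from sensor pixels to view pixels
// when the display is rotated 90° clockwise relative to the sensor.
function sensorToView90(
  x: number, y: number,       // point in sensor coordinates
  srcW: number, srcH: number, // sensor frame size
  dstW: number, dstH: number  // view size
): { x: number; y: number } {
  // 1. normalize into the unit square
  const nx = x / srcW;
  const ny = y / srcH;
  // 2. rotate 90° clockwise within the unit square
  const rx = 1 - ny;
  const ry = nx;
  // 3. scale (and, in the real code, offset) into view coordinates
  return { x: rx * dstW, y: ry * dstH };
}
```

Doing the rotation in normalized space keeps the rotation formulas independent of the two buffers' sizes, which is what makes the step-by-step structure easy to verify.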
138-144: Zero-size rect approach works but is slightly indirect. The `transformPoint` method creates a zero-width/height rect to reuse `transformBounds`. While functionally correct (the scaling preserves the origin), a more direct approach could be cleaner. However, this matches the iOS implementation pattern, so consistency is maintained.
ios/Sources/CapgoCameraPreviewPlugin/Plugin.swift (4)
80-85: LGTM! Face detection methods are properly registered in the plugin methods array.
546-548: LGTM! Proper lifecycle integration: face detection is paused/resumed with app state changes. The `appWillResignActive` handler ensures detection pauses when the app goes to background.
Also applies to: 556-558, 561-565
839-853: LGTM! The preview layering logic correctly handles the `toBack` flag, inserting the preview at index 0 with transparency when behind the web view, or adding it above with a high zPosition when in front.
2351-2385: LGTM! The `startFaceDetection` method properly validates camera state, parses all options with sensible defaults, creates the manager lazily, and sets up the delegate.
example-app/src/app/core/capacitor-camera-preview.service.ts (2)
17-21: LGTM! Proper imports for face detection types and the Capacitor listener handle.
371-433: LGTM! Well-documented face detection API with proper TypeScript types. The NgZone integration ensures Angular change detection is triggered correctly when face detection events arrive.
ios/Sources/CapgoCameraPreviewPlugin/CameraController.swift (2)
113-115: FaceDetectionManager wiring into the sample buffer pipeline looks solid. Using a nullable `faceDetectionManager` plus `isRunning()` checks keeps the hot path cheap when detection is disabled, and the first-frame callback path correctly avoids double-processing by returning early only when no capture is in progress. This integration reads clean and doesn't change existing sample capture behavior.
Also applies to: 2405-2424
591-620: Initial zoom guard is a good safety improvement. The new `guard` around selecting the active device in `setInitialZoom(level:)` prevents crashes when no camera is available and provides a clear log for that edge case. This is a safe and sensible change.
android/src/main/java/app/capgo/capacitor/camera/preview/LightingQualityDetector.java (1)
141-148: Exposure compensation formula may produce unexpected values. When `isTooDark` is true and `brightnessLevel` is very low (e.g., 0.0), the formula `(0.4 - 0.0) * 3.0 = 1.2` yields a moderate adjustment. However, when `isTooBright` is true with `brightnessLevel` near 1.0, the formula `(0.7 - 1.0) * 3.0 = -0.9` gives a smaller absolute adjustment than expected for extreme brightness. The iOS counterpart uses different thresholds (0.5 and 0.6). Consider aligning the formulas or documenting the rationale for the asymmetry.
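Reproducing the quoted arithmetic in TypeScript makes the asymmetry visible (the 0.4/0.7 thresholds and the ×3.0 gain are taken from the review's quotes; treat this as an illustration of the formula, not the plugin's exact code):

```typescript
// Suggested EV adjustment from normalized brightness (0..1).
// Dark frames are pulled toward 0.4 and bright frames toward 0.7, both with
// a gain of 3.0 — so the maximum correction is +1.2 for darkness but only
// -0.9 for brightness.
function suggestedExposureCompensation(brightnessLevel: number): number {
  const isTooDark = brightnessLevel < 0.4;
  const isTooBright = brightnessLevel > 0.7;
  if (isTooDark) return (0.4 - brightnessLevel) * 3.0;
  if (isTooBright) return (0.7 - brightnessLevel) * 3.0;
  return 0; // acceptable range needs no correction
}
```

The asymmetry comes purely from the distance between each threshold and its extreme: 0.4 − 0.0 = 0.4 on the dark side versus 0.7 − 1.0 = −0.3 on the bright side.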
ios/Sources/CapgoCameraPreviewPlugin/FaceDetectionManager.swift (1)
39-41: Comment doesn't match code: `frameSkipCount = 1` means processing every 2nd frame, not every frame. The comment on line 40 says "Process every 2nd frame for better real-time tracking", but the default initialization is `frameSkipCount = 1`. Given line 122's logic `currentFrameCount % (frameSkipCount + 1) == 0`, this means every 2nd frame is processed. If the intent is to process every frame for the best real-time tracking, set `frameSkipCount = 0`.
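The modulo logic quoted above can be sketched and sanity-checked in a few lines of TypeScript (illustrative, mirroring the quoted Swift condition):

```typescript
// Mirrors `currentFrameCount % (frameSkipCount + 1) == 0`:
// frameSkipCount = 0 → process every frame,
// frameSkipCount = 1 → process every 2nd frame, and so on.
function shouldProcessFrame(currentFrameCount: number, frameSkipCount: number): boolean {
  return currentFrameCount % (frameSkipCount + 1) === 0;
}

function countProcessed(totalFrames: number, frameSkipCount: number): number {
  let processed = 0;
  for (let frame = 0; frame < totalFrames; frame++) {
    if (shouldProcessFrame(frame, frameSkipCount)) processed++;
  }
  return processed;
}
```

So the code's default of 1 processes half the frames, which is what the review says the comment and the intended behavior should be reconciled against.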
android/src/main/java/app/capgo/capacitor/camera/preview/CameraXView.java
android/src/main/java/app/capgo/capacitor/camera/preview/CameraXView.java
android/src/main/java/app/capgo/capacitor/camera/preview/FaceCoordinateTransformer.java
android/src/main/java/app/capgo/capacitor/camera/preview/FaceDetectionManager.java
android/src/main/java/app/capgo/capacitor/camera/preview/LightingQualityDetector.java
ios/Sources/CapgoCameraPreviewPlugin/FaceDetectionManager.swift
🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude <[email protected]>
Note: Docstrings generation - SUCCESS
Docstrings generation was requested by @jokabuyasina.
* #268 (comment)

The following files were modified:
* `android/src/main/java/app/capgo/capacitor/camera/preview/CameraPreview.java`
* `android/src/main/java/app/capgo/capacitor/camera/preview/CameraXView.java`
* `android/src/main/java/app/capgo/capacitor/camera/preview/FaceAlignmentValidator.java`
* `android/src/main/java/app/capgo/capacitor/camera/preview/FaceCoordinateTransformer.java`
* `android/src/main/java/app/capgo/capacitor/camera/preview/FaceDetectionManager.java`
* `android/src/main/java/app/capgo/capacitor/camera/preview/LightingQualityDetector.java`
* `ios/Sources/CapgoCameraPreviewPlugin/CameraController.swift`
* `ios/Sources/CapgoCameraPreviewPlugin/FaceAlignmentValidator.swift`
* `ios/Sources/CapgoCameraPreviewPlugin/FaceCoordinateTransformer.swift`
* `ios/Sources/CapgoCameraPreviewPlugin/FaceDetectionManager.swift`
* `ios/Sources/CapgoCameraPreviewPlugin/LightingQualityDetector.swift`
* `ios/Sources/CapgoCameraPreviewPlugin/Plugin.swift`
…n tracking
- Added FaceDetectionManager for iOS and Android
- Added FaceAlignmentValidator for face positioning validation
- Added FaceCoordinateTransformer for coordinate transformations
- Added LightingQualityDetector for image quality assessment
- Added face-detection demo page and components in example app
- Added face alignment service
- Updated camera preview plugin with face detection methods
- Resolved conflicts with main branch updates
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
ios/Sources/CapgoCameraPreviewPlugin/CameraController.swift (1)
442-484: Resolution change affects all camera usage, not just face detection. The preset configuration now defaults to 720p for all camera sessions, regardless of whether face detection is active. This prioritizes face detection performance but may degrade quality for users who don't use face detection features.
Consider making the resolution choice conditional:
- Use 720p when `faceDetectionManager?.isRunning() == true`
- Otherwise, maintain the previous higher-quality defaults
🔎 Suggested conditional preset selection
```diff
-        // For face detection, prioritize 720p for optimal performance
-        // Higher resolutions (1080p, 4K) slow down Vision framework without improving accuracy
-        var targetPreset: AVCaptureSession.Preset = .hd1280x720
+        // Default to high quality; downgrade for face detection if needed
+        let useFaceDetectionPreset = faceDetectionManager?.isRunning() ?? false
+        var targetPreset: AVCaptureSession.Preset = useFaceDetectionPreset ? .hd1280x720 : .high
         if let aspectRatio = aspectRatio {
             switch aspectRatio {
             case "16:9":
-                // Use 720p for face detection - best balance of quality and performance
-                if captureSession.canSetSessionPreset(.hd1280x720) {
+                if useFaceDetectionPreset && captureSession.canSetSessionPreset(.hd1280x720) {
                     targetPreset = .hd1280x720
                 } else if captureSession.canSetSessionPreset(.hd1920x1080) {
                     targetPreset = .hd1920x1080
+                } else if captureSession.canSetSessionPreset(.hd1280x720) {
+                    targetPreset = .hd1280x720
                 } else if captureSession.canSetSessionPreset(.high) {
                     targetPreset = .high
                 }
```
♻️ Duplicate comments (1)
ios/Sources/CapgoCameraPreviewPlugin/CameraController.swift (1)
519-536: Aspect-ratio update still broken due to an incorrect superlayer cast. As flagged in previous reviews, the cast `previewLayer.superlayer as? UIView` will always fail because `superlayer` is a `CALayer`, not a `UIView`. Runtime aspect-ratio changes remain non-functional.
🔎 Fix: Use superlayer bounds directly
```diff
-        DispatchQueue.main.async { [weak self, aspectRatio] in
-            guard let self = self, let previewLayer = self.previewLayer else { return }
-
-            // Reconfigure preview layer frame based on aspect ratio
-            if let view = previewLayer.superlayer as? UIView {
-                if let aspectRatio = aspectRatio {
-                    let frame = self.calculateAspectRatioFrame(for: aspectRatio, in: view.bounds)
-                    previewLayer.frame = frame
-                    previewLayer.videoGravity = .resizeAspectFill
-                } else {
-                    previewLayer.frame = view.bounds
-                    previewLayer.videoGravity = .resizeAspect
-                }
-
-                // Keep grid overlay in sync
-                self.gridOverlayView?.frame = previewLayer.frame
-            }
-        }
+        DispatchQueue.main.async { [weak self, aspectRatio] in
+            guard let self = self,
+                  let previewLayer = self.previewLayer,
+                  let superlayer = previewLayer.superlayer else { return }
+
+            let bounds = superlayer.bounds
+            if let aspectRatio = aspectRatio {
+                let frame = self.calculateAspectRatioFrame(for: aspectRatio, in: bounds)
+                previewLayer.frame = frame
+                previewLayer.videoGravity = .resizeAspectFill
+            } else {
+                previewLayer.frame = bounds
+                previewLayer.videoGravity = .resizeAspect
+            }
+
+            // Keep grid overlay in sync
+            self.gridOverlayView?.frame = previewLayer.frame
+        }
```
🧹 Nitpick comments (3)
CapgoCameraPreview.podspec (1)
17-17: Remove the extra blank line at the block body end. RuboCop detected an extra empty line at the end of the block body. Consider removing line 17 to align with Ruby style conventions.
🔎 Proposed fix
```diff
   s.swift_version = '5.1'
-
 end
```
src/web.ts (1)
1260-1263: Use the `FaceDetectionOptions` type instead of `any`. The parameter is typed as `any`, which bypasses TypeScript's type safety. According to the AI summary, `FaceDetectionOptions` is defined in `src/definitions.ts`. Consider using the proper type for better type checking and IDE support.
🔎 Proposed fix
```diff
-  async startFaceDetection(_config?: any): Promise<void> {
+  async startFaceDetection(_config?: FaceDetectionOptions): Promise<void> {
     void _config;
     throw new Error('Face detection is not supported on the web platform');
   }
```
Ensure `FaceDetectionOptions` is imported at the top of the file from `./definitions`.
ios/Sources/CapgoCameraPreviewPlugin/Plugin.swift (1)
2418-2420: Consider propagating detection failures to JavaScript. The `faceDetectionDidFail` method currently only logs errors locally. Consider notifying the JavaScript layer so the application can respond to detection failures (e.g., show user feedback, retry logic, or adjust UI).
🔎 Proposed enhancement to propagate failures
```diff
 func faceDetectionDidFail(error: Error) {
     print("[FaceDetection] Detection failed: \(error.localizedDescription)")
+    notifyListeners("faceDetectionError", data: [
+        "message": error.localizedDescription
+    ])
 }
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (2)
- `bun.lock` is excluded by `!**/*.lock`
- `example-app/bun.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (7)
- `CapgoCameraPreview.podspec` (1 hunks)
- `android/build.gradle` (1 hunks)
- `example-app/package.json` (2 hunks)
- `ios/Sources/CapgoCameraPreviewPlugin/CameraController.swift` (5 hunks)
- `ios/Sources/CapgoCameraPreviewPlugin/Plugin.swift` (7 hunks)
- `package.json` (4 hunks)
- `src/web.ts` (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
- example-app/package.json
- android/build.gradle
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-08-29T15:37:50.891Z
Learnt from: Dalanir
Repo: Cap-go/capacitor-camera-preview PR: 217
File: ios/Sources/CapgoCameraPreviewPlugin/CameraController.swift:1242-1246
Timestamp: 2025-08-29T15:37:50.891Z
Learning: In the iOS CameraController.swift handleTap method, the team wants to maintain the current behavior that forces exposureMode to .autoExpose and resets exposureTargetBias to 0.0 when tapping to focus, even though it overrides the user's current exposure mode setting.
Applied to files:
- ios/Sources/CapgoCameraPreviewPlugin/Plugin.swift
- ios/Sources/CapgoCameraPreviewPlugin/CameraController.swift
🧬 Code graph analysis (2)
ios/Sources/CapgoCameraPreviewPlugin/Plugin.swift (2)
ios/Sources/CapgoCameraPreviewPlugin/FaceDetectionManager.swift (4)
`onAppForeground` (436-440), `onAppBackground` (429-432), `isRunning` (103-105), `start` (75-80)
src/definitions.ts (1)
FaceDetectionOptions(384-424)
ios/Sources/CapgoCameraPreviewPlugin/CameraController.swift (2)
ios/Sources/CapgoCameraPreviewPlugin/Plugin.swift (1)
`isRunning` (1535-1538)
ios/Sources/CapgoCameraPreviewPlugin/FaceDetectionManager.swift (2)
`isRunning` (103-105), `processSampleBuffer` (111-178)
🪛 RuboCop (1.81.7)
CapgoCameraPreview.podspec
[convention] 17-18: Extra empty line detected at block body end.
(Layout/EmptyLinesAroundBlockBody)
🔇 Additional comments (8)
ios/Sources/CapgoCameraPreviewPlugin/Plugin.swift (5)
37-37: LGTM: Version bump aligns with new face detection features. The version increment from 8.0.3 to 8.0.5 appropriately reflects the addition of face detection capabilities in this PR.
80-84: LGTM: Face detection APIs properly registered. The three new face detection methods are correctly declared with proper return types.
546-547: LGTM: Lifecycle coordination properly implemented. Face detection state is correctly synchronized with app lifecycle events:
- Resume on `appDidBecomeActive` and `appWillEnterForeground`
- Pause on `appWillResignActive`

Safe optional chaining prevents crashes if face detection is not active.
Also applies to: 556-559, 561-566
839-852: LGTM: Preview layering logic is clear and correct. The explicit handling of `toBack` mode ensures proper z-ordering:
- `toBack=true`: preview behind web content with `zPosition=-1000` and disabled user interaction
- `toBack=false`: preview above web content with `zPosition=1000`

This provides predictable layering behavior for different use cases.
879-879: LGTM: Observer registration is correct. The `appWillResignActive` notification observer is properly registered and paired with its handler.
ios/Sources/CapgoCameraPreviewPlugin/CameraController.swift (3)
113-114: LGTM: Face detection manager property is appropriately declared. The optional `faceDetectionManager` allows lazy initialization when face detection is requested.
592-620: LGTM: Initial zoom logic is correct. The refactored zoom initialization properly handles the iOS 18+ display multiplier and validates zoom bounds.
2413-2424: LGTM: Face detection frame processing is correctly integrated. Sample buffers are passed to the face detection manager when active, using safe optional chaining to prevent crashes when face detection is not running.
- Extract preview creation into a createConfiguredPreview() helper method
- Apply ResolutionSelector with 720p target and aspect ratio strategy
- Set target rotation and frame rate (20-25 FPS) consistently
- Use the helper in both rebindCameraWithAnalysis() and rebindCameraWithoutAnalysis()
- Prevents preview quality/resolution changes when enabling/disabling face detection
Actionable comments posted: 1
♻️ Duplicate comments (3)
android/src/main/java/app/capgo/capacitor/camera/preview/CameraXView.java (1)
765-770: Frame rate limit still applies unconditionally. As noted in previous review comments, the 20-25 FPS target is applied to all preview sessions regardless of whether face detection is enabled. This limitation will affect preview smoothness for users who don't need face detection. The same unconditional limit also appears in the new `createConfiguredPreview()` helper at line 3605. Consider making the frame rate limit conditional:
🔎 Proposed fix
```diff
 int rotation = previewView != null && previewView.getDisplay() != null
     ? previewView.getDisplay().getRotation()
     : android.view.Surface.ROTATION_0;
-// Configure preview with optimized frame rate for face detection (20-25 FPS)
-Preview preview = new Preview.Builder()
-    .setResolutionSelector(resolutionSelector)
-    .setTargetRotation(rotation)
-    .setTargetFrameRate(new Range<>(20, 25)) // 20-25 FPS for balanced performance
-    .build();
+Preview.Builder previewBuilder = new Preview.Builder()
+    .setResolutionSelector(resolutionSelector)
+    .setTargetRotation(rotation);
+
+// Only limit frame rate when face detection is active
+if (faceDetectionManager != null) {
+    previewBuilder.setTargetFrameRate(new Range<>(20, 25));
+}
+
+Preview preview = previewBuilder.build();
```
Apply the same pattern to `createConfiguredPreview()` at lines 3601-3606.
package.json (2)
46-46: Fix the prettier script to use the correct command. The `prettier-pretty-check` command does not exist (the package is not available on npm). This will cause the `lint` and `fmt` scripts (lines 43-44) to fail when they invoke this script.
🔎 Proposed fix
Change the script back to use `prettier`:
```diff
- "prettier": "prettier-pretty-check \"**/*.{css,html,ts,js,java}\" --plugin=prettier-plugin-java",
+ "prettier": "prettier \"**/*.{css,html,ts,js,java}\" --plugin=prettier-plugin-java",
```
The `--check` and `--write` flags passed from the `lint` and `fmt` scripts will work correctly with the standard `prettier` command.
71-71: Remove the non-existent package from devDependencies. This is the same issue flagged in previous reviews: the `prettier-pretty-check` package (version ^0.2.0) does not exist on npm and will cause `npm install` to fail.
🧹 Nitpick comments (4)
android/src/main/java/app/capgo/capacitor/camera/preview/CameraXView.java (4)
149-151: Consider thread-safety for the face detection fields. The `faceDetectionManager` field is accessed from both the main thread (in `enableFaceDetection`/`disableFaceDetection`) and the camera executor thread (in the analyzer lambda at line 3550). Without synchronization, this could lead to visibility issues or race conditions if face detection is toggled while frames are being processed. Consider using `volatile` or proper synchronization:
volatileor proper synchronization:🔎 Suggested fix
```diff
 // Face Detection
-private FaceDetectionManager faceDetectionManager;
+private volatile FaceDetectionManager faceDetectionManager;
 private androidx.camera.core.ImageAnalysis imageAnalysis;
```
3562-3569: Consider nulling `imageAnalysis` when disabling face detection.
The `disableFaceDetection()` method clears the analyzer but doesn't set `imageAnalysis` to null. This leaves the ImageAnalysis use case allocated even when face detection is disabled, which may hold unnecessary resources.
🔎 Suggested improvement

 public void disableFaceDetection() {
     this.faceDetectionManager = null;
     if (imageAnalysis != null) {
         imageAnalysis.clearAnalyzer();
+        imageAnalysis = null;
     }
     // Rebind camera without ImageAnalysis
     rebindCameraWithoutAnalysis();
 }
3609-3641: Add null check for `previewView` before accessing surface provider.
Both `rebindCameraWithAnalysis()` and `rebindCameraWithoutAnalysis()` call `previewView.getSurfaceProvider()` at lines 3620 and 3654 without explicitly checking if `previewView` is non-null. While the `isRunning` check likely implies `previewView` exists, adding an explicit null check would make the code more defensive and prevent potential NPEs if the lifecycle state is inconsistent.
🔎 Suggested fix

 private void rebindCameraWithAnalysis() {
-    if (cameraProvider == null || !isRunning) {
+    if (cameraProvider == null || !isRunning || previewView == null) {
         return;
     }
     try {
         // Unbind all
         cameraProvider.unbindAll();
         // Rebuild preview with proper configuration
         Preview preview = createConfiguredPreview();
         preview.setSurfaceProvider(previewView.getSurfaceProvider());
         // ...

Apply the same fix to `rebindCameraWithoutAnalysis()`.
Also applies to: 3643-3672
3614-3640: Consider propagating rebind errors to the listener.
Both rebind methods catch exceptions and only log them without notifying the `listener`. If rebinding fails (e.g., due to camera access issues), the app remains in an inconsistent state with face detection enabled/disabled but the camera not properly rebound. Consider calling `listener.onCameraStartError()` to inform the app layer.
🔎 Suggested improvement

 } catch (Exception e) {
     Log.e(TAG, "Failed to rebind camera with analysis", e);
+    if (listener != null) {
+        listener.onCameraStartError("Failed to rebind camera with face detection: " + e.getMessage());
+    }
 }
}

Apply similar logic to both rebind methods.
Also applies to: 3648-3671
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`bun.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (2)
- android/src/main/java/app/capgo/capacitor/camera/preview/CameraXView.java (4 hunks)
- package.json (4 hunks)
🔇 Additional comments (2)
android/src/main/java/app/capgo/capacitor/camera/preview/CameraXView.java (1)
3571-3607: Well done addressing the previous review feedback.
The new `createConfiguredPreview()` helper successfully addresses the major issue raised in previous reviews about losing preview configuration (ResolutionSelector, TargetRotation, TargetFrameRate) when rebinding the camera for face detection. This helper ensures consistent preview behavior across different binding states.
package.json (1)
13-13: Good addition for package discoverability.
Adding the homepage field improves the package metadata and helps users find documentation.
"prettier-plugin-organize-imports": "^4.3.0",
"prettier-plugin-packagejson": "^2.5.20",
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Check for prettier configuration files
echo "=== Looking for prettier configuration files ==="
find . -maxdepth 2 -type f \( -name ".prettierrc*" -o -name "prettier.config.*" \) 2>/dev/null
echo ""
echo "=== Checking package.json for prettier config and scripts ==="
if [ -f package.json ]; then
cat package.json | jq '.prettier, .scripts.prettier' 2>/dev/null || echo "No jq, trying grep:"
grep -A 20 '"prettier"' package.json 2>/dev/null | head -30
fi
echo ""
echo "=== Checking if @ionic/prettier-config is referenced ==="
grep -r "@ionic/prettier-config" . --include="*.json" --include="*.js" -n 2>/dev/null | head -20
echo ""
echo "=== Searching for plugin references in prettier config ==="
grep -r "plugins" . -n --include=".prettierrc*" --include="prettier.config.*" 2>/dev/null
echo ""
echo "=== Looking for organize-imports or packagejson plugin mentions ==="
grep -r "organize-imports\|packagejson" . -n --include=".prettierrc*" --include="prettier.config.*" --include="package.json" 2>/dev/nullRepository: Cap-go/capacitor-camera-preview
Length of output: 2292
Configure the newly added prettier plugins or remove them as unused dependencies.
The prettier-plugin-organize-imports and prettier-plugin-packagejson packages have been added to devDependencies but are not configured in the project's prettier setup. Prettier version 3 requires explicit plugin configuration, and these plugins are not declared in the prettier configuration (which uses @ionic/prettier-config), the CLI script (which only specifies --plugin=prettier-plugin-java), or any local .prettierrc/prettier.config.js file. Either add them to the prettier configuration or remove them as unused dependencies.
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
- Add detectClassifications option to enable smiling and eye open detection
- Enable CLASSIFICATION_MODE_ALL when detectClassifications is true
- Include smilingProbability, leftEyeOpenProbability, rightEyeOpenProbability in JSON output
- Clamp frameSkipCount to non-negative value to prevent division by zero
- Use Math.max(0, frameSkipCount) before computing divisor for modulo operation
Actionable comments posted: 1
♻️ Duplicate comments (3)
ios/Sources/CapgoCameraPreviewPlugin/FaceDetectionManager.swift (3)
166-167: Data race on frame dimensions (still unresolved from previous review).
`currentFrameWidth` and `currentFrameHeight` are written on `detectionQueue` here but read on `resultQueue` (lines 248-249) without synchronization. This can result in stale or torn values.
As suggested in the previous review, pass `frameWidth` and `frameHeight` directly through the completion handler closure to avoid shared mutable state.
🔎 Proposed fix
Update the detection request to capture frame dimensions:

 // Process asynchronously on background queue
 detectionQueue.async { [weak self] in
     guard let self = self, !self.isCancelled else {
         self?.decrementProcessingCount()
         return
     }
-    // Store frame dimensions for coordinate normalization
-    self.currentFrameWidth = frameWidth
-    self.currentFrameHeight = frameHeight
-    // Create face detection request
-    let request = self.createFaceDetectionRequest()
+    let request = self.createFaceDetectionRequest(frameWidth: frameWidth, frameHeight: frameHeight)

Then modify `createFaceDetectionRequest` to accept and capture these parameters:

private func createFaceDetectionRequest(frameWidth: CGFloat, frameHeight: CGFloat) -> VNDetectFaceLandmarksRequest {
    let request = VNDetectFaceLandmarksRequest { [weak self] request, error in
        // ... existing error handling ...
        self.handleDetectionResults(request.results as? [VNFaceObservation] ?? [], frameWidth: frameWidth, frameHeight: frameHeight)
    }
    // ... rest of configuration ...
}

Update `handleDetectionResults` signature to accept dimensions as parameters and remove references to `self.currentFrameWidth`/`Height`.
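The fix above is a general pattern, not Vision-specific: per-frame values passed as parameters (or captured by a closure) cannot be clobbered by the next frame, while shared mutable fields can. A minimal TypeScript sketch of the difference (all names are illustrative, not from the plugin):

```typescript
// Per-frame values as closure captures vs. shared mutable state.
let sharedWidth = 0;

// Risky pattern: the returned reader sees whatever is current at read time.
function viaSharedState(width: number): () => number {
  sharedWidth = width;
  return () => sharedWidth;
}

// Safe pattern: `width` is captured and stays fixed for this frame.
function viaCapture(width: number): () => number {
  return () => width;
}

const frameA = viaSharedState(1920);
viaSharedState(1280); // frame B arrives before frame A's result is read
const stale = frameA(); // frame A now observes frame B's width

const frameA2 = viaCapture(1920);
viaCapture(1280); // frame B cannot disturb frame A's captured value
const fresh = frameA2(); // still frame A's own width
```

The same reasoning applies to the Swift `detectionQueue`/`resultQueue` pair: values threaded through the completion handler need no locking at all.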
197-200: Delegate callback not dispatched to main thread.
The `faceDetectionDidFail` call on line 199 is invoked directly from the Vision completion handler (which runs on `detectionQueue`), while the same delegate method at lines 178-180 is properly dispatched to the main thread. This inconsistency can cause issues for delegates expecting main-thread callbacks.
🔎 Proposed fix

 if let error = error {
     print("[FaceDetection] Detection error: \(error)")
-    self.delegate?.faceDetectionDidFail(error: error)
+    if !self.isCancelled {
+        DispatchQueue.main.async {
+            self.delegate?.faceDetectionDidFail(error: error)
+        }
+    }
     return
 }
334-336: Mouth landmarks produce duplicate coordinates (still unresolved from previous review).
All three mouth landmarks (`mouthLeft`, `mouthRight`, `mouthBottom`) call `getPoint(from: faceLandmarks.outerLips)`, which returns the first point from the same region. This results in identical coordinates for all three landmarks, which is incorrect.
Consider extracting distinct points from the `outerLips.normalizedPoints` array:
🔎 Proposed fix

+// Helper to get specific point from landmark region by index
+func getPointAt(from region: VNFaceLandmarkRegion2D?, index: Int) -> Point? {
+    guard let region = region, region.pointCount > index else { return nil }
+    return normalizePoint(region.normalizedPoints[index])
+}
+
 return FaceLandmarks(
     leftEye: getPoint(from: faceLandmarks.leftEye),
     rightEye: getPoint(from: faceLandmarks.rightEye),
     noseBase: getPoint(from: faceLandmarks.nose),
-    mouthLeft: getPoint(from: faceLandmarks.outerLips), // Approximate
-    mouthRight: getPoint(from: faceLandmarks.outerLips), // Approximate
-    mouthBottom: getPoint(from: faceLandmarks.outerLips), // Approximate
+    mouthLeft: getPointAt(from: faceLandmarks.outerLips, index: 0),
+    mouthRight: getPointAt(from: faceLandmarks.outerLips, index: faceLandmarks.outerLips?.pointCount ?? 0 > 6 ? 6 : 0),
+    mouthBottom: getPointAt(from: faceLandmarks.outerLips, index: faceLandmarks.outerLips?.pointCount ?? 0 > 3 ? 3 : 0),
     leftEar: nil, // Not reliably detected by Vision
     rightEar: nil,
     leftCheek: nil, // Not available in Vision
     rightCheek: nil
 )

Alternatively, if distinct mouth points cannot be reliably extracted, set these to `nil` and document the limitation.
🧹 Nitpick comments (1)
ios/Sources/CapgoCameraPreviewPlugin/FaceDetectionManager.swift (1)
290-290: Redundant nil initialization.
Optional variables in Swift are automatically initialized to `nil`, making the explicit assignment redundant.
🔎 Proposed fix

-var landmarks: FaceLandmarks? = nil
+var landmarks: FaceLandmarks?

Based on learnings, static analysis tools can help identify such style issues.
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- ios/Sources/CapgoCameraPreviewPlugin/FaceDetectionManager.swift (1 hunks)
🧰 Additional context used
🪛 SwiftLint (0.57.0)
ios/Sources/CapgoCameraPreviewPlugin/FaceDetectionManager.swift
[Warning] 290-290: Initializing an optional variable with nil is redundant
(redundant_optional_initialization)
🔇 Additional comments (5)
ios/Sources/CapgoCameraPreviewPlugin/FaceDetectionManager.swift (5)
1-68: LGTM!
The protocol definition, property declarations, and initialization are well-structured. The use of dedicated queues for detection and result processing shows good concurrency design.
223-260: Result handling logic is well-structured.
The filtering by max faces and minimum face size, along with proper delegate dispatch to main thread (lines 254-258), demonstrates good design. The data race issue with frame dimensions is covered separately.
364-406: LGTM!
The helper methods are well-implemented. The pitch estimation fallback for iOS < 15 is a thoughtful addition, and `decrementProcessingCount` properly uses the lock for thread safety.
415-480: LGTM!
Motion detection, app lifecycle, and thermal management methods are well-implemented. The frame skipping and motion detection features provide good power management capabilities.
485-536: LGTM!
The data models are well-structured and properly use `Codable` for serialization. The structure clearly represents the face detection results with appropriate optional fields.
- Remove unconditional call to configureFrameRateForFaceDetection() in configureSessionPreset()
- Add state variables to save device's original activeFormat and frame durations
- Update configureFrameRateForFaceDetection() to save original settings before modifying
- Add restoreOriginalFrameRate() method to restore saved settings and clear state
- Call configureFrameRateForFaceDetection() when face detection starts
- Call restoreOriginalFrameRate() when face detection stops
- Prevents 25 FPS cap from affecting camera when face detection is not in use
Actionable comments posted: 2
🧹 Nitpick comments (1)
android/src/main/java/app/capgo/capacitor/camera/preview/FaceDetectionManager.java (1)
119-141: Consider shutting down the executor in `stopDetection()`.
The `ExecutorService` created in the constructor (line 74) is never explicitly shut down. If `FaceDetectionManager` instances are short-lived or recreated, this will leak threads. Consider adding `executorService.shutdown()` after waiting for in-flight tasks to complete.
🔎 Suggested enhancement

 if (faceDetector != null) {
     faceDetector.close();
     faceDetector = null;
 }
+
+// Shutdown executor to prevent thread leaks
+if (executorService != null && !executorService.isShutdown()) {
+    executorService.shutdown();
+}
 Log.d(TAG, "Face detection stopped (" + processingCount.get() + " tasks still processing)");
}

Note: If you shut down the executor, you'll need to recreate it in `startDetection()` to allow restarting detection after stopping.
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- android/src/main/java/app/capgo/capacitor/camera/preview/FaceDetectionManager.java (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
android/src/main/java/app/capgo/capacitor/camera/preview/FaceDetectionManager.java (1)
ios/Sources/CapgoCameraPreviewPlugin/FaceDetectionManager.swift (8)
hasSignificantMotion (415-432), getTrackingId (348-358), extractLandmarks (314-342), onAppBackground (437-440), onAppForeground (444-448), enableThermalThrottling (453-456), disableThermalThrottling (459-462), isThermalThrottlingActive (467-469)
🔇 Additional comments (2)
android/src/main/java/app/capgo/capacitor/camera/preview/FaceDetectionManager.java (2)
104-106: Previous review feedback addressed successfully.
The classification support and frame skip clamping issues from the previous review have been properly fixed:
- `detectClassifications` is now parsed from options (line 256) and used to configure the detector (lines 104-106)
- Classification probabilities are included in the JSON output when enabled (lines 346-359)
- `frameSkipCount` is clamped to prevent division by zero (line 178)
Also applies to: 178-179, 256-256, 346-359
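The clamping being praised here is easy to state in isolation: process one frame out of every `frameSkipCount + 1`, with the skip count forced non-negative first. A TypeScript sketch of the same gate (function and parameter names are mine, not the plugin's):

```typescript
// Frame-skip gate: process one frame out of every (frameSkipCount + 1).
// Clamping the skip count to >= 0 keeps the divisor >= 1, so the modulo
// can never divide by zero even if a caller passes a negative value.
function shouldProcessFrame(frameIndex: number, frameSkipCount: number): boolean {
  const divisor = Math.max(0, frameSkipCount) + 1;
  return frameIndex % divisor === 0;
}
```

With a skip count of 2, frames 0, 3, 6, ... are processed; a negative skip count degrades gracefully to "process every frame".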
212-213: Use of experimental `ImageProxy.getImage()` API acknowledged.
The code uses `@ExperimentalGetImage` to access the experimental `ImageProxy.getImage()` method. This is the standard pattern endorsed by ML Kit documentation for integrating with CameraX. While the API is marked experimental, best practice is to avoid directly closing the Image and instead call `ImageProxy.close()` on the proxy from which the Image was retrieved.
- Call faceDetectionManager?.stop() to properly stop detection
- Set faceDetectionManager to nil to release resources
- Prevents resource leaks when camera is stopped or cleaned up
- Replace resultQueue.sync with resultQueue.async when clearing tracking state
- Prevents deadlock if stop() is called from main thread while resultQueue has work dispatching to main
- Allows stop() to return immediately without blocking the calling thread
…etectionManager.java Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
- Convert showLandmarks signal to use getter/setter pattern for ngModel compatibility
- Rename internal signal to showLandmarksSignal to avoid confusion
- Add getter that reads from signal and setter that updates signal
- Update template to use showLandmarksSignal() in @if condition
- Fixes two-way binding [(ngModel)] which expects a plain property, not a signal
- Add smilingProbability?: number for smiling detection
- Add leftEyeOpenProbability?: number for left eye state
- Add rightEyeOpenProbability?: number for right eye state
- All properties are optional and only available when detectClassifications is enabled
- Resolves TypeScript errors in face-filter-demo component
- Values range from 0.0 (not present) to 1.0 (definitely present)
- Pass frame dimensions through closures to avoid data race
- Dispatch faceDetectionDidFail callback to main thread
- Extract distinct mouth landmark points (left, right, bottom) instead of duplicates
Actionable comments posted: 3
♻️ Duplicate comments (2)
ios/Sources/CapgoCameraPreviewPlugin/CameraController.swift (1)
489-539: Aspect-ratio preview update now correctly uses superlayer bounds
The revised `updateAspectRatio` block now bases the preview frame on `previewLayer.superlayer?.bounds` instead of attempting to cast `superlayer` to `UIView`, so runtime aspect-ratio changes will actually resize the preview and keep `gridOverlayView` in sync. This fixes the earlier no-op behavior and looks solid.
android/src/main/java/app/capgo/capacitor/camera/preview/FaceDetectionManager.java (1)
53-62: Avoid storing closed `ImageProxy` instances for motion detection
`hasSignificantMotion(...)` still keeps a reference to an `ImageProxy` in `lastProcessedImage` and later calls `lastProcessedImage.getImage()`:
- Each frame's `ImageProxy` is eventually closed in the success/failure listeners in `processImageProxy(...)`.
- On subsequent calls, `lastProcessedImage` will often refer to a proxy that has already been closed, so `lastProcessedImage.getImage()` can return `null` or fail, and the code assumes it's usable (only a null check, not a "closed" check).
- This is the same lifecycle problem previously flagged: storing `ImageProxy` across frames is unsafe once the camera pipeline is allowed to close/recycle buffers.
Given that this motion detector only needs a coarse signal, you can simplify and make it safe by tracking timestamps only (no retained `ImageProxy`):
Suggested timestamp-only motion detection

-private ImageProxy lastProcessedImage = null;
+// Track timestamp of last processed frame for lightweight motion detection
+private long lastProcessedTimestamp = 0;
@@
-private boolean hasSignificantMotion(ImageProxy currentImage) {
-    if (lastProcessedImage == null) {
-        lastProcessedImage = currentImage;
-        return true; // Always process first frame
-    }
-
-    // Simple motion detection: compare Y plane brightness
-    try {
-        @androidx.camera.core.ExperimentalGetImage
-        android.media.Image current = currentImage.getImage();
-        @androidx.camera.core.ExperimentalGetImage
-        android.media.Image last = lastProcessedImage.getImage();
-
-        if (current == null || last == null) {
-            return true;
-        }
-
-        // Sample center region brightness
-        int width = currentImage.getWidth();
-        int height = currentImage.getHeight();
-        int centerX = width / 2;
-        int centerY = height / 2;
-        int sampleSize = Math.min(width, height) / 10; // 10% of smaller dimension
-
-        // For simplicity, assume motion if images are different objects
-        // More sophisticated: compare pixel data in region
-        boolean hasMotion = (current.getTimestamp() != last.getTimestamp());
-
-        lastProcessedImage = currentImage;
-        return hasMotion;
-    } catch (Exception e) {
-        Log.w(TAG, "Motion detection failed, assuming motion", e);
-        return true; // Fail-safe: process frame
-    }
-}
+private boolean hasSignificantMotion(ImageProxy currentImage) {
+    @androidx.camera.core.ExperimentalGetImage
+    android.media.Image current = currentImage.getImage();
+    if (current == null) {
+        // If we can't read an image, err on the side of processing
+        return true;
+    }
+
+    long currentTimestamp = current.getTimestamp();
+    if (lastProcessedTimestamp == 0) {
+        lastProcessedTimestamp = currentTimestamp;
+        return true; // Always process first frame
+    }
+
+    boolean hasMotion = (currentTimestamp != lastProcessedTimestamp);
+    lastProcessedTimestamp = currentTimestamp;
+    return hasMotion;
+}

Also applies to: 420-461
🧹 Nitpick comments (3)
example-app/src/app/pages/camera-view/camera-view.page.ts (2)
677-684: Consider removing unsupported detectClassifications option.
Line 681 passes `detectClassifications: true`, but according to past reviews, classification fields (`smilingProbability`, `leftEyeOpenProbability`, `rightEyeOpenProbability`) are not populated by the native implementations. While this won't cause errors, it may mislead future maintainers into expecting these fields to be available.
Based on learnings, classification fields are not currently supported in the native Android and iOS implementations.
🔎 Proposed refactor

 await this.#cameraViewService.startFaceDetection({
     performanceMode: this.faceDetectionPerformance(),
     trackingEnabled: this.trackFaces(),
     detectLandmarks: this.detectLandmarks(),
-    detectClassifications: true,
     maxFaces: this.maxFaces(),
     minFaceSize: 0.15,
 });
695-707: Consider adding a defensive check before cleanup.
If `stopFaceDetection()` is called when face detection was never started, calling `removeFaceDetectionListener()` on line 698 might fail or log errors. While typical usage flows through `toggleFaceDetection()` which checks `faceDetectionRunning()`, direct API calls could hit this edge case.
🔎 Proposed refactor

 protected async stopFaceDetection(): Promise<void> {
   try {
+    if (!this.faceDetectionRunning()) {
+      return;
+    }
     await this.#cameraViewService.stopFaceDetection();
     await this.#cameraViewService.removeFaceDetectionListener();
     this.faceDetectionRunning.set(false);
     this.detectedFaces.set([]);
     const results = this.testResults() + '\n✓ Face detection stopped';
     this.testResults.set(results);
   } catch (error) {
     const results = this.testResults() + `\n✗ Stop face detection failed: ${String(error)}`;
     this.testResults.set(results);
   }
 }

ios/Sources/CapgoCameraPreviewPlugin/FaceDetectionManager.swift (1)
293-293: Remove redundant nil initialization.
The explicit `= nil` is unnecessary for optional variables in Swift, as they default to `nil`.
🔎 Proposed fix

-var landmarks: FaceLandmarks? = nil
+var landmarks: FaceLandmarks?
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (9)
- android/src/main/java/app/capgo/capacitor/camera/preview/FaceDetectionManager.java (1 hunks)
- example-app/src/app/components/face-filter-demo/face-filter-demo.component.html (1 hunks)
- example-app/src/app/components/face-filter-demo/face-filter-demo.component.ts (1 hunks)
- example-app/src/app/pages/camera-view/camera-view.page.ts (7 hunks)
- example-app/src/app/services/face-alignment.service.ts (1 hunks)
- ios/Sources/CapgoCameraPreviewPlugin/CameraController.swift (5 hunks)
- ios/Sources/CapgoCameraPreviewPlugin/FaceDetectionManager.swift (1 hunks)
- ios/Sources/CapgoCameraPreviewPlugin/Plugin.swift (7 hunks)
- src/definitions.ts (3 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
- example-app/src/app/components/face-filter-demo/face-filter-demo.component.html
- example-app/src/app/services/face-alignment.service.ts
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-08-29T15:37:50.891Z
Learnt from: Dalanir
Repo: Cap-go/capacitor-camera-preview PR: 217
File: ios/Sources/CapgoCameraPreviewPlugin/CameraController.swift:1242-1246
Timestamp: 2025-08-29T15:37:50.891Z
Learning: In the iOS CameraController.swift handleTap method, the team wants to maintain the current behavior that forces exposureMode to .autoExpose and resets exposureTargetBias to 0.0 when tapping to focus, even though it overrides the user's current exposure mode setting.
Applied to files:
ios/Sources/CapgoCameraPreviewPlugin/Plugin.swiftios/Sources/CapgoCameraPreviewPlugin/CameraController.swift
🧬 Code graph analysis (3)
ios/Sources/CapgoCameraPreviewPlugin/CameraController.swift (1)
ios/Sources/CapgoCameraPreviewPlugin/FaceDetectionManager.swift (3)
stop (80-104), isRunning (109-111), processSampleBuffer (117-180)
ios/Sources/CapgoCameraPreviewPlugin/FaceDetectionManager.swift (2)
ios/Sources/CapgoCameraPreviewPlugin/Plugin.swift (2)
faceDetectionDidUpdate (2411-2423), faceDetectionDidFail (2425-2427)
src/definitions.ts (6)
FaceDetectionOptions (384-424), FaceDetectionResult (430-450), FaceBounds (537-557), DetectedFace (456-530), Point (624-634), FaceLandmarks (564-618)
example-app/src/app/pages/camera-view/camera-view.page.ts (4)
src/definitions.ts (2)
DetectedFace (456-530), FaceDetectionResult (430-450)
ios/Sources/CapgoCameraPreviewPlugin/FaceDetectionManager.swift (1)
isRunning (109-111)
example-app/src/app/core/capacitor-camera-preview.service.ts (1)
isRunning (77-79)
src/web.ts (1)
isRunning (721-724)
🪛 SwiftLint (0.57.0)
ios/Sources/CapgoCameraPreviewPlugin/FaceDetectionManager.swift
[Warning] 293-293: Initializing an optional variable with nil is redundant
(redundant_optional_initialization)
🔇 Additional comments (25)
example-app/src/app/pages/camera-view/camera-view.page.ts (5)
42-43: LGTM: Face detection imports added correctly.
The imports for `FaceDetectionResult` and `DetectedFace` align with the new face detection methods, and `RouterLink` is properly registered in the component's imports array.
Also applies to: 48-48, 56-56
133-140: LGTM: Face detection state is well-structured.
The face detection configuration and state fields use appropriate defaults and Angular signals for reactive updates.
441-441: Good fix: Error logging now produces readable messages.
Changing from `${error}` to `${String(error)}` ensures Error objects are converted to meaningful strings instead of "[object Object]".
Also applies to: 455-455, 468-468, 481-481, 743-743
660-666: LGTM: Toggle logic is clean and correct.
The method appropriately checks the running state and calls the corresponding start/stop method.
709-753: LGTM: Test and update methods are well-implemented.
The `testFaceDetection()` method provides comprehensive diagnostics with clear user instructions, and displays face tracking data (position, size, angles) appropriately. The `updateFaceDetectionResults()` helper provides optional debug logging. Note that classification fields mentioned in past reviews are no longer accessed, addressing that concern.
ios/Sources/CapgoCameraPreviewPlugin/CameraController.swift (1)
2451-2470: Face-detection hook in captureOutput integrates cleanly with existing flow
The delegate now forwards frames to `faceDetectionManager.processSampleBuffer(_:)` only when a manager exists and `isRunning()` is true, and it preserves the existing first-frame callback/early-return behavior. This keeps face detection orthogonal to sample capture and should not regress the existing `captureSample` path.
ios/Sources/CapgoCameraPreviewPlugin/Plugin.swift (1)
35-85: Face-detection plugin bridge and lifecycle wiring look consistent
- The new plugin methods (`startFaceDetection`, `stopFaceDetection`, `isFaceDetectionRunning`, `getPluginVersion`) are correctly exposed in `pluginMethods` and map cleanly to the native logic in `CameraController`/`FaceDetectionManager`.
- App lifecycle handlers (`appDidBecomeActive`, `appWillEnterForeground`, `appWillResignActive`) forwarding to `faceDetectionManager.onAppForeground`/`onAppBackground` give iOS parity with the Android manager's background/thermal throttling semantics.
- `completeStartCamera`'s z-position handling for `toBack` vs foreground preview is clear and keeps the transparency helpers intact.
- The `FaceDetectionDelegate` extension encodes `FaceDetectionResult` and forwards it over the `faceDetection` event, matching the new TypeScript listener overload.
Aside from the frame-rate restoration caveats noted in `CameraController`, this bridge layer looks coherent and ready to use.
Also applies to: 541-566, 835-909, 2352-2405, 2410-2427
src/definitions.ts (1)
456-530: Face-detection typings and listener/method surface are well-structured
- The `DetectedFace`/`FaceBounds`/`FaceLandmarks` types with normalized coordinates and Euler angles align with the JSON produced on Android and what the iOS delegate forwards, which is ideal for AR overlays.
- Separate `FaceAlignmentFeedback` and `LightingQualityFeedback` types give room for higher-level UX helpers without bloating the core `FaceDetectionResult`.
- The new `addListener('faceDetection', ...)` overload and the trio `startFaceDetection`, `stopFaceDetection`, `isFaceDetectionRunning` on `CameraPreviewPlugin` match the native API surface and are clearly documented with usage examples.
Once `FaceDetectionOptions` is extended to cover `detectClassifications` (and any other intended public flags), the TS side looks complete.
Also applies to: 636-711, 1064-1091, 1172-1231
ios/Sources/CapgoCameraPreviewPlugin/FaceDetectionManager.swift (11)
1-64: LGTM! Well-structured class setup.
The protocol definition, properties, and initialization are well-organized. The use of separate queues for detection and result handling with proper QoS settings is appropriate for real-time face tracking.
68-111: LGTM! Lifecycle methods properly handle synchronization.
The `start()`, `stop()`, and `isRunning()` methods correctly manage detection state. The `stop()` method properly synchronizes access to `processingCount` with locks and uses async dispatch to `resultQueue` to avoid potential deadlocks.
117-180: LGTM! Frame processing pipeline is robust.
The `processSampleBuffer` method correctly handles frame skipping, thermal throttling, and app lifecycle states. Frame dimensions are captured locally to avoid race conditions, and the asynchronous processing with proper locking around `processingCount` ensures thread safety.
190-218: LGTM! Vision request configuration is optimized for real-time performance.
The `createFaceDetectionRequest` method properly configures the Vision request with appropriate revision settings and GPU acceleration. Frame dimensions are passed through the closure to `handleDetectionResults`, avoiding shared mutable state.
226-263: LGTM! Result handling is thread-safe and properly filtered.
The `handleDetectionResults` method correctly processes face observations on the `resultQueue`, applies filtering for face count and size limits, and dispatches delegate callbacks to the main thread. Using passed parameters for frame dimensions avoids race conditions.
269-309: LGTM! Face conversion handles coordinate systems correctly.
The `convertToDetectedFace` method properly converts Vision's bottom-left coordinate system to top-left, extracts rotation angles with appropriate iOS version checks, and includes a fallback for pitch estimation on older iOS versions.
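The bottom-left to top-left flip mentioned here is simple to state for normalized rectangles; a TypeScript sketch of the convention (types and names are illustrative, assuming both coordinate systems are normalized to 0..1):

```typescript
// Vision reports normalized rects with the origin at the bottom-left;
// screens put the origin at the top-left. x, width, and height are
// unchanged; only y flips within the 0..1 range.
interface NormalizedRect {
  x: number;
  y: number;
  width: number;
  height: number;
}

function bottomLeftToTopLeft(r: NormalizedRect): NormalizedRect {
  return { x: r.x, y: 1 - r.y - r.height, width: r.width, height: r.height };
}
```

Subtracting the height as well as the y offset is the part that is easy to get wrong: it is the rect's top edge, not its origin corner, that must land at the new y.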
317-351: LGTM! Landmark extraction is well-implemented.
The `extractLandmarks` method correctly transforms Vision's landmark coordinates to normalized screen coordinates. The use of specific indices for mouth corners (fixed from the previous review) provides distinct points, and the documentation of Vision framework limitations for ears and cheeks is helpful.
357-367: LGTM! Face tracking ID management is thread-safe.
The `getTrackingId` method uses Vision's UUID-based face tracking to maintain consistent IDs across frames. All accesses to `faceTrackingMap` occur on the serial `resultQueue`, ensuring thread safety.
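Mapping an opaque tracker UUID to a stable small integer ID is plain bookkeeping; a TypeScript sketch of that idea (class name and API are hypothetical, not the plugin's):

```typescript
// Map an opaque per-face UUID from the tracker to a stable, small,
// incrementing integer ID, so consumers see the same ID for the same
// face across frames.
class FaceIdRegistry {
  private ids = new Map<string, number>();
  private nextId = 0;

  idFor(uuid: string): number {
    const existing = this.ids.get(uuid);
    if (existing !== undefined) {
      return existing;
    }
    const id = this.nextId++;
    this.ids.set(uuid, id);
    return id;
  }
}
```

As in the Swift manager, the map itself is not thread-safe; it stays correct only because all lookups happen on one serial queue.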
373-441: LGTM! Helper methods are well-implemented.
The utility methods are straightforward and correct. The `estimatePitchFromLandmarks` provides a reasonable fallback for iOS < 15, `decrementProcessingCount` properly uses locking, and the motion detection is appropriate for the power management use case.
446-489: LGTM! Power and thermal management is straightforward.
The lifecycle hooks and power management methods provide appropriate controls for optimizing battery usage and handling thermal conditions. The logging statements are helpful for debugging.
494-545: LGTM! Data models align with cross-platform type definitions.
The Swift data models correctly mirror the TypeScript interfaces defined in `src/definitions.ts`. The `Codable` conformance enables seamless JSON serialization for the native-to-JS bridge.
example-app/src/app/components/face-filter-demo/face-filter-demo.component.ts (6)
1-42: LGTM! Component setup is clean and follows Angular standalone best practices.
The imports and component decorator are well-organized. The standalone component pattern with explicit imports is the modern Angular approach.
43-59: LGTM! Reactive state management with signals is well-implemented.
The use of Angular signals for reactive state and the getter/setter pattern to enable two-way binding with `ngModel` is a good approach. The loose typing of `faceDetectionListener` as `any` is acceptable for demo code.
61-94: LGTM! Lifecycle management is correct.
The component properly requests permissions, initializes the camera, and starts face detection in `ngOnInit`. The `ngOnDestroy` cleanup correctly stops detection and the camera. Error handling with `console.error` is appropriate for a demo component.
96-158: LGTM! Face detection control methods are well-implemented.
The `startFaceDetection` method correctly configures options and sets up the listener to update reactive signals. The `stopFaceDetection` method properly cleans up resources, and the toggle logic is straightforward. The reference to `smilingProbability` is valid as it's defined in the `DetectedFace` interface.
160-162: LGTM! Modal dismissal follows standard Ionic patterns.
The `close` method correctly uses the `ModalController` to dismiss the modal.
167-198: LGTM! UI helper methods correctly transform normalized coordinates.
The helper methods properly convert normalized coordinates to CSS percentage values for positioning and sizing face overlays. The inclusion of rotation support in `getFaceTransform` is appropriate for accurate overlay alignment.
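The kind of conversion those helpers perform can be sketched in a few lines — normalized (0..1) face bounds become CSS percentage strings for an absolutely positioned overlay element. Function name and return shape here are illustrative, not the component's exact code:

```typescript
// Turns normalized face bounds into inline-style values for an overlay div.
// rollAngle, when present, becomes a CSS rotation for overlay alignment.
function faceOverlayStyle(f: {
  x: number; y: number; width: number; height: number; rollAngle?: number;
}) {
  return {
    left: `${(f.x * 100).toFixed(2)}%`,
    top: `${(f.y * 100).toFixed(2)}%`,
    width: `${(f.width * 100).toFixed(2)}%`,
    height: `${(f.height * 100).toFixed(2)}%`,
    transform: f.rollAngle !== undefined ? `rotate(${f.rollAngle}deg)` : 'none',
  };
}
```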
There are many files deleted again that were not supposed to be.
Real-time Face Detection and Position Tracking
Overview
Adds comprehensive real-time face detection and position tracking capabilities to the Capacitor Camera Preview plugin, enabling Snapchat-style filters and AR effects.
Features
Core Functionality
✅ Real-time face detection with position tracking across frames
✅ Facial landmark detection (eyes, nose, mouth, ears, cheeks)
✅ Face orientation tracking (roll, yaw, pitch angles)
✅ Face classifications (smiling, eye open probability - Android only)
✅ Multi-face support (configurable max faces)
✅ Performance modes (fast/accurate)
Platform Support
iOS: Apple Vision Framework (built-in, iOS 11+)
Android: Google ML Kit Face Detection (free, part of Google Play Services)
Web: Not supported (throws error with helpful message)
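Since web is unsupported, the JavaScript side only wires up options, a listener, and start/stop calls. A hedged usage sketch — `startFaceDetection`, `stopFaceDetection`, and the `faceDetection` event follow the names used in the demo component, while the option fields are assumptions — with a tiny stub standing in for the native bridge so the flow is self-contained:

```typescript
// Stub in place of the real plugin; actual calls cross the Capacitor bridge.
// Option fields (maxFaces, performanceMode) are assumed names for this sketch.
type FaceDetectionResult = { faces: { trackingId: number }[] };

class CameraPreviewStub {
  private running = false;
  private listener: ((r: FaceDetectionResult) => void) | null = null;

  async startFaceDetection(_opts: { maxFaces: number; performanceMode: 'fast' | 'accurate' }) {
    this.running = true;
  }
  async stopFaceDetection() {
    this.running = false;
  }
  addListener(_event: 'faceDetection', cb: (r: FaceDetectionResult) => void) {
    this.listener = cb;
  }
  // Simulates a native-side frame result; delivered only while running.
  emitNativeResult(r: FaceDetectionResult) {
    if (this.running && this.listener) this.listener(r);
  }
}
```

The stub mirrors the lifecycle in the sequence diagram: results flow to the listener only between start and stop.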