@jokabuyasina jokabuyasina commented Dec 15, 2025

Real-time Face Detection and Position Tracking

Overview
Adds comprehensive real-time face detection and position tracking capabilities to the Capacitor Camera Preview plugin, enabling Snapchat-style filters and AR effects.

Features
Core Functionality
✅ Real-time face detection with position tracking across frames
✅ Facial landmark detection (eyes, nose, mouth, ears, cheeks)
✅ Face orientation tracking (roll, yaw, pitch angles)
✅ Face classifications (smiling and eye-open probabilities; Android only)
✅ Multi-face support (configurable max faces)
✅ Performance modes (fast/accurate)

Platform Support
iOS: Apple Vision Framework (built-in, iOS 11+)
Android: Google ML Kit Face Detection (free, part of Google Play Services)
Web: Not supported (methods throw an error with a helpful message)
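The options surface implied by the feature list above can be sketched in TypeScript. The field names follow the FaceDetectionOptions type described later in this review; the withDefaults helper and its default values are illustrative assumptions, not the plugin's actual behavior.

```typescript
// Shape of the options documented for startFaceDetection().
// Field names follow the PR's FaceDetectionOptions; the defaults and
// clamping below are illustrative assumptions, not the plugin's code.
interface FaceDetectionOptions {
  performanceMode?: 'fast' | 'accurate';
  trackingEnabled?: boolean;
  detectLandmarks?: boolean;
  maxFaces?: number;
  minFaceSize?: number; // fraction of frame width, 0..1
}

// Hypothetical helper: normalize caller-supplied options before handing
// them to the native layer.
function withDefaults(options: FaceDetectionOptions = {}): Required<FaceDetectionOptions> {
  return {
    performanceMode: options.performanceMode ?? 'fast',
    trackingEnabled: options.trackingEnabled ?? true,
    detectLandmarks: options.detectLandmarks ?? false,
    // Clamp maxFaces so the native detector is never asked for zero
    // or an unbounded number of faces.
    maxFaces: Math.min(Math.max(options.maxFaces ?? 1, 1), 10),
    minFaceSize: Math.min(Math.max(options.minFaceSize ?? 0.15, 0), 1),
  };
}

const opts = withDefaults({ performanceMode: 'accurate', maxFaces: 42 });
console.log(opts.performanceMode, opts.maxFaces); // 'accurate' 10
```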

Summary by CodeRabbit

Release Notes

  • New Features

    • Added real-time face detection with tracking and facial landmark recognition
    • Configurable detection options including performance mode selection and maximum face limits
    • Face detection demo app with interactive guidance overlay and live filter capabilities
    • Lighting quality assessment to optimize detection conditions
  • Documentation

    • Comprehensive face detection API documentation with usage examples
    • Enhanced example app demonstrating face detection features and controls


coderabbitai bot commented Dec 15, 2025

Walkthrough

This pull request introduces comprehensive real-time face detection capabilities to the capacitor-camera-preview plugin. It adds native implementations for both Android (using ML Kit) and iOS (using Vision framework), along with coordinate transformation, face alignment validation, and lighting quality detection utilities. An example app demonstrates the feature with interactive UI components for face overlays, guidance, and live demo pages.

Changes

  • Plugin API Definitions (src/definitions.ts, src/web.ts): TypeScript type definitions for face detection options, results, detected faces, landmarks, and bounds; web platform stubs returning "not supported" errors for face detection methods.
  • Android Face Detection Core (android/src/main/java/app/capgo/capacitor/camera/preview/FaceDetectionManager.java, FaceAlignmentValidator.java, FaceCoordinateTransformer.java, LightingQualityDetector.java): New utility and manager classes orchestrating ML Kit-based face detection, coordinate normalization, pose/alignment validation, and lighting analysis with frame throttling and thermal management.
  • Android Camera Integration (android/src/main/java/app/capgo/capacitor/camera/preview/CameraPreview.java, CameraXView.java): Plugin methods (startFaceDetection, stopFaceDetection, isFaceDetectionRunning) and CameraXView enhancements to bind ImageAnalysis and enable/disable detection with a 720p target and 20–25 FPS capping.
  • Android Build Config (android/build.gradle): Added ML Kit Face Detection dependency (v16.1.7) and Java 17 compiler target.
  • iOS Face Detection Core (ios/Sources/CapgoCameraPreviewPlugin/FaceDetectionManager.swift, FaceAlignmentValidator.swift, FaceCoordinateTransformer.swift, LightingQualityDetector.swift): Swift implementations mirroring Android: Vision-based detection, pose/alignment validation, coordinate transformation accounting for orientation and videoGravity, and lighting quality analysis.
  • iOS Camera Integration (ios/Sources/CapgoCameraPreviewPlugin/Plugin.swift, CameraController.swift): Plugin methods for face detection control, app lifecycle hooks, frame-rate optimization for detection, and a delegation pattern for result streaming.
  • iOS Tests (ios/Tests/CameraPreviewPluginTests/CameraPreviewPluginTests.swift): Updated test module import and class name; replaced the echo test with a plugin version verification test.
  • Example App: Face Detection Pages (example-app/src/app/pages/face-detection-demo/face-detection-demo.page.{html,scss,ts}): Full-featured face detection demo page with a draggable stats sheet, performance metrics, guidance overlay, landmark display, and FPS calculation.
  • Example App: Face Filter Component (example-app/src/app/components/face-filter-demo/face-filter-demo.component.{html,scss,ts}): Standalone face filter UI with sunglasses overlay, bounding boxes, landmarks, detection controls, and reactive face tracking.
  • Example App: Guidance Overlay (example-app/src/app/components/face-guidance-overlay/face-guidance-overlay.component.ts): Guidance UI component with alignment state, feedback messages, an animated oval guide, and a success indicator.
  • Example App: Camera View Updates (example-app/src/app/pages/camera-view/camera-view.page.{html,ts}): Added a face detection configuration section with performance mode, landmark and tracking toggles, a max-faces slider, and test/demo buttons.
  • Example App: Service Layer (example-app/src/app/core/capacitor-camera-preview.service.ts, example-app/src/app/services/face-alignment.service.ts): Wrappers for face detection API calls with listener management; face alignment validation service with configurable thresholds and feedback.
  • Example App: Media Viewer & Routes (example-app/src/app/components/media-viewer/media-viewer.component.ts, example-app/src/app/app.routes.ts): Added swipe gesture and lifecycle hooks to the media viewer; added a face-detection-demo route with lazy loading.
  • Example App: Global & Build Config (example-app/src/global.scss, example-app/package.json): CSS for face-detection-active styling; build script updated to sync iOS with Capacitor; TypeScript downgraded to 5.8.0.
  • Example App: iOS/Android Toolchain (example-app/ios/App/Podfile, example-app/ios/App/App.xcodeproj/project.pbxproj, example-app/ios/App/App.xcscheme, example-app/ios/App/IDEWorkspaceChecks.plist, example-app/android/capacitor.settings.gradle): Updated pod paths to the Bun cache, updated code-signing team IDs, scheme version bump, workspace config additions, and Gradle paths for Capacitor modules.
  • Documentation & Config Cleanup (README.md, example-app/README.md, CapgoCameraPreview.podspec, example-app/.browserslistrc, example-app/.editorconfig, example-app/.eslintrc.json): Added comprehensive face detection API docs and type reference; example app README with setup and troubleshooting; removed ESLint/editor config files.
  • Root Package Config (package.json): Version bumped to 8.0.5; added homepage field; prettier script and plugins updated.
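The frame throttling the walkthrough attributes to FaceDetectionManager can be sketched as a simple time-based gate that drops frames so detection never runs faster than a target rate. The FrameThrottler class name and the numbers here are illustrative, not the plugin's implementation.

```typescript
// Minimal sketch of time-based frame throttling: analyze a frame only
// when enough time has passed since the last analyzed frame, regardless
// of the camera's delivery rate. Illustrative only.
class FrameThrottler {
  private lastProcessedMs = -Infinity;

  constructor(private readonly targetFps: number) {}

  // Returns true if the frame arriving at `nowMs` should be analyzed.
  shouldProcess(nowMs: number): boolean {
    const minIntervalMs = 1000 / this.targetFps;
    if (nowMs - this.lastProcessedMs >= minIntervalMs) {
      this.lastProcessedMs = nowMs;
      return true;
    }
    return false; // skip this frame to stay within the FPS budget
  }
}

// A 30 FPS camera throttled to 10 FPS analyzes roughly every third frame.
const throttler = new FrameThrottler(10);
const analyzed = [0, 33, 66, 100, 133, 166, 200].filter((t) =>
  throttler.shouldProcess(t),
);
console.log(analyzed); // [0, 100, 200]
```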

Sequence Diagram(s)

sequenceDiagram
    participant JS as JavaScript/App
    participant Plugin as Camera Plugin
    participant Detector as FaceDetectionManager
    participant MLKit as ML Kit / Vision
    participant Listener as JS Event Listener

    JS->>Plugin: startFaceDetection(options)
    activate Plugin
    Plugin->>Detector: initialize with options
    Plugin->>Detector: start()
    deactivate Plugin
    
    loop Frame Processing
        Detector->>MLKit: processFrame(ImageProxy)
        activate MLKit
        MLKit-->>Detector: VNFaceObservation[]
        deactivate MLKit
        
        Detector->>Detector: transform bounds & landmarks
        Detector->>Detector: apply throttling/motion detection
        Detector->>Plugin: onFaceDetectionResult(JSONObject)
    end
    
    Plugin->>Listener: emit faceDetection event
    Listener->>JS: callback(FaceDetectionResult)
    
    JS->>Plugin: stopFaceDetection()
    activate Plugin
    Plugin->>Detector: stop()
    Plugin->>MLKit: release detector
    deactivate Plugin
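The JS side of the sequence above (subscribe, receive results, unsubscribe on stop) can be sketched with a stand-in emitter. The StubBridge class below is purely illustrative; the real plugin exposes this via Capacitor's addListener and a PluginListenerHandle.

```typescript
// Sketch of the listener lifecycle from the diagram: subscribe to
// 'faceDetection' events, consume results, remove the listener when
// detection stops. The tiny emitter stands in for the plugin bridge.
type FaceDetectionResult = { faces: { trackingId?: number }[] };
type Listener = (result: FaceDetectionResult) => void;

class StubBridge {
  private listeners = new Map<string, Set<Listener>>();

  addListener(event: string, cb: Listener): { remove: () => void } {
    const set = this.listeners.get(event) ?? new Set<Listener>();
    set.add(cb);
    this.listeners.set(event, set);
    return { remove: () => set.delete(cb) };
  }

  emit(event: string, result: FaceDetectionResult): void {
    this.listeners.get(event)?.forEach((cb) => cb(result));
  }
}

const bridge = new StubBridge();
const seen: number[] = [];
const handle = bridge.addListener('faceDetection', (r) => seen.push(r.faces.length));

bridge.emit('faceDetection', { faces: [{ trackingId: 1 }] }); // delivered
handle.remove();                                              // stopFaceDetection path
bridge.emit('faceDetection', { faces: [] });                  // ignored after removal
console.log(seen); // [1]
```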

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Areas requiring extra attention:

  • FaceDetectionManager implementations (Android & iOS): Dense frame processing, ML Kit/Vision integration, throttling logic, and coordinate normalization; verify correct lifecycle management and thread safety.
  • CameraXView.java enableFaceDetection/disableFaceDetection: Coordinate binding changes, ImageAnalysis lifecycle, and frame-routing to detector; ensure proper rebinding and resource cleanup.
  • CameraController.swift frame-rate & preset logic: Verify 720p targeting, FPS capping (20–25), and correct restoration of original settings post-detection.
  • FaceCoordinateTransformer (both platforms): Complex geometric transformations with rotation and aspect-ratio handling; verify correctness across all device orientations and videoGravity modes.
  • Example app face-detection-demo.page.ts: Draggable sheet implementation, FPS calculation, and requestAnimationFrame batching; ensure smooth performance and proper state cleanup.
  • Cross-platform API consistency: Verify FaceDetectionOptions, FaceDetectionResult, and listener semantics match between Android, iOS, and TypeScript definitions.
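The aspect-fill geometry flagged for FaceCoordinateTransformer can be illustrated with a minimal sketch: scale the camera frame to cover the preview, center it, then map a normalized (0..1) box into preview pixels. This deliberately omits rotation and front-camera mirroring, and the function name is hypothetical.

```typescript
// Minimal aspect-fill (centerCrop / resizeAspectFill) mapping from a
// normalized, frame-relative bounding box to preview pixel coordinates.
// Rotation and mirroring are omitted; names are illustrative.
interface Box { x: number; y: number; width: number; height: number }

function mapNormalizedBoxToPreview(
  box: Box,               // normalized to the frame, origin top-left
  frameW: number, frameH: number,
  previewW: number, previewH: number,
): Box {
  // Aspect-fill: pick the larger scale so the frame covers the preview.
  const scale = Math.max(previewW / frameW, previewH / frameH);
  const scaledW = frameW * scale;
  const scaledH = frameH * scale;
  // The scaled frame is centered; overflow is cropped equally on both sides.
  const offsetX = (previewW - scaledW) / 2;
  const offsetY = (previewH - scaledH) / 2;
  return {
    x: box.x * scaledW + offsetX,
    y: box.y * scaledH + offsetY,
    width: box.width * scaledW,
    height: box.height * scaledH,
  };
}

// A centered quarter-size face in a 1280x720 frame shown in a 360x640
// portrait preview: aspect-fill crops the frame left and right, but the
// mapped box stays centered in the preview.
const mapped = mapNormalizedBoxToPreview(
  { x: 0.375, y: 0.375, width: 0.25, height: 0.25 },
  1280, 720, 360, 640,
);
```

A quick sanity check on this kind of transform is that a box centered in the frame must land centered in the preview for every preview size, which is what makes orientation bugs easy to catch in tests.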

Possibly related PRs

Poem

🐰 A hoppy ode to face detection:

Whiskers twitching, frames align,
ML models do their fine design,
Landmarks spotted, angles true,
Sunglasses filters—what a view!
Detection hops across the screen, 🌟

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
  • Docstring Coverage ⚠️ Warning: docstring coverage is 65.31%, below the required threshold of 80.00%. Run @coderabbitai generate docstrings to improve docstring coverage.
✅ Passed checks (2 passed)
  • Description Check ✅ Passed: check skipped because CodeRabbit’s high-level summary is enabled.
  • Title Check ✅ Passed: the title 'Face detection and position tracking' clearly summarizes the main feature addition and is directly relevant to the substantial changeset introducing face detection capabilities.

📜 Recent review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 96770ab and 4b112d8.

📒 Files selected for processing (1)
  • example-app/src/app/pages/camera-view/keep (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: build_android


@jokabuyasina jokabuyasina requested a review from riderx December 15, 2025 19:08

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 17

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (3)
ios/Sources/CapgoCameraPreviewPlugin/Plugin.swift (1)

996-1014: Call faceDetectionManager.stop() when camera stops to release Vision framework resources.

The CameraController.cleanup() method should call faceDetectionManager?.stop() before cleaning up camera resources. The FaceDetectionManager has proper cleanup logic (waiting for in-flight Vision requests, resetting tracking state), but it's never invoked during the camera stop flow, leaving Vision framework resources and background dispatch queues potentially active.

ios/Sources/CapgoCameraPreviewPlugin/CameraController.swift (1)

439-485: Frame‑rate tuning is only partially effective and currently applies even without face detection

Two things to consider in this block:

  • configureSessionPreset(for:) is called before configureDeviceInputs(...) in prepare(...), so on first run currentCameraPosition and the front/rear devices are still nil. configureFrameRateForFaceDetection() will just early‑return, meaning the 20–25 FPS optimization only kicks in on later calls (e.g. updateAspectRatio). If you want consistent behavior, consider invoking configureFrameRateForFaceDetection() again after inputs are configured (or move the call behind a point where currentCameraPosition and the active device are known).
  • configureSessionPreset(for:) unconditionally calls configureFrameRateForFaceDetection(). That means even apps that never use face detection will get their active format and FPS clamped around 20–25 FPS, which may be surprising for high‑FPS video or non‑Vision use cases. Gating this behind a flag or faceDetectionManager?.isRunning() == true (or wiring it from the plugin’s startFaceDetection path) would avoid changing baseline behavior for non‑face‑detection consumers.

Additionally, the new log lines use escaped interpolation:

print("[CameraController] Configured frame rate: \\(targetFPS) FPS")
print("[CameraController] Failed to configure frame rate: \\(error)")

Those will literally print \(targetFPS) / \(error) instead of the values. Dropping one backslash fixes it:

- print("[CameraController] Configured frame rate: \\(targetFPS) FPS")
+ print("[CameraController] Configured frame rate: \(targetFPS) FPS")

- print("[CameraController] Failed to configure frame rate: \\(error)")
+ print("[CameraController] Failed to configure frame rate: \(error)")

Also applies to: 539-589

README.md (1)

353-367: Face-detection docs mostly align, but some field optionality and options are out of sync with code

The new README sections do a good job exposing the face‑detection API surface (listener, start/stop, running check, and the core types), but there are a couple of mismatches with the actual typings/implementation:

  • In DetectedFace, the README tables show trackingId, rollAngle, yawAngle, pitchAngle, and landmarks as required. In src/definitions.ts, these fields are defined as optional (?) and can legitimately be absent (e.g., tracking disabled, platform doesn’t provide a given angle, landmarks off). The same applies to the individual fields in FaceLandmarks. Marking them as optional in the doc tables would better reflect real behavior.
  • FaceDetectionOptions in the README documents performanceMode, trackingEnabled, detectLandmarks, maxFaces, and minFaceSize, but the TS/examples also pass a detectClassifications flag. If classifications are meant to be public API, it should be included here (and wired through the native implementations); otherwise, consider omitting it from examples to avoid confusion.

Once you update the JSDoc on the TS definitions, re‑running docgen should bring this section into full alignment with the actual API.

Also applies to: 919-937, 1092-1135, 1302-1393

🧹 Nitpick comments (37)
CapgoCameraPreview.podspec (1)

16-18: Remove extra blank line before block end.

RuboCop flags an extra empty line before the block closing. This is a minor style issue that can be resolved by removing the blank line.

Apply this diff to align with Ruby style conventions:

   s.swift_version = '5.1'
-
 end
example-app/ios/App/App.xcworkspace/xcshareddata/IDEWorkspaceChecks.plist (1)

1-8: Add this file to .gitignore instead of committing it.

IDEWorkspaceChecks.plist is a machine-generated Xcode workspace configuration file that should not be committed to version control. Different developers or Xcode versions may generate different contents, leading to unnecessary diffs and merge conflicts.

Move this file to .gitignore to maintain a clean repository and avoid IDE-specific configuration churn.

example-app/ios/App/App.xcodeproj/xcshareddata/xcschemes/App.xcscheme (1)

4-4: LGTM! Auto-generated Xcode version bump.

This version attribute change is automatically managed by Xcode and is harmless. It's incidental to the face detection features being added and could optionally be omitted from the PR to keep it focused on the core functionality.

example-app/src/app/components/media-viewer/media-viewer.component.ts (3)

1-6: Imports and DI usage are solid; consider tightening ElementRef typing

Using inject(...) for ModalController, GestureController, and ElementRef in a standalone component is idiomatic and keeps the constructor clean. For a bit more type safety you could declare protected readonly el = inject<ElementRef<HTMLElement>>(ElementRef); so the host element type is explicit, but this is purely optional.

Also applies to: 24-28


43-77: Swipe-down gesture works; consider preserving any pre-existing transforms

The swipe behavior and opacity feedback are clear and the thresholded close vs. snap-back logic looks good. One subtle improvement: if ion-content ever has a non-empty transform from CSS/animations, resetting it with content.style.transform = '' on snap-back will drop that. You could capture the initial transform in onStart and restore it in the snap-back branch to be fully non-destructive.


79-89: Backdrop click handling works; consider moving to a template/HostListener pattern

Using addEventListener on .media-container and checking event.target === container correctly limits closing to backdrop clicks. Since Angular will destroy the element, this won’t practically leak, but for consistency with the rest of the app you might later move this to a (click) binding or @HostListener('click') so Angular manages subscription/teardown for you.

example-app/ios/App/Podfile (2)

1-1: Bun-specific pods_helpers path is fine but tightly couples to Capacitor/Bun versions

The require_relative into .bun/@[email protected]+… matches Bun’s store layout and should correctly expose assertDeploymentTarget, but it hard-codes both the Capacitor version and Bun’s hash into the Podfile. That’s OK if this file is regenerated as part of your Bun/Capacitor upgrade flow, but otherwise any version bump will require a manual edit here.

If that’s intentional, consider adding a short comment explaining that this path is Bun-generated and must be updated when upgrading Capacitor, so future maintainers don’t “fix” it to a different layout by mistake.


12-19: Pod paths look consistent with Bun layout; consider DRY-ing Capacitor paths and verify CapgoCameraPreview intent

The pod entries all point into Bun’s .bun store and look consistent with a Bun-managed Capacitor 7 setup, including the additional CapacitorApp/Haptics/Keyboard/StatusBar and CapgoHomeIndicator pods. Two small maintainability points:

  • You repeat the long Capacitor iOS path twice for Capacitor and CapacitorCordova. You can slightly simplify future upgrades by binding it to a local variable:
 def capacitor_pods
-  pod 'Capacitor', :path => '../../node_modules/.bun/@[email protected]+1d43a6a1837411c8/node_modules/@capacitor/ios'
-  pod 'CapacitorCordova', :path => '../../node_modules/.bun/@[email protected]+1d43a6a1837411c8/node_modules/@capacitor/ios'
+  capacitor_ios_path = '../../node_modules/.bun/@[email protected]+1d43a6a1837411c8/node_modules/@capacitor/ios'
+  pod 'Capacitor', :path => capacitor_ios_path
+  pod 'CapacitorCordova', :path => capacitor_ios_path
   pod 'CapacitorApp', :path => '../../node_modules/.bun/@[email protected]+1d43a6a1837411c8/node_modules/@capacitor/app'
   pod 'CapacitorHaptics', :path => '../../node_modules/.bun/@[email protected]+1d43a6a1837411c8/node_modules/@capacitor/haptics'
   pod 'CapacitorKeyboard', :path => '../../node_modules/.bun/@[email protected]+1d43a6a1837411c8/node_modules/@capacitor/keyboard'
   pod 'CapacitorStatusBar', :path => '../../node_modules/.bun/@[email protected]+1d43a6a1837411c8/node_modules/@capacitor/status-bar'
   pod 'CapgoCameraPreview', :path => '../../node_modules/.bun/@capgo+camera-preview@file+../node_modules/@capgo/camera-preview'
   pod 'CapgoHomeIndicator', :path => '../../node_modules/.bun/@[email protected]+1d43a6a1837411c8/node_modules/@capgo/home-indicator'
 end
  • For CapgoCameraPreview, the Bun file+../node_modules/@capgo/camera-preview path is a bit opaque to someone not familiar with Bun’s store layout. It would be good to double-check that pod install picks up the intended podspec from this location and, if so, maybe add a short comment like “# Bun store path for local @capgo/camera-preview dependency” for clarity.

If your goal is for the example app to always use the version from bun.lockb, this setup is reasonable; if instead you want it to track the plugin source in this monorepo, you might consider switching :path to a direct relative path to the plugin project instead.

src/web.ts (1)

1260-1263: Use proper FaceDetectionOptions type instead of any for face detection config.

The FaceDetectionOptions type is defined in definitions.ts and used in the interface at line 1179, but the web.ts implementation uses any. Update the parameter type for consistency with the rest of the codebase and to provide better IDE support:

-  async startFaceDetection(_config?: any): Promise<void> {
+  async startFaceDetection(_config?: FaceDetectionOptions): Promise<void> {

Also note: The JSDoc parameter is documented as @param _options but the actual parameter name is _config—align these.

example-app/src/app/components/face-guidance-overlay/face-guidance-overlay.component.ts (2)

1-3: CommonModule import may be unnecessary.

With Angular 17+ control flow syntax (@if, @else) being used in the template, CommonModule is not required unless you're using other directives like NgClass or NgStyle. The [class.aligned] binding is native template syntax and doesn't require CommonModule.

-import { Component, Input, signal } from '@angular/core';
-import { CommonModule } from '@angular/common';
+import { Component, Input, signal } from '@angular/core';
 @Component({
   selector: 'app-face-guidance-overlay',
   standalone: true,
-  imports: [CommonModule],
+  imports: [],

114-155: Unused CSS: countdown timer styles are not rendered.

The .countdown-timer, .countdown-number, .countdown-text classes and @keyframes countdown-pulse animation are defined but not used in the template. If this is planned for future implementation, consider adding a TODO comment or removing the dead code.

ios/Sources/CapgoCameraPreviewPlugin/LightingQualityDetector.swift (1)

182-193: Hardcoded thresholds in recommendedExposureCompensation don't use configurable values.

The exposure compensation calculation uses hardcoded values 0.4 and 0.7, while the class allows configurable minBrightness (default 0.25) and maxBrightness (default 0.85) thresholds. This inconsistency could lead to unexpected behavior when custom thresholds are provided.

The Android counterpart uses the same hardcoded values, so this may be intentional for cross-platform consistency.

Consider using the configured thresholds or dedicated compensation thresholds:

         public var recommendedExposureCompensation: Float {
             if isTooDark {
                 // Increase exposure for dark scenes
-                return Float(min(2.0, (0.4 - brightnessLevel) * 3.0))
+                return Float(min(2.0, (optimalMin - brightnessLevel) * 3.0))
             } else if isTooBright {
                 // Decrease exposure for bright scenes
-                return Float(max(-2.0, (0.7 - brightnessLevel) * 3.0))
+                return Float(max(-2.0, (optimalMax - brightnessLevel) * 3.0))
             }
             return 0.0 // No adjustment needed
         }

Note: This would require passing the thresholds to LightingResult or making it a method on the detector class.
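The suggested change can be illustrated with a standalone sketch. The 3.0 gain and the ±2.0 EV clamp come from the snippet above; the function name, the TypeScript port, and the default thresholds are illustrative.

```typescript
// Sketch of the suggested fix: derive exposure compensation from
// configurable brightness thresholds instead of hardcoded 0.4/0.7.
// Gain (3.0) and clamp (±2 EV) follow the Swift snippet above;
// the function name and defaults here are illustrative.
function recommendedExposureCompensation(
  brightness: number,   // 0..1 mean luminance of the frame
  optimalMin = 0.4,     // below this the scene counts as too dark
  optimalMax = 0.7,     // above this the scene counts as too bright
): number {
  if (brightness < optimalMin) {
    // Dark scene: push exposure up, capped at +2 EV.
    return Math.min(2.0, (optimalMin - brightness) * 3.0);
  }
  if (brightness > optimalMax) {
    // Bright scene: pull exposure down, capped at -2 EV.
    return Math.max(-2.0, (optimalMax - brightness) * 3.0);
  }
  return 0.0; // within the optimal band: no adjustment
}
```

With this shape, passing the detector's configured thresholds through (as the comment suggests) changes only the two default arguments, and the cross-platform behavior stays in lockstep as long as both platforms share the same gain and clamp.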

example-app/README.md (1)

126-144: Add language specifier to fenced code block.

The project structure code block is missing a language specifier. While directory trees don't have a standard language, you can use text or plaintext to satisfy linters.

-```
+```text
 example-app/
 ├── src/
example-app/src/app/services/face-alignment.service.ts (1)

43-43: Remove empty constructor.

The empty constructor is unnecessary and can be removed. Angular's DI will work without it.

-  constructor() {}
-
   /**
    * Update validation configuration
    */
example-app/src/app/components/face-filter-demo/face-filter-demo.component.ts (2)

48-48: Avoid any type for the listener reference.

Using any loses type safety. The listener returned by addListener should be typed as PluginListenerHandle.

-  private faceDetectionListener: any;
+  private faceDetectionListener: import('@capacitor/core').PluginListenerHandle | null = null;

143-149: Await async methods to prevent race conditions.

toggleFaceDetection calls async methods without awaiting, which could cause race conditions if the user toggles rapidly.

-  toggleFaceDetection() {
+  async toggleFaceDetection() {
     if (this.isFaceDetectionActive()) {
-      this.stopFaceDetection();
+      await this.stopFaceDetection();
     } else {
-      this.startFaceDetection();
+      await this.startFaceDetection();
     }
   }
android/src/main/java/app/capgo/capacitor/camera/preview/FaceAlignmentValidator.java (2)

160-202: Consider encapsulating AlignmentResult fields.

Using public fields directly is unconventional in Java. While acceptable for simple DTOs, consider using private fields with getters for better encapsulation and consistency with Java conventions.

Alternatively, this pattern is acceptable for internal utility classes where simplicity is preferred over strict encapsulation.


193-200: Consider importing ArrayList at the top of the file.

Using the fully qualified name java.util.ArrayList inline works but importing it at the top would be cleaner.

 package app.capgo.capacitor.camera.preview;
+
+import java.util.ArrayList;

Then use ArrayList<String> directly on line 194.

ios/Sources/CapgoCameraPreviewPlugin/FaceCoordinateTransformer.swift (1)

40-45: Remove or guard the debug print statement.

The print() statement in the initializer will log to the console in production builds. Consider removing it or using a conditional check / proper logging framework.

-        print("""
-        [FaceCoordTransformer] Created: frame=\(Int(frameWidth))x\(Int(frameHeight)), \
-        preview=\(Int(previewWidth))x\(Int(previewHeight)), \
-        gravity=\(videoGravity.rawValue), orientation=\(orientation.rawValue)
-        """)
+        #if DEBUG
+        print("""
+        [FaceCoordTransformer] Created: frame=\(Int(frameWidth))x\(Int(frameHeight)), \
+        preview=\(Int(previewWidth))x\(Int(previewHeight)), \
+        gravity=\(videoGravity.rawValue), orientation=\(orientation.rawValue)
+        """)
+        #endif
ios/Tests/CameraPreviewPluginTests/CameraPreviewPluginTests.swift (1)

4-9: Test coverage is minimal for the new face detection feature.

This test only verifies the plugin instantiates. Consider adding tests for the new face detection functionality (start/stop detection, result parsing, error handling).

Would you like me to generate unit test stubs for the face detection features?

example-app/src/app/pages/face-detection-demo/face-detection-demo.page.html (1)

54-67: Consider using more semantically appropriate icons for the control chips.

The current icons (flash, flash-off, camera) don't clearly represent their functions (Guidance, Debug, Landmarks). Consider:

  • Guidance: compass-outline or navigate-outline
  • Debug: bug-outline or code-outline
  • Landmarks: location-outline or pin-outline
-    <ion-chip (click)="toggleGuidanceOverlay()" [color]="showGuidanceOverlay() ? 'primary' : 'medium'">
-      <ion-icon name="flash"></ion-icon>
+    <ion-chip (click)="toggleGuidanceOverlay()" [color]="showGuidanceOverlay() ? 'primary' : 'medium'">
+      <ion-icon name="compass-outline"></ion-icon>
       Guidance
     </ion-chip>
-    <ion-chip (click)="toggleDebugOverlay()" [color]="showDebugOverlay() ? 'primary' : 'medium'">
-      <ion-icon name="flash-off"></ion-icon>
+    <ion-chip (click)="toggleDebugOverlay()" [color]="showDebugOverlay() ? 'primary' : 'medium'">
+      <ion-icon name="bug-outline"></ion-icon>
       Debug
     </ion-chip>
-    <ion-chip (click)="toggleLandmarks()" [color]="showLandmarks() ? 'primary' : 'medium'">
-      <ion-icon name="camera"></ion-icon>
+    <ion-chip (click)="toggleLandmarks()" [color]="showLandmarks() ? 'primary' : 'medium'">
+      <ion-icon name="location-outline"></ion-icon>
       Landmarks
     </ion-chip>
android/src/main/java/app/capgo/capacitor/camera/preview/CameraXView.java (1)

3549-3555: Minor thread safety consideration.

The analyzer callback runs on cameraExecutor while faceDetectionManager can be set to null from the main thread in disableFaceDetection(). The null check protects against NPE, but consider making faceDetectionManager volatile for visibility guarantees across threads.

-    private FaceDetectionManager faceDetectionManager;
+    private volatile FaceDetectionManager faceDetectionManager;
example-app/src/app/pages/face-detection-demo/face-detection-demo.page.scss (4)

53-69: Consider will-change: transform instead of will-change: contents.

The will-change: contents value has limited browser support and may not provide the intended optimization. For GPU-accelerated overlays, will-change: transform or will-change: opacity are more reliable.

 .face-overlay {
   position: fixed;
   top: 0;
   left: 0;
   width: 100%;
   height: 100%;
   pointer-events: none;
   z-index: 9000;
-  // Performance: Enable GPU acceleration and prevent layout shifts
-  will-change: contents;
+  // Performance: Enable GPU acceleration
+  will-change: transform;
   transform: translateZ(0);

178-198: Extremely high z-index may conflict with other UI layers.

The z-index: 100000 on .stats-sheet is very high and could unexpectedly overlap system UI or modal dialogs. Consider using a more conservative value or a CSS custom property for coordinated z-index management.

 .stats-sheet {
   position: fixed;
   left: 50%;
   transform: translateX(-50%);
   top: 0%; /* Default starting position, overridden by inline style */
   width: min(560px, calc(100% - 24px));
   max-height: calc(95vh - var(--ion-safe-area-top, 0px));
-  z-index: 100000; /* Above all UI including header */
+  z-index: 10001; /* Above header (10000) */

192-192: Comment contradicts the actual transition value.

The comment says "Smooth dragging without transition delay" but transition: top 0ms means no transition at all. If smooth dragging is intended, a small duration like 16ms or 32ms would be appropriate. If instant updates are intended, the comment should reflect that.

-  transition: top 0ms; /* Smooth dragging without transition delay */
+  transition: top 0ms; /* Instant position updates for responsive dragging */

243-245: Remove empty lines between rule blocks.

There are redundant empty lines that could be cleaned up for consistency.

android/src/main/java/app/capgo/capacitor/camera/preview/CameraPreview.java (2)

2046-2051: Face detection errors are silently logged without notifying JS.

The onFaceDetectionError callback only logs the error. Consider emitting an error event to JS so the application can handle detection failures gracefully.

                     @Override
                     public void onFaceDetectionError(String error) {
                         Log.e(TAG, "Face detection error: " + error);
+                        JSObject errorData = new JSObject();
+                        errorData.put("error", error);
+                        notifyListeners("faceDetectionError", errorData);
                     }

103-111: FaceDetectionManager is not cleaned up in handleOnDestroy.

When the plugin is destroyed, faceDetectionManager should be stopped and nullified to prevent resource leaks.

     @Override
     protected void handleOnDestroy() {
         super.handleOnDestroy();
+        if (faceDetectionManager != null) {
+            faceDetectionManager.stopDetection();
+            faceDetectionManager = null;
+        }
         if (cameraXView != null) {
             cameraXView.stopSession();
             cameraXView = null;
         }
         lastSessionConfig = null;
     }
ios/Sources/CapgoCameraPreviewPlugin/Plugin.swift (1)

2415-2417: Face detection errors are silently logged without notifying JS.

Similar to Android, the faceDetectionDidFail delegate method only logs the error. Consider emitting an error event to JS for consistent error handling across platforms.

     func faceDetectionDidFail(error: Error) {
         print("[FaceDetection] Detection failed: \(error.localizedDescription)")
+        notifyListeners("faceDetectionError", data: ["error": error.localizedDescription])
     }
example-app/src/app/core/capacitor-camera-preview.service.ts (2)

410-421: Multiple listener registrations will orphan previous listeners.

If addFaceDetectionListener is called multiple times without first calling removeFaceDetectionListener, the previous listener handle is overwritten and the old listener becomes orphaned (still active but unreferenceable).

   async addFaceDetectionListener(
     callback: (result: FaceDetectionResult) => void,
   ): Promise<void> {
+    // Remove existing listener before adding new one
+    await this.removeFaceDetectionListener();
     this.faceDetectionListener = await this.#cameraView.addListener(
       'faceDetection',
       (result: FaceDetectionResult) => {
         this.ngZone.run(() => {
           callback(result);
         });
       },
     );
   }

385-392: Consider removing face detection listener when stopping detection.

For cleaner resource management, stopFaceDetection could automatically remove the listener to prevent stale callbacks.

   async stopFaceDetection(): Promise<void> {
     await this.#cameraView.stopFaceDetection();
+    await this.removeFaceDetectionListener();
   }
example-app/src/app/pages/face-detection-demo/face-detection-demo.page.ts (1)

1-427: Face-detection demo page is well-structured and lifecycle-aware

This page is nicely put together:

  • Uses signals/computed + ChangeDetectionStrategy.OnPush appropriately for a high‑frequency UI.
  • Starts the camera and face detection once in ngOnInit, and cleans up correctly in ngOnDestroy by stopping detection, removing the listener, stopping the camera, cancelling requestAnimationFrame, and unregistering the resize listener.
  • Overlays for face boxes and landmarks use normalized coordinates and cached screenWidth/screenHeight, which matches the full‑screen preview (x=0, y=0, width/height=window.inner*) used in startCameraWithFaceDetection().

If you later want to support non‑full‑screen previews, you could switch getFaceBoxStyle/getLandmarkStyle to derive dimensions from frameWidth/frameHeight plus the actual preview container size, but that’s an optional enhancement for now.

android/src/main/java/app/capgo/capacitor/camera/preview/LightingQualityDetector.java (2)

85-90: Consider using @OptIn annotation for experimental API usage.

The @ExperimentalGetImage annotation on Line 86 is applied locally, but this experimental API usage should ideally be acknowledged at the method level for clarity.

+    @androidx.annotation.OptIn(markerClass = androidx.camera.core.ExperimentalGetImage.class)
     private double calculateBrightness(ImageProxy image) {
-        @androidx.camera.core.ExperimentalGetImage
         android.media.Image mediaImage = image.getImage();

129-135: Consider using a builder pattern or constructor for LightingResult.

Public mutable fields are convenient but allow inconsistent states. Since this is an internal utility, this is acceptable, but consider making fields final and using a constructor for immutability.

ios/Sources/CapgoCameraPreviewPlugin/FaceDetectionManager.swift (3)

282-282: Redundant nil initialization.

Per SwiftLint hint, initializing an optional variable with nil is redundant since optionals default to nil.

         // Extract landmarks if requested
-        var landmarks: FaceLandmarks? = nil
+        var landmarks: FaceLandmarks?

407-424: Motion detection implementation doesn't actually detect motion.

The current implementation only checks if frame timestamps differ, which is always true for consecutive camera frames. The comment on Line 415 acknowledges this: "More sophisticated: compare pixel data or use optical flow."

Since this will always return true for any two distinct frames, motionDetectionEnabled effectively does nothing useful. Consider either implementing actual motion detection or removing this feature to avoid misleading API surface.
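
A cheap way to make the flag meaningful, assuming access to downsampled luminance samples per frame (a hypothetical sketch, not the plugin's code):

```typescript
// Hypothetical mean-absolute-difference motion check over luma samples (0-255).
function hasSignificantMotion(
  prev: number[],
  curr: number[],
  threshold = 10 // mean absolute luma difference that counts as motion
): boolean {
  if (prev.length !== curr.length || prev.length === 0) return true;
  let sum = 0;
  for (let i = 0; i < prev.length; i++) {
    sum += Math.abs(curr[i] - prev[i]);
  }
  return sum / prev.length > threshold;
}

const base = [100, 100, 100, 100];
const motionOnStill = hasSignificantMotion(base, [101, 99, 100, 100]); // false
const motionOnMoved = hasSignificantMotion(base, [100, 160, 40, 100]); // true
```

This keeps the hot path cheap (one pass over a small sample grid) while actually distinguishing a static scene from a moving one.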


477-484: FaceDetectionOptions should conform to Codable for consistency.

All other data models (FaceDetectionResult, DetectedFace, FaceBounds, FaceLandmarks, Point) conform to Codable, but FaceDetectionOptions does not. If options need to be serialized from JavaScript plugin calls, this could cause issues.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 595c038 and 311a50f.

⛔ Files ignored due to path filters (2)
  • bun.lock is excluded by !**/*.lock
  • example-app/bun.lock is excluded by !**/*.lock
📒 Files selected for processing (49)
  • .github/workflows/bump_version.yml (0 hunks)
  • .github/workflows/github-releases-to-discord.yml (0 hunks)
  • .github/workflows/test.yml (0 hunks)
  • .vscode/settings.json (0 hunks)
  • CapgoCameraPreview.podspec (1 hunks)
  • README.md (6 hunks)
  • android/build.gradle (1 hunks)
  • android/src/main/java/app/capgo/capacitor/camera/preview/CameraPreview.java (3 hunks)
  • android/src/main/java/app/capgo/capacitor/camera/preview/CameraXView.java (4 hunks)
  • android/src/main/java/app/capgo/capacitor/camera/preview/FaceAlignmentValidator.java (1 hunks)
  • android/src/main/java/app/capgo/capacitor/camera/preview/FaceCoordinateTransformer.java (1 hunks)
  • android/src/main/java/app/capgo/capacitor/camera/preview/FaceDetectionManager.java (1 hunks)
  • android/src/main/java/app/capgo/capacitor/camera/preview/LightingQualityDetector.java (1 hunks)
  • example-app/.browserslistrc (0 hunks)
  • example-app/.editorconfig (0 hunks)
  • example-app/.eslintrc.json (0 hunks)
  • example-app/.vscode/extensions.json (0 hunks)
  • example-app/.vscode/settings.json (0 hunks)
  • example-app/README.md (1 hunks)
  • example-app/android/capacitor.settings.gradle (1 hunks)
  • example-app/ios/App/App.xcodeproj/project.pbxproj (2 hunks)
  • example-app/ios/App/App.xcodeproj/xcshareddata/xcschemes/App.xcscheme (1 hunks)
  • example-app/ios/App/App.xcworkspace/xcshareddata/IDEWorkspaceChecks.plist (1 hunks)
  • example-app/ios/App/Podfile (2 hunks)
  • example-app/package.json (2 hunks)
  • example-app/src/app/app.routes.ts (1 hunks)
  • example-app/src/app/components/face-filter-demo/face-filter-demo.component.html (1 hunks)
  • example-app/src/app/components/face-filter-demo/face-filter-demo.component.scss (1 hunks)
  • example-app/src/app/components/face-filter-demo/face-filter-demo.component.ts (1 hunks)
  • example-app/src/app/components/face-guidance-overlay/face-guidance-overlay.component.ts (1 hunks)
  • example-app/src/app/components/media-viewer/media-viewer.component.ts (2 hunks)
  • example-app/src/app/core/capacitor-camera-preview.service.ts (2 hunks)
  • example-app/src/app/pages/camera-view/camera-view.page.html (1 hunks)
  • example-app/src/app/pages/camera-view/camera-view.page.ts (3 hunks)
  • example-app/src/app/pages/camera-view/keep (1 hunks)
  • example-app/src/app/pages/face-detection-demo/face-detection-demo.page.html (1 hunks)
  • example-app/src/app/pages/face-detection-demo/face-detection-demo.page.scss (1 hunks)
  • example-app/src/app/pages/face-detection-demo/face-detection-demo.page.ts (1 hunks)
  • example-app/src/app/services/face-alignment.service.ts (1 hunks)
  • example-app/src/global.scss (1 hunks)
  • ios/Sources/CapgoCameraPreviewPlugin/CameraController.swift (5 hunks)
  • ios/Sources/CapgoCameraPreviewPlugin/FaceAlignmentValidator.swift (1 hunks)
  • ios/Sources/CapgoCameraPreviewPlugin/FaceCoordinateTransformer.swift (1 hunks)
  • ios/Sources/CapgoCameraPreviewPlugin/FaceDetectionManager.swift (1 hunks)
  • ios/Sources/CapgoCameraPreviewPlugin/LightingQualityDetector.swift (1 hunks)
  • ios/Sources/CapgoCameraPreviewPlugin/Plugin.swift (6 hunks)
  • ios/Tests/CameraPreviewPluginTests/CameraPreviewPluginTests.swift (1 hunks)
  • src/definitions.ts (3 hunks)
  • src/web.ts (4 hunks)
💤 Files with no reviewable changes (9)
  • .github/workflows/test.yml
  • example-app/.vscode/extensions.json
  • .vscode/settings.json
  • .github/workflows/github-releases-to-discord.yml
  • example-app/.browserslistrc
  • example-app/.editorconfig
  • example-app/.vscode/settings.json
  • example-app/.eslintrc.json
  • .github/workflows/bump_version.yml
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-08-29T15:37:50.891Z
Learnt from: Dalanir
Repo: Cap-go/capacitor-camera-preview PR: 217
File: ios/Sources/CapgoCameraPreviewPlugin/CameraController.swift:1242-1246
Timestamp: 2025-08-29T15:37:50.891Z
Learning: In the iOS CameraController.swift handleTap method, the team wants to maintain the current behavior that forces exposureMode to .autoExpose and resets exposureTargetBias to 0.0 when tapping to focus, even though it overrides the user's current exposure mode setting.

Applied to files:

  • src/web.ts
  • ios/Sources/CapgoCameraPreviewPlugin/CameraController.swift
  • ios/Sources/CapgoCameraPreviewPlugin/Plugin.swift
🧬 Code graph analysis (14)
example-app/src/app/components/face-guidance-overlay/face-guidance-overlay.component.ts (1)
example-app/src/app/pages/face-detection-demo/face-detection-demo.page.ts (1)
  • Component (10-427)
example-app/src/app/services/face-alignment.service.ts (2)
android/src/main/java/app/capgo/capacitor/camera/preview/FaceAlignmentValidator.java (1)
  • AlignmentResult (160-202)
src/definitions.ts (1)
  • DetectedFace (456-503)
android/src/main/java/app/capgo/capacitor/camera/preview/FaceDetectionManager.java (1)
ios/Sources/CapgoCameraPreviewPlugin/FaceDetectionManager.swift (9)
  • isRunning (103-105)
  • hasSignificantMotion (407-424)
  • getTrackingId (340-350)
  • extractLandmarks (306-334)
  • onAppBackground (429-432)
  • onAppForeground (436-440)
  • enableThermalThrottling (445-448)
  • disableThermalThrottling (451-454)
  • isThermalThrottlingActive (459-461)
src/web.ts (1)
src/definitions.ts (1)
  • ExposureMode (341-341)
android/src/main/java/app/capgo/capacitor/camera/preview/FaceAlignmentValidator.java (2)
ios/Sources/CapgoCameraPreviewPlugin/FaceAlignmentValidator.swift (1)
  • validate (73-148)
example-app/src/app/services/face-alignment.service.ts (1)
  • AlignmentResult (16-25)
example-app/src/app/core/capacitor-camera-preview.service.ts (1)
src/definitions.ts (2)
  • FaceDetectionOptions (384-424)
  • FaceDetectionResult (430-450)
ios/Tests/CameraPreviewPluginTests/CameraPreviewPluginTests.swift (1)
src/index.ts (1)
  • CameraPreview (10-10)
ios/Sources/CapgoCameraPreviewPlugin/LightingQualityDetector.swift (1)
android/src/main/java/app/capgo/capacitor/camera/preview/LightingQualityDetector.java (1)
  • LightingResult (129-151)
ios/Sources/CapgoCameraPreviewPlugin/Plugin.swift (1)
ios/Sources/CapgoCameraPreviewPlugin/FaceDetectionManager.swift (5)
  • onAppForeground (436-440)
  • onAppBackground (429-432)
  • isRunning (103-105)
  • start (75-80)
  • stop (84-98)
android/src/main/java/app/capgo/capacitor/camera/preview/LightingQualityDetector.java (1)
ios/Sources/CapgoCameraPreviewPlugin/LightingQualityDetector.swift (2)
  • analyzeLighting (45-78)
  • calculateBrightness (82-109)
example-app/src/app/pages/face-detection-demo/face-detection-demo.page.ts (2)
example-app/src/app/components/face-guidance-overlay/face-guidance-overlay.component.ts (2)
  • Component (10-203)
  • FaceGuidanceState (4-8)
src/definitions.ts (2)
  • DetectedFace (456-503)
  • FaceDetectionResult (430-450)
android/src/main/java/app/capgo/capacitor/camera/preview/FaceCoordinateTransformer.java (1)
ios/Sources/CapgoCameraPreviewPlugin/FaceCoordinateTransformer.swift (2)
  • transformBounds (55-122)
  • transformPoint (128-133)
example-app/src/app/pages/camera-view/camera-view.page.ts (3)
src/definitions.ts (2)
  • DetectedFace (456-503)
  • FaceDetectionResult (430-450)
example-app/src/app/core/capacitor-camera-preview.service.ts (1)
  • isRunning (77-79)
ios/Sources/CapgoCameraPreviewPlugin/FaceDetectionManager.swift (1)
  • isRunning (103-105)
example-app/src/app/components/face-filter-demo/face-filter-demo.component.ts (2)
src/definitions.ts (2)
  • DetectedFace (456-503)
  • FaceDetectionResult (430-450)
src/index.ts (1)
  • CameraPreview (10-10)
🪛 markdownlint-cli2 (0.18.1)
example-app/README.md

126-126: Fenced code blocks should have a language specified

(MD040, fenced-code-language)

🪛 RuboCop (1.81.7)
CapgoCameraPreview.podspec

[convention] 17-18: Extra empty line detected at block body end.

(Layout/EmptyLinesAroundBlockBody)

🪛 SwiftLint (0.57.0)
ios/Sources/CapgoCameraPreviewPlugin/FaceDetectionManager.swift

[Warning] 282-282: Initializing an optional variable with nil is redundant

(redundant_optional_initialization)

🔇 Additional comments (48)
example-app/package.json (2)

9-9: Verify build script alignment and platform constraints.

The build script now explicitly invokes bun run cap sync ios, which may cause issues on non-iOS platforms or in CI/CD environments that don't have iOS tooling available. Ensure this change works across all intended build environments (local development, CI/CD for Android/Web, etc.).


71-71: Clarify TypeScript downgrade from 5.9.0 to 5.8.0.

The TypeScript version has been downgraded, which is atypical unless there's a compatibility constraint. Verify that this downgrade is intentional and compatible with Angular 19.2.6 and the face detection implementation introduced in this PR.

Angular 19.2 is technically compatible with TypeScript 5.8, so the downgrade will build, but it prevents picking up TypeScript 5.9.x bugfix releases. Please document why the downgrade was necessary.

example-app/src/app/components/media-viewer/media-viewer.component.ts (1)

30-37: Gesture lifecycle wiring via Angular hooks looks correct

Initializing the gesture in ngAfterViewInit and destroying it in ngOnDestroy is the right pattern to avoid dangling listeners. No functional issues here; just ensure this component is not re-used outside a modal context where dismiss() might behave differently.

example-app/android/capacitor.settings.gradle (1)

1-21: LGTM - Auto-generated file updated correctly.

These path updates reflect the regenerated bun package hashes after running capacitor update. The changes are consistent across all Capacitor modules and align with the expected tooling behavior.

example-app/src/app/components/face-filter-demo/face-filter-demo.component.html (2)

16-24: LGTM - Good track expression with fallback.

The track face.trackingId || $index pattern provides a stable identity when trackingId is available while falling back to index when it's not. The face overlay structure with transform and size bindings looks well-designed for the AR filter use case.


95-111: LGTM - Face details section is well-guarded.

Proper null checks using @if blocks for optional properties (rollAngle, yawAngle, smilingProbability). The as face alias on line 98 is a clean pattern for accessing the first detected face.

src/web.ts (2)

1269-1272: Good API surface design for unsupported platform.

The asymmetric behavior is appropriate: stopFaceDetection() throws because attempting an unsupported operation should fail explicitly, while isFaceDetectionRunning() returns false as a safe query that allows consumers to check availability without try/catch.

Also applies to: 1279-1281


539-539: Clean pattern for suppressing unused parameter warnings.

Using void _param is an idiomatic way to acknowledge intentionally unused parameters without triggering linter warnings.

Also applies to: 1222-1224, 1239-1241

example-app/src/app/components/face-guidance-overlay/face-guidance-overlay.component.ts (1)

192-203: LGTM!

The component implementation correctly uses Angular signals with an Input setter pattern. The protected access modifier appropriately exposes the signal to the template while preventing external access.

ios/Sources/CapgoCameraPreviewPlugin/LightingQualityDetector.swift (2)

80-108: LGTM!

The brightness calculation handles multiple pixel formats correctly with proper memory management (defer for unlock). The sampling strategy (every 8th pixel) is a good performance optimization, and the ITU-R BT.601 luminance coefficients in the BGRA path are correct.
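
As a numeric illustration of those coefficients and the sampling stride, here is a TypeScript sketch over a flat RGB array (a simplified model of the BGRA path, not the Swift implementation):

```typescript
// BT.601 luma: Y = 0.299 R + 0.587 G + 0.114 B, sampling every 8th pixel.
function meanLuma(rgb: number[], stride = 8): number {
  let sum = 0;
  let count = 0;
  for (let i = 0; i + 2 < rgb.length; i += 3 * stride) {
    sum += 0.299 * rgb[i] + 0.587 * rgb[i + 1] + 0.114 * rgb[i + 2];
    count++;
  }
  return count > 0 ? sum / count : 0;
}

const white = meanLuma([255, 255, 255]); // ~255
const mid = meanLuma([128, 128, 128]);   // ~128
```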


45-78: LGTM!

The lighting assessment logic is well-structured with clear thresholds and appropriate user feedback. The four-tier classification (too dark, too bright, acceptable, optimal) provides good granularity for guiding users.

src/definitions.ts (5)

376-424: LGTM!

The FaceDetectionOptions interface is well-documented with sensible defaults, platform annotations, and clear descriptions. The configuration options cover the key aspects of face detection performance tuning.


452-607: LGTM!

The face detection types are comprehensive and well-documented. Platform-specific differences (e.g., pitchAngle behavior, Android-only landmarks) are clearly annotated, which will help developers handle cross-platform differences appropriately.


609-684: Consider aligning FaceAlignmentFeedback with example app's AlignmentResult.

These types are defined in the plugin definitions, but the example app's FaceAlignmentService defines its own AlignmentResult interface with a slightly different structure. The two could be consolidated to avoid potential confusion.

That said, having these types available in the plugin definitions is useful for consumers who want to implement their own face alignment validation.


1037-1064: LGTM!

The faceDetection event listener is well-documented with a practical usage example. The API follows the established pattern for other listeners in the plugin.


1144-1203: LGTM!

The face detection methods (startFaceDetection, stopFaceDetection, isFaceDetectionRunning) are well-documented with clear examples and follow the existing plugin API patterns. The JSDoc includes helpful platform annotations and error conditions.

example-app/README.md (2)

165-171: Verify iOS version requirement.

The documentation states "iOS 13.0 or higher" for face detection, but the PR description mentions "iOS 11+" for the Vision Framework. Please verify the actual minimum iOS version required for the face detection features implemented.


1-29: LGTM!

The README provides comprehensive documentation with clear prerequisites, installation steps, and feature descriptions. The structure is well-organized and will be helpful for developers setting up the example app.

example-app/src/app/services/face-alignment.service.ts (2)

115-133: Centering validation reports only one axis issue at a time.

The else-if chain means if the face is off-center both horizontally and vertically, only the horizontal issue will be reported. This appears intentional (matches the Android implementation) to give users one clear instruction at a time, which is reasonable UX. Just noting this design decision.
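
The chain can be sketched as follows; thresholds and messages are illustrative, not the service's actual values:

```typescript
// Returns at most one instruction: the first failing axis wins.
function centeringFeedback(dx: number, dy: number, tol = 0.1): string | null {
  if (dx > tol) return "Move left";
  else if (dx < -tol) return "Move right";
  else if (dy > tol) return "Move up";
  else if (dy < -tol) return "Move down";
  return null;
}

const offBoth = centeringFeedback(0.3, 0.3);  // "Move left" - vertical offset not reported
const centered = centeringFeedback(0.0, 0.0); // null
```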


55-145: LGTM!

The validateFace method is well-structured with clear validation logic for each aspect of face alignment. The aggregation of individual validations into isValid and the feedback collection pattern align well with the Android counterpart (FaceAlignmentValidator.java), ensuring cross-platform consistency.

ios/Sources/CapgoCameraPreviewPlugin/FaceAlignmentValidator.swift (1)

1-189: Clean implementation of face alignment validation.

The validator is well-structured with sensible defaults and clear feedback messages. The logic correctly validates roll, pitch, yaw angles along with face size and centering. The AlignmentResult struct provides useful computed properties for accessing feedback.

example-app/src/app/components/face-filter-demo/face-filter-demo.component.ts (1)

155-169: Clarify transform positioning approach.

The translate() CSS function uses percentages relative to the element's own dimensions, not the container. For face overlay positioning, using left and top CSS properties (as percentages of the container) would be more intuitive. However, this may work correctly depending on how the template applies these transforms.

example-app/src/app/components/face-filter-demo/face-filter-demo.component.scss (1)

1-142: Well-structured styling for the face filter overlay.

The SCSS is clean with appropriate layering (z-index), smooth transitions for face tracking, and responsive adjustments. The use of pointer-events: none on overlays ensures camera interaction isn't blocked.

android/src/main/java/app/capgo/capacitor/camera/preview/FaceAlignmentValidator.java (1)

9-18: Cross-platform consistency confirmed.

The default threshold values match the iOS implementation exactly, ensuring consistent behavior across platforms.

ios/Sources/CapgoCameraPreviewPlugin/FaceCoordinateTransformer.swift (2)

55-122: Coordinate transformation logic is well-implemented.

The transformBounds method correctly handles aspect ratio differences and all three video gravity modes (resizeAspect, resizeAspectFill, resize). The orientation-based dimension swapping at lines 65-67 is a good detail.


142-186: Orientation transformations are correctly implemented.

The rotation formulas for each orientation case are mathematically sound. The method properly preserves width/height dimensions while rotating the coordinate origin.

example-app/src/global.scss (1)

37-64: Face detection overlay styles look good.

The use of !important is justified here for overriding Ionic's default opaque backgrounds. The ::part() selectors correctly target Ionic's shadow DOM parts to ensure the camera feed shows through.

example-app/src/app/app.routes.ts (1)

30-36: Route addition follows existing patterns.

The lazy-loaded route is correctly placed before the wildcard redirect and follows the established naming convention.

example-app/src/app/pages/camera-view/camera-view.page.html (1)

592-694: Face Detection UI card is well-structured.

The controls provide comprehensive options for configuring face detection (performance mode, landmarks, tracking, max faces). The conditional rendering with @if blocks appropriately shows dependent controls only when detection is enabled.

example-app/src/app/pages/face-detection-demo/face-detection-demo.page.html (1)

71-114: Stats sheet implementation looks comprehensive.

The draggable stats sheet has good structure with performance metrics and per-face details. The dual touch/pointer event binding provides cross-platform support. Ensure the component code handles potential duplicate events on touch devices that support both event types.
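
One common guard for the dual-binding case, sketched here hypothetically, is to let pointer events win and suppress the browser-synthesized touch events for the same gesture:

```typescript
// Hypothetical dedupe: once a pointer event is seen, touch events are ignored.
class DragDedupe {
  private pointerSeen = false;

  onPointerMove(): string {
    this.pointerSeen = true;
    return "handled:pointer";
  }

  onTouchMove(): string {
    // The pointer handler already covered this gesture.
    return this.pointerSeen ? "ignored:touch" : "handled:touch";
  }
}

const d = new DragDedupe();
const first = d.onPointerMove();
const second = d.onTouchMove(); // duplicate, suppressed
```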

android/src/main/java/app/capgo/capacitor/camera/preview/CameraXView.java (2)

149-151: Face detection fields appropriately added.

The fields are correctly scoped and placed with other camera-related state.


3540-3560: No action required. The processImageProxy method in FaceDetectionManager properly closes the ImageProxy in all execution paths—including early returns for detection disabled, app in background, thermal throttling, frame skipping, and throttling conditions, as well as after processing completes (both success and error paths at lines 219 and 229).

Likely an incorrect or invalid review comment.

android/build.gradle (1)

65-67: ML Kit Face Detection dependency is correctly configured.

Version 16.1.7 is the latest stable release for the bundled model approach. The dependency placement alongside CameraX dependencies is appropriate.

android/src/main/java/app/capgo/capacitor/camera/preview/CameraPreview.java (3)

71-76: LGTM!

Proper null-check before pausing face detection when app goes to background.


87-92: LGTM!

Proper null-check before resuming face detection when app comes to foreground.


1998-2061: LGTM!

The startFaceDetection method properly validates camera state, parses options, initializes the manager lazily, and wires up the listener for results. Good use of null-safe option retrieval.

android/src/main/java/app/capgo/capacitor/camera/preview/FaceCoordinateTransformer.java (2)

58-130: LGTM!

The coordinate transformation logic correctly handles rotation and aspect ratio differences. The step-by-step approach (normalize → rotate → scale/offset) is clear and maintainable.


138-144: Zero-size rect approach works but is slightly indirect.

The transformPoint method creates a zero-width/height rect to reuse transformBounds. While functionally correct (the scaling preserves the origin), a more direct approach could be cleaner. However, this matches the iOS implementation pattern, so consistency is maintained.
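
The equivalence is easy to see with a simplified affine model: a zero-size rect scales to a zero-size rect, so its origin is exactly the transformed point (illustrative sketch, not the Java code):

```typescript
type Rect = { x: number; y: number; w: number; h: number };

// Simplified scale-plus-offset transform standing in for transformBounds.
function transformBounds(r: Rect, scale: number, dx: number, dy: number): Rect {
  return { x: r.x * scale + dx, y: r.y * scale + dy, w: r.w * scale, h: r.h * scale };
}

// transformPoint via a zero-size rect, mirroring the pattern under review.
function transformPoint(x: number, y: number, scale: number, dx: number, dy: number) {
  const out = transformBounds({ x, y, w: 0, h: 0 }, scale, dx, dy);
  return { x: out.x, y: out.y };
}

const p = transformPoint(0.5, 0.25, 200, 10, 20); // { x: 110, y: 70 }
```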

ios/Sources/CapgoCameraPreviewPlugin/Plugin.swift (4)

80-85: LGTM!

Face detection methods properly registered in the plugin methods array.


546-548: LGTM!

Proper lifecycle integration - face detection is paused/resumed with app state changes. The appWillResignActive handler ensures detection pauses when the app goes to background.

Also applies to: 556-558, 561-565


839-853: LGTM!

The preview layering logic correctly handles the toBack flag, inserting the preview at index 0 with transparency when behind the web view, or adding it above with a high zPosition when in front.


2351-2385: LGTM!

The startFaceDetection method properly validates camera state, parses all options with sensible defaults, creates the manager lazily, and sets up the delegate.

example-app/src/app/core/capacitor-camera-preview.service.ts (2)

17-21: LGTM!

Proper imports for face detection types and Capacitor listener handle.


371-433: LGTM!

Well-documented face detection API with proper TypeScript types. The NgZone integration ensures Angular change detection is triggered correctly when face detection events arrive.

ios/Sources/CapgoCameraPreviewPlugin/CameraController.swift (2)

113-115: FaceDetectionManager wiring into sample buffer pipeline looks solid

Using a nullable faceDetectionManager plus isRunning() checks keeps the hot path cheap when detection is disabled, and the first‑frame callback path correctly avoids double‑processing by returning early only when no capture is in progress. This integration reads clean and doesn’t change existing sample capture behavior.

Also applies to: 2405-2424


591-620: Initial zoom guard is a good safety improvement

The new guard around selecting the active device in setInitialZoom(level:) prevents crashes when no camera is available and provides a clear log for that edge case. This is a safe and sensible change.

android/src/main/java/app/capgo/capacitor/camera/preview/LightingQualityDetector.java (1)

141-148: Exposure compensation formula may produce unexpected values.

When isTooDark is true and brightnessLevel is very low (e.g., 0.0), the formula (0.4 - 0.0) * 3.0 = 1.2 yields a moderate adjustment. However, when isTooBright is true with brightnessLevel near 1.0, the formula (0.7 - 1.0) * 3.0 = -0.9 gives a smaller absolute adjustment than expected for extreme brightness.

The iOS counterpart uses different thresholds (0.5 and 0.6). Consider aligning the formulas or documenting the rationale for the asymmetry.
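
Plugging the quoted thresholds into a small sketch makes the asymmetry concrete (Android's 0.4/0.7 thresholds with a gain of 3.0; illustrative only):

```typescript
// Mirrors the Android formula quoted above: adjustment = (threshold - brightness) * 3.0.
function suggestedExposure(brightness: number): number {
  if (brightness < 0.4) return (0.4 - brightness) * 3.0; // too dark
  if (brightness > 0.7) return (0.7 - brightness) * 3.0; // too bright
  return 0;
}

const darkAdjust = suggestedExposure(0.0);   // (0.4 - 0.0) * 3.0, about 1.2
const brightAdjust = suggestedExposure(1.0); // (0.7 - 1.0) * 3.0, about -0.9
```

A pitch-black frame gets a +1.2 correction while a fully blown-out frame gets only -0.9, which is the asymmetry the comment describes.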

ios/Sources/CapgoCameraPreviewPlugin/FaceDetectionManager.swift (1)

39-41: Comment doesn't match code: frameSkipCount = 1 means processing every 2nd frame, not every frame.

The comment on Line 40 says "Process every 2nd frame for better real-time tracking", but the default initialization is frameSkipCount = 1. Given Line 122's logic currentFrameCount % (frameSkipCount + 1) == 0, this means every 2nd frame is processed. If the intent is to process every frame for best real-time tracking, set frameSkipCount = 0.
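
The modulo arithmetic is easy to verify in isolation:

```typescript
// frame % (frameSkipCount + 1) === 0 processes every (frameSkipCount + 1)th frame.
function processedFrames(total: number, frameSkipCount: number): number[] {
  const out: number[] = [];
  for (let frame = 0; frame < total; frame++) {
    if (frame % (frameSkipCount + 1) === 0) out.push(frame);
  }
  return out;
}

const everyOther = processedFrames(6, 1); // [0, 2, 4] - every 2nd frame
const everyFrame = processedFrames(4, 0); // [0, 1, 2, 3] - every frame
```

So frameSkipCount = 1 matches "every 2nd frame", and frameSkipCount = 0 is what "process every frame" would require.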

@coderabbitai

coderabbitai bot commented Dec 16, 2025

Note

Docstrings generation - SUCCESS
Generated docstrings for this pull request at #269

coderabbitai bot added a commit that referenced this pull request Dec 16, 2025
Docstrings generation was requested by @jokabuyasina.

* #268 (comment)

The following files were modified:

* `android/src/main/java/app/capgo/capacitor/camera/preview/CameraPreview.java`
* `android/src/main/java/app/capgo/capacitor/camera/preview/CameraXView.java`
* `android/src/main/java/app/capgo/capacitor/camera/preview/FaceAlignmentValidator.java`
* `android/src/main/java/app/capgo/capacitor/camera/preview/FaceCoordinateTransformer.java`
* `android/src/main/java/app/capgo/capacitor/camera/preview/FaceDetectionManager.java`
* `android/src/main/java/app/capgo/capacitor/camera/preview/LightingQualityDetector.java`
* `ios/Sources/CapgoCameraPreviewPlugin/CameraController.swift`
* `ios/Sources/CapgoCameraPreviewPlugin/FaceAlignmentValidator.swift`
* `ios/Sources/CapgoCameraPreviewPlugin/FaceCoordinateTransformer.swift`
* `ios/Sources/CapgoCameraPreviewPlugin/FaceDetectionManager.swift`
* `ios/Sources/CapgoCameraPreviewPlugin/LightingQualityDetector.swift`
* `ios/Sources/CapgoCameraPreviewPlugin/Plugin.swift`
riderx and others added 5 commits December 19, 2025 12:13
…n tracking

- Added FaceDetectionManager for iOS and Android
- Added FaceAlignmentValidator for face positioning validation
- Added FaceCoordinateTransformer for coordinate transformations
- Added LightingQualityDetector for image quality assessment
- Added face-detection demo page and components in example app
- Added face alignment service
- Updated camera preview plugin with face detection methods
- Resolved conflicts with main branch updates

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
ios/Sources/CapgoCameraPreviewPlugin/CameraController.swift (1)

442-484: Resolution change affects all camera usage, not just face detection.

The preset configuration now defaults to 720p for all camera sessions, regardless of whether face detection is active. This prioritizes face detection performance but may degrade quality for users who don't use face detection features.

Consider making the resolution choice conditional:

  • Use 720p when faceDetectionManager?.isRunning() == true
  • Otherwise, maintain the previous higher-quality defaults
🔎 Suggested conditional preset selection
-        // For face detection, prioritize 720p for optimal performance
-        // Higher resolutions (1080p, 4K) slow down Vision framework without improving accuracy
-        var targetPreset: AVCaptureSession.Preset = .hd1280x720
+        // Default to high quality; downgrade for face detection if needed
+        let useFaceDetectionPreset = faceDetectionManager?.isRunning() ?? false
+        var targetPreset: AVCaptureSession.Preset = useFaceDetectionPreset ? .hd1280x720 : .high
         
         if let aspectRatio = aspectRatio {
             switch aspectRatio {
             case "16:9":
-                // Use 720p for face detection - best balance of quality and performance
-                if captureSession.canSetSessionPreset(.hd1280x720) {
+                if useFaceDetectionPreset && captureSession.canSetSessionPreset(.hd1280x720) {
                     targetPreset = .hd1280x720
-                } else if captureSession.canSetSessionPreset(.hd1920x1080) {
+                } else if captureSession.canSetSessionPreset(.hd1920x1080) {
                     targetPreset = .hd1920x1080
+                } else if captureSession.canSetSessionPreset(.hd1280x720) {
+                    targetPreset = .hd1280x720
                 } else if captureSession.canSetSessionPreset(.high) {
                     targetPreset = .high
                 }
♻️ Duplicate comments (1)
ios/Sources/CapgoCameraPreviewPlugin/CameraController.swift (1)

519-536: Aspect-ratio update still broken due to incorrect superlayer cast.

As flagged in previous reviews, the cast previewLayer.superlayer as? UIView will always fail because superlayer is a CALayer, not a UIView. Runtime aspect-ratio changes remain non-functional.

🔎 Fix: Use superlayer bounds directly
-        DispatchQueue.main.async { [weak self, aspectRatio] in
-            guard let self = self, let previewLayer = self.previewLayer else { return }
-            
-            // Reconfigure preview layer frame based on aspect ratio
-            if let view = previewLayer.superlayer as? UIView {
-                if let aspectRatio = aspectRatio {
-                    let frame = self.calculateAspectRatioFrame(for: aspectRatio, in: view.bounds)
-                    previewLayer.frame = frame
-                    previewLayer.videoGravity = .resizeAspectFill
-                } else {
-                    previewLayer.frame = view.bounds
-                    previewLayer.videoGravity = .resizeAspect
-                }
-                
-                // Keep grid overlay in sync
-                self.gridOverlayView?.frame = previewLayer.frame
-            }
-        }
+        DispatchQueue.main.async { [weak self, aspectRatio] in
+            guard let self = self,
+                  let previewLayer = self.previewLayer,
+                  let superlayer = previewLayer.superlayer else { return }
+
+            let bounds = superlayer.bounds
+            if let aspectRatio = aspectRatio {
+                let frame = self.calculateAspectRatioFrame(for: aspectRatio, in: bounds)
+                previewLayer.frame = frame
+                previewLayer.videoGravity = .resizeAspectFill
+            } else {
+                previewLayer.frame = bounds
+                previewLayer.videoGravity = .resizeAspect
+            }
+
+            // Keep grid overlay in sync
+            self.gridOverlayView?.frame = previewLayer.frame
+        }
🧹 Nitpick comments (3)
CapgoCameraPreview.podspec (1)

17-17: Remove extra blank line at block body end.

RuboCop detected an extra empty line at the end of the block body. Consider removing line 17 to align with Ruby style conventions.

🔎 Proposed fix
 s.swift_version = '5.1'
-
 end
src/web.ts (1)

1260-1263: Use FaceDetectionOptions type instead of any.

The parameter is typed as any, which bypasses TypeScript's type safety. According to the AI summary, FaceDetectionOptions is defined in src/definitions.ts. Consider using the proper type for better type checking and IDE support.

🔎 Proposed fix
-  async startFaceDetection(_config?: any): Promise<void> {
+  async startFaceDetection(_config?: FaceDetectionOptions): Promise<void> {
     void _config;
     throw new Error('Face detection is not supported on the web platform');
   }

Ensure FaceDetectionOptions is imported at the top of the file from ./definitions.
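For context, the typed stub would look roughly like this; the interface below is a hypothetical subset of the real FaceDetectionOptions in src/definitions.ts, shown only to illustrate replacing `any` with a concrete type:

```typescript
// Hypothetical subset of FaceDetectionOptions from src/definitions.ts —
// the real interface carries more fields than shown here.
interface FaceDetectionOptions {
  performanceMode?: 'fast' | 'accurate';
  maxFaces?: number;
  detectLandmarks?: boolean;
}

class CameraPreviewWeb {
  async startFaceDetection(_config?: FaceDetectionOptions): Promise<void> {
    void _config;
    throw new Error('Face detection is not supported on the web platform');
  }
}
```

Callers on the web then get compile-time checking of option names while still receiving the runtime rejection.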

ios/Sources/CapgoCameraPreviewPlugin/Plugin.swift (1)

2418-2420: Consider propagating detection failures to JavaScript.

The faceDetectionDidFail method currently only logs errors locally. Consider notifying the JavaScript layer so the application can respond to detection failures (e.g., show user feedback, retry logic, or adjust UI).

🔎 Proposed enhancement to propagate failures
 func faceDetectionDidFail(error: Error) {
     print("[FaceDetection] Detection failed: \(error.localizedDescription)")
+    notifyListeners("faceDetectionError", data: [
+        "message": error.localizedDescription
+    ])
 }
📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 311a50f and a10a003.

⛔ Files ignored due to path filters (2)
  • bun.lock is excluded by !**/*.lock
  • example-app/bun.lock is excluded by !**/*.lock
📒 Files selected for processing (7)
  • CapgoCameraPreview.podspec (1 hunks)
  • android/build.gradle (1 hunks)
  • example-app/package.json (2 hunks)
  • ios/Sources/CapgoCameraPreviewPlugin/CameraController.swift (5 hunks)
  • ios/Sources/CapgoCameraPreviewPlugin/Plugin.swift (7 hunks)
  • package.json (4 hunks)
  • src/web.ts (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
  • example-app/package.json
  • android/build.gradle
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-08-29T15:37:50.891Z
Learnt from: Dalanir
Repo: Cap-go/capacitor-camera-preview PR: 217
File: ios/Sources/CapgoCameraPreviewPlugin/CameraController.swift:1242-1246
Timestamp: 2025-08-29T15:37:50.891Z
Learning: In the iOS CameraController.swift handleTap method, the team wants to maintain the current behavior that forces exposureMode to .autoExpose and resets exposureTargetBias to 0.0 when tapping to focus, even though it overrides the user's current exposure mode setting.

Applied to files:

  • ios/Sources/CapgoCameraPreviewPlugin/Plugin.swift
  • ios/Sources/CapgoCameraPreviewPlugin/CameraController.swift
🧬 Code graph analysis (2)
ios/Sources/CapgoCameraPreviewPlugin/Plugin.swift (2)
ios/Sources/CapgoCameraPreviewPlugin/FaceDetectionManager.swift (4)
  • onAppForeground (436-440)
  • onAppBackground (429-432)
  • isRunning (103-105)
  • start (75-80)
src/definitions.ts (1)
  • FaceDetectionOptions (384-424)
ios/Sources/CapgoCameraPreviewPlugin/CameraController.swift (2)
ios/Sources/CapgoCameraPreviewPlugin/Plugin.swift (1)
  • isRunning (1535-1538)
ios/Sources/CapgoCameraPreviewPlugin/FaceDetectionManager.swift (2)
  • isRunning (103-105)
  • processSampleBuffer (111-178)
🪛 RuboCop (1.81.7)
CapgoCameraPreview.podspec

[convention] 17-18: Extra empty line detected at block body end.

(Layout/EmptyLinesAroundBlockBody)

🔇 Additional comments (8)
ios/Sources/CapgoCameraPreviewPlugin/Plugin.swift (5)

37-37: LGTM: Version bump aligns with new face detection features.

The version increment from 8.0.3 to 8.0.5 appropriately reflects the addition of face detection capabilities in this PR.


80-84: LGTM: Face detection APIs properly registered.

The three new face detection methods are correctly declared with proper return types.


546-547: LGTM: Lifecycle coordination properly implemented.

Face detection state is correctly synchronized with app lifecycle events:

  • Resume on appDidBecomeActive and appWillEnterForeground
  • Pause on appWillResignActive

Safe optional chaining prevents crashes if face detection is not active.

Also applies to: 556-559, 561-566
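The resume/pause choreography can be modeled as a small state machine. A TypeScript sketch for illustration only — the actual logic lives in native Swift, and the class and method names here are invented:

```typescript
// Simplified model of the lifecycle coordination described above.
type DetectionState = 'stopped' | 'running' | 'paused';

class FaceDetectionLifecycle {
  private state: DetectionState = 'stopped';

  start(): void { this.state = 'running'; }
  stop(): void { this.state = 'stopped'; }

  // appWillResignActive → pause, but only if detection was running
  onBackground(): void {
    if (this.state === 'running') this.state = 'paused';
  }

  // appDidBecomeActive / appWillEnterForeground → resume only a paused session
  onForeground(): void {
    if (this.state === 'paused') this.state = 'running';
  }

  current(): DetectionState { return this.state; }
}
```

Resuming only from the paused state prevents a foreground event from accidentally starting a session the user had stopped.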


839-852: LGTM: Preview layering logic is clear and correct.

The explicit handling of toBack mode ensures proper z-ordering:

  • toBack=true: Preview behind web content with zPosition=-1000 and disabled user interaction
  • toBack=false: Preview above web content with zPosition=1000

This provides predictable layering behavior for different use cases.


879-879: LGTM: Observer registration is correct.

The appWillResignActive notification observer is properly registered and paired with its handler.

ios/Sources/CapgoCameraPreviewPlugin/CameraController.swift (3)

113-114: LGTM: Face detection manager property is appropriately declared.

The optional faceDetectionManager allows lazy initialization when face detection is requested.


592-620: LGTM: Initial zoom logic is correct.

The refactored zoom initialization properly handles the iOS 18+ display multiplier and validates zoom bounds.


2413-2424: LGTM: Face detection frame processing is correctly integrated.

Sample buffers are passed to the face detection manager when active, using safe optional chaining to prevent crashes when face detection is not running.

- Extract preview creation into createConfiguredPreview() helper method
- Apply ResolutionSelector with 720p target and aspect ratio strategy
- Set target rotation and frame rate (20-25 FPS) consistently
- Use helper in both rebindCameraWithAnalysis() and rebindCameraWithoutAnalysis()
- Prevents preview quality/resolution changes when enabling/disabling face detection
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

♻️ Duplicate comments (3)
android/src/main/java/app/capgo/capacitor/camera/preview/CameraXView.java (1)

765-770: Frame rate limit still applies unconditionally.

As noted in previous review comments, the 20-25 FPS target is applied to all preview sessions regardless of whether face detection is enabled. This limitation will affect preview smoothness for users who don't need face detection. The same unconditional limit also appears in the new createConfiguredPreview() helper at line 3605.

Consider making the frame rate limit conditional:

🔎 Proposed fix
 int rotation = previewView != null && previewView.getDisplay() != null
     ? previewView.getDisplay().getRotation()
     : android.view.Surface.ROTATION_0;

-// Configure preview with optimized frame rate for face detection (20-25 FPS)
-Preview preview = new Preview.Builder()
-    .setResolutionSelector(resolutionSelector)
-    .setTargetRotation(rotation)
-    .setTargetFrameRate(new Range<>(20, 25)) // 20-25 FPS for balanced performance
-    .build();
+Preview.Builder previewBuilder = new Preview.Builder()
+    .setResolutionSelector(resolutionSelector)
+    .setTargetRotation(rotation);
+
+// Only limit frame rate when face detection is active
+if (faceDetectionManager != null) {
+    previewBuilder.setTargetFrameRate(new Range<>(20, 25));
+}
+
+Preview preview = previewBuilder.build();

Apply the same pattern to createConfiguredPreview() at lines 3601-3606.

package.json (2)

46-46: Fix the prettier script to use the correct command.

The prettier-pretty-check command does not exist (the package is not available on npm). This will cause the lint and fmt scripts (lines 43-44) to fail when they invoke this script.

🔎 Proposed fix

Change the script back to use prettier:

-    "prettier": "prettier-pretty-check \"**/*.{css,html,ts,js,java}\" --plugin=prettier-plugin-java",
+    "prettier": "prettier \"**/*.{css,html,ts,js,java}\" --plugin=prettier-plugin-java",

The --check and --write flags passed from the lint and fmt scripts will work correctly with the standard prettier command.


71-71: Remove the non-existent package from devDependencies.

This is the same issue flagged in previous reviews: the prettier-pretty-check package (version ^0.2.0) does not exist on npm and will cause npm install to fail.

🧹 Nitpick comments (4)
android/src/main/java/app/capgo/capacitor/camera/preview/CameraXView.java (4)

149-151: Consider thread-safety for face detection fields.

The faceDetectionManager field is accessed from both the main thread (in enableFaceDetection/disableFaceDetection) and the camera executor thread (in the analyzer lambda at line 3550). Without synchronization, this could lead to visibility issues or race conditions if face detection is toggled while frames are being processed.

Consider using volatile or proper synchronization:

🔎 Suggested fix
 // Face Detection
-private FaceDetectionManager faceDetectionManager;
+private volatile FaceDetectionManager faceDetectionManager;
 private androidx.camera.core.ImageAnalysis imageAnalysis;

3562-3569: Consider nulling imageAnalysis when disabling face detection.

The disableFaceDetection() method clears the analyzer but doesn't set imageAnalysis to null. This leaves the ImageAnalysis use case allocated even when face detection is disabled, which may hold unnecessary resources.

🔎 Suggested improvement
 public void disableFaceDetection() {
     this.faceDetectionManager = null;
     if (imageAnalysis != null) {
         imageAnalysis.clearAnalyzer();
+        imageAnalysis = null;
     }
     // Rebind camera without ImageAnalysis
     rebindCameraWithoutAnalysis();
 }

3609-3641: Add null check for previewView before accessing surface provider.

Both rebindCameraWithAnalysis() and rebindCameraWithoutAnalysis() call previewView.getSurfaceProvider() at lines 3620 and 3654 without explicitly checking if previewView is non-null. While the isRunning check likely implies previewView exists, adding an explicit null check would make the code more defensive and prevent potential NPEs if the lifecycle state is inconsistent.

🔎 Suggested fix
 private void rebindCameraWithAnalysis() {
-    if (cameraProvider == null || !isRunning) {
+    if (cameraProvider == null || !isRunning || previewView == null) {
         return;
     }

     try {
         // Unbind all
         cameraProvider.unbindAll();

         // Rebuild preview with proper configuration
         Preview preview = createConfiguredPreview();
         preview.setSurfaceProvider(previewView.getSurfaceProvider());
         // ...

Apply the same fix to rebindCameraWithoutAnalysis().

Also applies to: 3643-3672


3614-3640: Consider propagating rebind errors to the listener.

Both rebind methods catch exceptions and only log them without notifying the listener. If rebinding fails (e.g., due to camera access issues), the app remains in an inconsistent state with face detection enabled/disabled but the camera not properly rebound. Consider calling listener.onCameraStartError() to inform the app layer.

🔎 Suggested improvement
     } catch (Exception e) {
         Log.e(TAG, "Failed to rebind camera with analysis", e);
+        if (listener != null) {
+            listener.onCameraStartError("Failed to rebind camera with face detection: " + e.getMessage());
+        }
     }
 }

Apply similar logic to both rebind methods.

Also applies to: 3648-3671

📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a10a003 and 7160934.

⛔ Files ignored due to path filters (1)
  • bun.lock is excluded by !**/*.lock
📒 Files selected for processing (2)
  • android/src/main/java/app/capgo/capacitor/camera/preview/CameraXView.java (4 hunks)
  • package.json (4 hunks)
🔇 Additional comments (2)
android/src/main/java/app/capgo/capacitor/camera/preview/CameraXView.java (1)

3571-3607: Well done addressing the previous review feedback.

The new createConfiguredPreview() helper successfully addresses the major issue raised in previous reviews about losing preview configuration (ResolutionSelector, TargetRotation, TargetFrameRate) when rebinding the camera for face detection. This helper ensures consistent preview behavior across different binding states.

package.json (1)

13-13: Good addition for package discoverability.

Adding the homepage field improves the package metadata and helps users find documentation.

Comment on lines +69 to +70
"prettier-plugin-organize-imports": "^4.3.0",
"prettier-plugin-packagejson": "^2.5.20",

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Check for prettier configuration files
echo "=== Looking for prettier configuration files ==="
find . -maxdepth 2 -type f \( -name ".prettierrc*" -o -name "prettier.config.*" \) 2>/dev/null

echo ""
echo "=== Checking package.json for prettier config and scripts ==="
if [ -f package.json ]; then
  cat package.json | jq '.prettier, .scripts.prettier' 2>/dev/null || echo "No jq, trying grep:"
  grep -A 20 '"prettier"' package.json 2>/dev/null | head -30
fi

echo ""
echo "=== Checking if @ionic/prettier-config is referenced ==="
grep -r "@ionic/prettier-config" . --include="*.json" --include="*.js" -n 2>/dev/null | head -20

echo ""
echo "=== Searching for plugin references in prettier config ==="
grep -r "plugins" . -n --include=".prettierrc*" --include="prettier.config.*" 2>/dev/null

echo ""
echo "=== Looking for organize-imports or packagejson plugin mentions ==="
grep -r "organize-imports\|packagejson" . -n --include=".prettierrc*" --include="prettier.config.*" --include="package.json" 2>/dev/null

Repository: Cap-go/capacitor-camera-preview

Length of output: 2292


Configure the newly added prettier plugins or remove them as unused dependencies.

The prettier-plugin-organize-imports and prettier-plugin-packagejson packages have been added to devDependencies but are not configured in the project's prettier setup. Prettier version 3 requires explicit plugin configuration, and these plugins are not declared in the prettier configuration (which uses @ionic/prettier-config), the CLI script (which only specifies --plugin=prettier-plugin-java), or any local .prettierrc/prettier.config.js file. Either add them to the prettier configuration or remove them as unused dependencies.

jokabuyasina and others added 2 commits December 20, 2025 03:25
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
- Add detectClassifications option to enable smiling and eye open detection
- Enable CLASSIFICATION_MODE_ALL when detectClassifications is true
- Include smilingProbability, leftEyeOpenProbability, rightEyeOpenProbability in JSON output
- Clamp frameSkipCount to non-negative value to prevent division by zero
- Use Math.max(0, frameSkipCount) before computing divisor for modulo operation
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

♻️ Duplicate comments (3)
ios/Sources/CapgoCameraPreviewPlugin/FaceDetectionManager.swift (3)

166-167: Data race on frame dimensions (still unresolved from previous review).

currentFrameWidth and currentFrameHeight are written on detectionQueue here but read on resultQueue (lines 248-249) without synchronization. This can result in stale or torn values.

As suggested in the previous review, pass frameWidth and frameHeight directly through the completion handler closure to avoid shared mutable state.

🔎 Proposed fix

Update the detection request to capture frame dimensions:

         // Process asynchronously on background queue
         detectionQueue.async { [weak self] in
             guard let self = self, !self.isCancelled else {
                 self?.decrementProcessingCount()
                 return
             }
             
-            // Store frame dimensions for coordinate normalization
-            self.currentFrameWidth = frameWidth
-            self.currentFrameHeight = frameHeight
-            
             // Create face detection request
-            let request = self.createFaceDetectionRequest()
+            let request = self.createFaceDetectionRequest(frameWidth: frameWidth, frameHeight: frameHeight)

Then modify createFaceDetectionRequest to accept and capture these parameters:

private func createFaceDetectionRequest(frameWidth: CGFloat, frameHeight: CGFloat) -> VNDetectFaceLandmarksRequest {
    let request = VNDetectFaceLandmarksRequest { [weak self] request, error in
        // ... existing error handling ...
        self.handleDetectionResults(request.results as? [VNFaceObservation] ?? [], 
                                   frameWidth: frameWidth, 
                                   frameHeight: frameHeight)
    }
    // ... rest of configuration ...
}

Update handleDetectionResults signature to accept dimensions as parameters and remove references to self.currentFrameWidth/Height.


197-200: Delegate callback not dispatched to main thread.

The faceDetectionDidFail call on line 199 is invoked directly from the Vision completion handler (which runs on detectionQueue), while the same delegate method at lines 178-180 is properly dispatched to main thread. This inconsistency can cause issues for delegates expecting main-thread callbacks.

🔎 Proposed fix
             if let error = error {
                 print("[FaceDetection] Detection error: \(error)")
-                self.delegate?.faceDetectionDidFail(error: error)
+                if !self.isCancelled {
+                    DispatchQueue.main.async {
+                        self.delegate?.faceDetectionDidFail(error: error)
+                    }
+                }
                 return
             }

334-336: Mouth landmarks produce duplicate coordinates (still unresolved from previous review).

All three mouth landmarks (mouthLeft, mouthRight, mouthBottom) call getPoint(from: faceLandmarks.outerLips), which returns the first point from the same region. This results in identical coordinates for all three landmarks, which is incorrect.

Consider extracting distinct points from the outerLips.normalizedPoints array:

🔎 Proposed fix
+        // Helper to get specific point from landmark region by index
+        func getPointAt(from region: VNFaceLandmarkRegion2D?, index: Int) -> Point? {
+            guard let region = region, region.pointCount > index else { return nil }
+            return normalizePoint(region.normalizedPoints[index])
+        }
+        
         return FaceLandmarks(
             leftEye: getPoint(from: faceLandmarks.leftEye),
             rightEye: getPoint(from: faceLandmarks.rightEye),
             noseBase: getPoint(from: faceLandmarks.nose),
-            mouthLeft: getPoint(from: faceLandmarks.outerLips), // Approximate
-            mouthRight: getPoint(from: faceLandmarks.outerLips), // Approximate
-            mouthBottom: getPoint(from: faceLandmarks.outerLips), // Approximate
+            mouthLeft: getPointAt(from: faceLandmarks.outerLips, index: 0),
+            // getPointAt already returns nil when the region has fewer points
+            // than the requested index, so no inline bounds ternary is needed
+            mouthRight: getPointAt(from: faceLandmarks.outerLips, index: 6),
+            mouthBottom: getPointAt(from: faceLandmarks.outerLips, index: 3),
             leftEar: nil, // Not reliably detected by Vision
             rightEar: nil,
             leftCheek: nil, // Not available in Vision
             rightCheek: nil
         )

Alternatively, if distinct mouth points cannot be reliably extracted, set these to nil and document the limitation.

🧹 Nitpick comments (1)
ios/Sources/CapgoCameraPreviewPlugin/FaceDetectionManager.swift (1)

290-290: Redundant nil initialization.

Optional variables in Swift are automatically initialized to nil, making the explicit assignment redundant.

🔎 Proposed fix
-        var landmarks: FaceLandmarks? = nil
+        var landmarks: FaceLandmarks?

Based on learnings, static analysis tools can help identify such style issues.

📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 7160934 and e392a7e.

📒 Files selected for processing (1)
  • ios/Sources/CapgoCameraPreviewPlugin/FaceDetectionManager.swift (1 hunks)
🧰 Additional context used
🪛 SwiftLint (0.57.0)
ios/Sources/CapgoCameraPreviewPlugin/FaceDetectionManager.swift

[Warning] 290-290: Initializing an optional variable with nil is redundant

(redundant_optional_initialization)

🔇 Additional comments (5)
ios/Sources/CapgoCameraPreviewPlugin/FaceDetectionManager.swift (5)

1-68: LGTM!

The protocol definition, property declarations, and initialization are well-structured. The use of dedicated queues for detection and result processing shows good concurrency design.


223-260: Result handling logic is well-structured.

The filtering by max faces and minimum face size, along with proper delegate dispatch to main thread (lines 254-258), demonstrates good design. The data race issue with frame dimensions is covered separately.


364-406: LGTM!

The helper methods are well-implemented. The pitch estimation fallback for iOS < 15 is a thoughtful addition, and decrementProcessingCount properly uses the lock for thread safety.


415-480: LGTM!

Motion detection, app lifecycle, and thermal management methods are well-implemented. The frame skipping and motion detection features provide good power management capabilities.


485-536: LGTM!

The data models are well-structured and properly use Codable for serialization. The structure clearly represents the face detection results with appropriate optional fields.

- Remove unconditional call to configureFrameRateForFaceDetection() in configureSessionPreset()
- Add state variables to save device's original activeFormat and frame durations
- Update configureFrameRateForFaceDetection() to save original settings before modifying
- Add restoreOriginalFrameRate() method to restore saved settings and clear state
- Call configureFrameRateForFaceDetection() when face detection starts
- Call restoreOriginalFrameRate() when face detection stops
- Prevents 25 FPS cap from affecting camera when face detection is not in use
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

🧹 Nitpick comments (1)
android/src/main/java/app/capgo/capacitor/camera/preview/FaceDetectionManager.java (1)

119-141: Consider shutting down the executor in stopDetection().

The ExecutorService created in the constructor (line 74) is never explicitly shut down. If FaceDetectionManager instances are short-lived or recreated, this will leak threads. Consider adding executorService.shutdown() after waiting for in-flight tasks to complete.

🔎 Suggested enhancement
         if (faceDetector != null) {
             faceDetector.close();
             faceDetector = null;
         }
+        
+        // Shutdown executor to prevent thread leaks
+        if (executorService != null && !executorService.isShutdown()) {
+            executorService.shutdown();
+        }
 
         Log.d(TAG, "Face detection stopped (" + processingCount.get() + " tasks still processing)");
     }

Note: If you shutdown the executor, you'll need to recreate it in startDetection() to allow restarting detection after stopping.

📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between e392a7e and 52668db.

📒 Files selected for processing (1)
  • android/src/main/java/app/capgo/capacitor/camera/preview/FaceDetectionManager.java (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
android/src/main/java/app/capgo/capacitor/camera/preview/FaceDetectionManager.java (1)
ios/Sources/CapgoCameraPreviewPlugin/FaceDetectionManager.swift (8)
  • hasSignificantMotion (415-432)
  • getTrackingId (348-358)
  • extractLandmarks (314-342)
  • onAppBackground (437-440)
  • onAppForeground (444-448)
  • enableThermalThrottling (453-456)
  • disableThermalThrottling (459-462)
  • isThermalThrottlingActive (467-469)
🔇 Additional comments (2)
android/src/main/java/app/capgo/capacitor/camera/preview/FaceDetectionManager.java (2)

104-106: Previous review feedback addressed successfully.

The classification support and frame skip clamping issues from the previous review have been properly fixed:

  • detectClassifications is now parsed from options (line 256) and used to configure the detector (lines 104-106)
  • Classification probabilities are included in the JSON output when enabled (lines 346-359)
  • frameSkipCount is clamped to prevent division by zero (line 178)

Also applies to: 178-179, 256-256, 346-359
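The clamp matters because the skip count feeds a modulo divisor. A TypeScript sketch of the equivalent logic (the real code is the Java in FaceDetectionManager; this function is illustrative):

```typescript
// Process every (frameSkipCount + 1)-th frame; clamping keeps the divisor >= 1
// so a negative configuration value cannot cause a modulo-by-zero.
function shouldProcessFrame(frameIndex: number, frameSkipCount: number): boolean {
  const divisor = Math.max(0, frameSkipCount) + 1;
  return frameIndex % divisor === 0;
}
```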


212-213: Use of experimental ImageProxy.getImage() API acknowledged.

The code uses @ExperimentalGetImage to access the experimental ImageProxy.getImage() method. This is the standard pattern endorsed by ML Kit documentation for integrating with CameraX. While the API is marked experimental, best practice is to avoid directly closing the Image and instead call ImageProxy.close() on the proxy from which the Image was retrieved.

jonthan kabuya and others added 7 commits December 20, 2025 03:33
- Call faceDetectionManager?.stop() to properly stop detection
- Set faceDetectionManager to nil to release resources
- Prevents resource leaks when camera is stopped or cleaned up
- Replace resultQueue.sync with resultQueue.async when clearing tracking state
- Prevents deadlock if stop() is called from main thread while resultQueue has work dispatching to main
- Allows stop() to return immediately without blocking the calling thread
…etectionManager.java

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
- Convert showLandmarks signal to use getter/setter pattern for ngModel compatibility
- Rename internal signal to showLandmarksSignal to avoid confusion
- Add getter that reads from signal and setter that updates signal
- Update template to use showLandmarksSignal() in @if condition
- Fixes two-way binding [(ngModel)] which expects a plain property, not a signal
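The getter/setter bridge described above looks roughly like this; signal() is a minimal stand-in for Angular's @angular/core signal so the sketch stays self-contained, and the component shape is illustrative:

```typescript
// Minimal stand-in for Angular's signal(), used only to keep this sketch runnable.
function signal<T>(initial: T): (() => T) & { set(v: T): void } {
  let value = initial;
  const read = () => value;
  return Object.assign(read, { set: (v: T) => { value = v; } });
}

class FaceFilterDemo {
  // internal signal, renamed to avoid clashing with the property below
  private showLandmarksSignal = signal(false);

  // [(ngModel)] binds to a plain property, so expose the signal through it
  get showLandmarks(): boolean {
    return this.showLandmarksSignal();
  }
  set showLandmarks(value: boolean) {
    this.showLandmarksSignal.set(value);
  }
}
```

With this shape, [(ngModel)]="showLandmarks" reads and writes a plain property, while template @if blocks keep reading showLandmarksSignal() reactively.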
- Add smilingProbability?: number for smiling detection
- Add leftEyeOpenProbability?: number for left eye state
- Add rightEyeOpenProbability?: number for right eye state
- All properties are optional and only available when detectClassifications is enabled
- Resolves TypeScript errors in face-filter-demo component
- Values range from 0.0 (not present) to 1.0 (definitely present)
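A sketch of how a consumer might use these fields; the property names follow the commit, while isLikelySmiling and its 0.7 cutoff are invented for illustration:

```typescript
// Classification fields added to the face result type. All optional:
// they are only populated when detectClassifications is enabled.
interface FaceClassifications {
  smilingProbability?: number;      // 0.0 (not present) to 1.0 (definitely present)
  leftEyeOpenProbability?: number;
  rightEyeOpenProbability?: number;
}

function isLikelySmiling(face: FaceClassifications, threshold = 0.7): boolean {
  // undefined means classifications were not requested → report false
  return (face.smilingProbability ?? 0) >= threshold;
}
```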
- Pass frame dimensions through closures to avoid data race
- Dispatch faceDetectionDidFail callback to main thread
- Extract distinct mouth landmark points (left, right, bottom) instead of duplicates
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 3

♻️ Duplicate comments (2)
ios/Sources/CapgoCameraPreviewPlugin/CameraController.swift (1)

489-539: Aspect‑ratio preview update now correctly uses superlayer bounds

The revised updateAspectRatio block now bases the preview frame on previewLayer.superlayer?.bounds instead of attempting to cast superlayer to UIView, so runtime aspect-ratio changes will actually resize the preview and keep gridOverlayView in sync. This fixes the earlier no-op behavior and looks solid.

android/src/main/java/app/capgo/capacitor/camera/preview/FaceDetectionManager.java (1)

53-62: Avoid storing closed ImageProxy instances for motion detection

hasSignificantMotion(...) still keeps a reference to an ImageProxy in lastProcessedImage and later calls lastProcessedImage.getImage():

  • Each frame’s ImageProxy is eventually closed in the success/failure listeners in processImageProxy(...).
  • On subsequent calls, lastProcessedImage will often refer to a proxy that has already been closed, so lastProcessedImage.getImage() can return null or fail, and the code assumes it’s usable (only a null check, not a “closed” check).
  • This is the same lifecycle problem previously flagged: storing ImageProxy across frames is unsafe once the camera pipeline is allowed to close/recycle buffers.

Given that this motion detector only needs a coarse signal, you can simplify and make it safe by tracking timestamps only (no retained ImageProxy):

Suggested timestamp‑only motion detection
-    private ImageProxy lastProcessedImage = null;
+    // Track timestamp of last processed frame for lightweight motion detection
+    private long lastProcessedTimestamp = 0;
@@
-    private boolean hasSignificantMotion(ImageProxy currentImage) {
-        if (lastProcessedImage == null) {
-            lastProcessedImage = currentImage;
-            return true; // Always process first frame
-        }
-
-        // Simple motion detection: compare Y plane brightness
-        try {
-            @androidx.camera.core.ExperimentalGetImage
-            android.media.Image current = currentImage.getImage();
-            @androidx.camera.core.ExperimentalGetImage
-            android.media.Image last = lastProcessedImage.getImage();
-
-            if (current == null || last == null) {
-                return true;
-            }
-
-            // Sample center region brightness
-            int width = currentImage.getWidth();
-            int height = currentImage.getHeight();
-            int centerX = width / 2;
-            int centerY = height / 2;
-            int sampleSize = Math.min(width, height) / 10; // 10% of smaller dimension
-
-            // For simplicity, assume motion if images are different objects
-            // More sophisticated: compare pixel data in region
-            boolean hasMotion = (current.getTimestamp() != last.getTimestamp());
-
-            lastProcessedImage = currentImage;
-            return hasMotion;
-        } catch (Exception e) {
-            Log.w(TAG, "Motion detection failed, assuming motion", e);
-            return true; // Fail-safe: process frame
-        }
-    }
+    private boolean hasSignificantMotion(ImageProxy currentImage) {
+        @androidx.camera.core.ExperimentalGetImage
+        android.media.Image current = currentImage.getImage();
+        if (current == null) {
+            // If we can't read an image, err on the side of processing
+            return true;
+        }
+
+        long currentTimestamp = current.getTimestamp();
+        if (lastProcessedTimestamp == 0) {
+            lastProcessedTimestamp = currentTimestamp;
+            return true; // Always process first frame
+        }
+
+        boolean hasMotion = (currentTimestamp != lastProcessedTimestamp);
+        lastProcessedTimestamp = currentTimestamp;
+        return hasMotion;
+    }

Also applies to: 420-461

🧹 Nitpick comments (3)
example-app/src/app/pages/camera-view/camera-view.page.ts (2)

677-684: Consider removing unsupported detectClassifications option.

Line 681 passes detectClassifications: true, but according to past reviews, classification fields (smilingProbability, leftEyeOpenProbability, rightEyeOpenProbability) are not populated by the native implementations. While this won't cause errors, it may mislead future maintainers into expecting these fields to be available.

Based on learnings, classification fields are not currently supported in the native Android and iOS implementations.

🔎 Proposed refactor
       await this.#cameraViewService.startFaceDetection({
         performanceMode: this.faceDetectionPerformance(),
         trackingEnabled: this.trackFaces(),
         detectLandmarks: this.detectLandmarks(),
-        detectClassifications: true,
         maxFaces: this.maxFaces(),
         minFaceSize: 0.15,
       });

695-707: Consider adding a defensive check before cleanup.

If stopFaceDetection() is called when face detection was never started, calling removeFaceDetectionListener() on line 698 might fail or log errors. While typical usage flows through toggleFaceDetection() which checks faceDetectionRunning(), direct API calls could hit this edge case.

🔎 Proposed refactor
   protected async stopFaceDetection(): Promise<void> {
     try {
+      if (!this.faceDetectionRunning()) {
+        return;
+      }
       await this.#cameraViewService.stopFaceDetection();
       await this.#cameraViewService.removeFaceDetectionListener();
       this.faceDetectionRunning.set(false);
       this.detectedFaces.set([]);
       const results = this.testResults() + '\n✓ Face detection stopped';
       this.testResults.set(results);
     } catch (error) {
       const results = this.testResults() + `\n✗ Stop face detection failed: ${String(error)}`;
       this.testResults.set(results);
     }
   }
ios/Sources/CapgoCameraPreviewPlugin/FaceDetectionManager.swift (1)

293-293: Remove redundant nil initialization.

The explicit = nil is unnecessary for optional variables in Swift, as they default to nil.

🔎 Proposed fix
-        var landmarks: FaceLandmarks? = nil
+        var landmarks: FaceLandmarks?
📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 52668db and 8df5890.

📒 Files selected for processing (9)
  • android/src/main/java/app/capgo/capacitor/camera/preview/FaceDetectionManager.java (1 hunks)
  • example-app/src/app/components/face-filter-demo/face-filter-demo.component.html (1 hunks)
  • example-app/src/app/components/face-filter-demo/face-filter-demo.component.ts (1 hunks)
  • example-app/src/app/pages/camera-view/camera-view.page.ts (7 hunks)
  • example-app/src/app/services/face-alignment.service.ts (1 hunks)
  • ios/Sources/CapgoCameraPreviewPlugin/CameraController.swift (5 hunks)
  • ios/Sources/CapgoCameraPreviewPlugin/FaceDetectionManager.swift (1 hunks)
  • ios/Sources/CapgoCameraPreviewPlugin/Plugin.swift (7 hunks)
  • src/definitions.ts (3 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
  • example-app/src/app/components/face-filter-demo/face-filter-demo.component.html
  • example-app/src/app/services/face-alignment.service.ts
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-08-29T15:37:50.891Z
Learnt from: Dalanir
Repo: Cap-go/capacitor-camera-preview PR: 217
File: ios/Sources/CapgoCameraPreviewPlugin/CameraController.swift:1242-1246
Timestamp: 2025-08-29T15:37:50.891Z
Learning: In the iOS CameraController.swift handleTap method, the team wants to maintain the current behavior that forces exposureMode to .autoExpose and resets exposureTargetBias to 0.0 when tapping to focus, even though it overrides the user's current exposure mode setting.

Applied to files:

  • ios/Sources/CapgoCameraPreviewPlugin/Plugin.swift
  • ios/Sources/CapgoCameraPreviewPlugin/CameraController.swift
🧬 Code graph analysis (3)
ios/Sources/CapgoCameraPreviewPlugin/CameraController.swift (1)
ios/Sources/CapgoCameraPreviewPlugin/FaceDetectionManager.swift (3)
  • stop (80-104)
  • isRunning (109-111)
  • processSampleBuffer (117-180)
ios/Sources/CapgoCameraPreviewPlugin/FaceDetectionManager.swift (2)
ios/Sources/CapgoCameraPreviewPlugin/Plugin.swift (2)
  • faceDetectionDidUpdate (2411-2423)
  • faceDetectionDidFail (2425-2427)
src/definitions.ts (6)
  • FaceDetectionOptions (384-424)
  • FaceDetectionResult (430-450)
  • FaceBounds (537-557)
  • DetectedFace (456-530)
  • Point (624-634)
  • FaceLandmarks (564-618)
example-app/src/app/pages/camera-view/camera-view.page.ts (4)
src/definitions.ts (2)
  • DetectedFace (456-530)
  • FaceDetectionResult (430-450)
ios/Sources/CapgoCameraPreviewPlugin/FaceDetectionManager.swift (1)
  • isRunning (109-111)
example-app/src/app/core/capacitor-camera-preview.service.ts (1)
  • isRunning (77-79)
src/web.ts (1)
  • isRunning (721-724)
🪛 SwiftLint (0.57.0)
ios/Sources/CapgoCameraPreviewPlugin/FaceDetectionManager.swift

[Warning] 293-293: Initializing an optional variable with nil is redundant

(redundant_optional_initialization)

🔇 Additional comments (25)
example-app/src/app/pages/camera-view/camera-view.page.ts (5)

42-43: LGTM: Face detection imports added correctly.

The imports for FaceDetectionResult and DetectedFace align with the new face detection methods, and RouterLink is properly registered in the component's imports array.

Also applies to: 48-48, 56-56


133-140: LGTM: Face detection state is well-structured.

The face detection configuration and state fields use appropriate defaults and Angular signals for reactive updates.


441-441: Error logging made explicit, though behavior is largely unchanged.

Inside a template literal, ${error} already applies the same ToString conversion as ${String(error)}, so Error instances render as "Error: message" either way. If the goal is readable output for plain-object rejections (which stringify to "[object Object]" in both forms), something like error instanceof Error ? error.message : JSON.stringify(error) would be needed.

Also applies to: 455-455, 468-468, 481-481, 743-743


660-666: LGTM: Toggle logic is clean and correct.

The method appropriately checks the running state and calls the corresponding start/stop method.


709-753: LGTM: Test and update methods are well-implemented.

The testFaceDetection() method provides comprehensive diagnostics with clear user instructions, and displays face tracking data (position, size, angles) appropriately. The updateFaceDetectionResults() helper provides optional debug logging. Note that classification fields mentioned in past reviews are no longer accessed, addressing that concern.

ios/Sources/CapgoCameraPreviewPlugin/CameraController.swift (1)

2451-2470: Face‑detection hook in captureOutput integrates cleanly with existing flow

The delegate now forwards frames to faceDetectionManager.processSampleBuffer(_:) only when a manager exists and isRunning() is true, and it preserves the existing first‑frame callback/early‑return behavior. This keeps face detection orthogonal to sample capture and should not regress the existing captureSample path.

ios/Sources/CapgoCameraPreviewPlugin/Plugin.swift (1)

35-85: Face‑detection plugin bridge and lifecycle wiring look consistent

  • The new plugin methods (startFaceDetection, stopFaceDetection, isFaceDetectionRunning, getPluginVersion) are correctly exposed in pluginMethods and map cleanly to the native logic in CameraController / FaceDetectionManager.
  • App lifecycle handlers (appDidBecomeActive, appWillEnterForeground, appWillResignActive) forwarding to faceDetectionManager.onAppForeground/onAppBackground give iOS parity with the Android manager’s background/thermal throttling semantics.
  • completeStartCamera’s z‑position handling for toBack vs foreground preview is clear and keeps the transparency helpers intact.
  • The FaceDetectionDelegate extension encodes FaceDetectionResult and forwards it over the faceDetection event, matching the new TypeScript listener overload.

Aside from the frame‑rate restoration caveats noted in CameraController, this bridge layer looks coherent and ready to use.

Also applies to: 541-566, 835-909, 2352-2405, 2410-2427

src/definitions.ts (1)

456-530: Face‑detection typings and listener/method surface are well‑structured

  • The DetectedFace / FaceBounds / FaceLandmarks types with normalized coordinates and Euler angles align with the JSON produced on Android and what the iOS delegate forwards, which is ideal for AR overlays.
  • Separate FaceAlignmentFeedback and LightingQualityFeedback types give room for higher‑level UX helpers without bloating the core FaceDetectionResult.
  • The new addListener('faceDetection', ...) overload and the trio startFaceDetection, stopFaceDetection, isFaceDetectionRunning on CameraPreviewPlugin match the native API surface and are clearly documented with usage examples.

Once FaceDetectionOptions is extended to cover detectClassifications (and any other intended public flags), the TS side looks complete.

Also applies to: 636-711, 1064-1091, 1172-1231

ios/Sources/CapgoCameraPreviewPlugin/FaceDetectionManager.swift (11)

1-64: LGTM! Well-structured class setup.

The protocol definition, properties, and initialization are well-organized. The use of separate queues for detection and result handling with proper QoS settings is appropriate for real-time face tracking.


68-111: LGTM! Lifecycle methods properly handle synchronization.

The start(), stop(), and isRunning() methods correctly manage detection state. The stop() method properly synchronizes access to processingCount with locks and uses async dispatch to resultQueue to avoid potential deadlocks.


117-180: LGTM! Frame processing pipeline is robust.

The processSampleBuffer method correctly handles frame skipping, thermal throttling, and app lifecycle states. Frame dimensions are captured locally to avoid race conditions, and the asynchronous processing with proper locking around processingCount ensures thread safety.
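
The frame-skipping behavior described here can be sketched as a small throttle that lets at most one frame through per minimum interval. This is a hypothetical TypeScript helper for illustration, not the plugin's actual Swift implementation:

```typescript
// Hypothetical frame-throttle sketch: allow at most one processed frame
// per minimum interval, always letting the first frame through.
class FrameThrottle {
  private lastProcessedMs = Number.NEGATIVE_INFINITY;

  constructor(private readonly minIntervalMs: number) {}

  // Returns true when this frame should be processed, and records its time.
  shouldProcess(timestampMs: number): boolean {
    if (timestampMs - this.lastProcessedMs >= this.minIntervalMs) {
      this.lastProcessedMs = timestampMs;
      return true;
    }
    return false;
  }
}
```

The same gate can also be widened dynamically (a larger interval) under thermal pressure or when the app is backgrounded, which is the throttling pattern the review refers to.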


190-218: LGTM! Vision request configuration is optimized for real-time performance.

The createFaceDetectionRequest method properly configures the Vision request with appropriate revision settings and GPU acceleration. Frame dimensions are passed through the closure to handleDetectionResults, avoiding shared mutable state.


226-263: LGTM! Result handling is thread-safe and properly filtered.

The handleDetectionResults method correctly processes face observations on the resultQueue, applies filtering for face count and size limits, and dispatches delegate callbacks to the main thread. Using passed parameters for frame dimensions avoids race conditions.


269-309: LGTM! Face conversion handles coordinate systems correctly.

The convertToDetectedFace method properly converts Vision's bottom-left coordinate system to top-left, extracts rotation angles with appropriate iOS version checks, and includes a fallback for pitch estimation on older iOS versions.
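
The bottom-left to top-left conversion mentioned above amounts to a y-axis flip on normalized coordinates. A minimal TypeScript sketch (names are illustrative, not the plugin's API):

```typescript
interface NormalizedRect {
  x: number;      // left edge, 0..1
  y: number;      // vertical edge (origin-dependent), 0..1
  width: number;  // 0..1
  height: number; // 0..1
}

// Vision reports boxes with the origin at the bottom-left; screen overlays
// use a top-left origin. The top-left y equals 1 - (bottomLeftY + height).
function visionToTopLeft(rect: NormalizedRect): NormalizedRect {
  return { ...rect, y: 1 - (rect.y + rect.height) };
}
```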


317-351: LGTM! Landmark extraction is well-implemented.

The extractLandmarks method correctly transforms Vision's landmark coordinates to normalized screen coordinates. The use of specific indices for mouth corners (fixed from the previous review) provides distinct points, and the documentation of Vision framework limitations for ears and cheeks is helpful.


357-367: LGTM! Face tracking ID management is thread-safe.

The getTrackingId method uses Vision's UUID-based face tracking to maintain consistent IDs across frames. All accesses to faceTrackingMap occur on the serial resultQueue, ensuring thread safety.
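
The UUID-based tracking described above can be modeled as a map from Vision's per-face UUID strings to stable integer IDs. A hypothetical sketch of that mapping:

```typescript
// Hypothetical tracking-ID map: each new UUID gets the next integer ID,
// and repeated sightings of the same face keep the same ID across frames.
class TrackingIdMap {
  private readonly ids = new Map<string, number>();
  private nextId = 0;

  idFor(uuid: string): number {
    let id = this.ids.get(uuid);
    if (id === undefined) {
      id = this.nextId++;
      this.ids.set(uuid, id);
    }
    return id;
  }
}
```

In the real manager all accesses happen on a serial queue, which is what makes the shared map safe without extra locking.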


373-441: LGTM! Helper methods are well-implemented.

The utility methods are straightforward and correct. The estimatePitchFromLandmarks provides a reasonable fallback for iOS < 15, decrementProcessingCount properly uses locking, and the motion detection is appropriate for the power management use case.


446-489: LGTM! Power and thermal management is straightforward.

The lifecycle hooks and power management methods provide appropriate controls for optimizing battery usage and handling thermal conditions. The logging statements are helpful for debugging.


494-545: LGTM! Data models align with cross-platform type definitions.

The Swift data models correctly mirror the TypeScript interfaces defined in src/definitions.ts. The Codable conformance enables seamless JSON serialization for the native-to-JS bridge.

example-app/src/app/components/face-filter-demo/face-filter-demo.component.ts (6)

1-42: LGTM! Component setup is clean and follows Angular standalone best practices.

The imports and component decorator are well-organized. The standalone component pattern with explicit imports is the modern Angular approach.


43-59: LGTM! Reactive state management with signals is well-implemented.

The use of Angular signals for reactive state and the getter/setter pattern to enable two-way binding with ngModel is a good approach. The loose typing of faceDetectionListener as any is acceptable for demo code.


61-94: LGTM! Lifecycle management is correct.

The component properly requests permissions, initializes the camera, and starts face detection in ngOnInit. The ngOnDestroy cleanup correctly stops detection and the camera. Error handling with console.error is appropriate for a demo component.


96-158: LGTM! Face detection control methods are well-implemented.

The startFaceDetection method correctly configures options and sets up the listener to update reactive signals. The stopFaceDetection method properly cleans up resources, and the toggle logic is straightforward. The reference to smilingProbability is valid as it's defined in the DetectedFace interface.


160-162: LGTM! Modal dismissal follows standard Ionic patterns.

The close method correctly uses the ModalController to dismiss the modal.


167-198: LGTM! UI helper methods correctly transform normalized coordinates.

The helper methods properly convert normalized coordinates to CSS percentage values for positioning and sizing face overlays. The inclusion of rotation support in getFaceTransform is appropriate for accurate overlay alignment.
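
The normalized-to-CSS conversion these helpers perform can be sketched as follows (illustrative TypeScript, not the component's exact code):

```typescript
interface FaceBounds {
  x: number;      // normalized left, 0..1
  y: number;      // normalized top, 0..1
  width: number;  // normalized width, 0..1
  height: number; // normalized height, 0..1
}

// Convert normalized [0, 1] bounds into CSS percentage strings for an
// absolutely positioned overlay element inside the preview container.
function toOverlayStyle(b: FaceBounds): Record<string, string> {
  return {
    left: `${b.x * 100}%`,
    top: `${b.y * 100}%`,
    width: `${b.width * 100}%`,
    height: `${b.height * 100}%`,
  };
}
```

Because the values are percentages of the container, the overlay stays aligned when the preview is resized, which is why normalized coordinates are the right interchange format here.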

@riderx
Member

riderx commented Dec 20, 2025

there are many files deleted again that were not supposed to be
