Add Tongue Tracking to MediaPipe Face Mesh and Blendshapes #5857

Open
@gb2111

Description

MediaPipe Solution (you are using)

No response

Programming language

No response

Are you willing to contribute it

No

Describe the feature and the current behaviour/state

MediaPipe Face Mesh tracks facial features but lacks tongue tracking. Adding tongue-related blendshapes would improve mouth animation and interaction, enabling more accurate facial expression modeling.
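
As context for what consuming this feature could look like, here is a minimal sketch of how tongue blendshape scores might be separated from the existing categories if MediaPipe added them. The category names `tongueOut` and `tongueUp` are assumptions for illustration — ARKit's blendshape set defines `tongueOut`, but MediaPipe's current blendshape model does not output any tongue category:

```python
# Sketch: handling tongue blendshapes in a (name -> score) mapping,
# assuming MediaPipe one day emits tongue* categories alongside the
# existing ones. Tongue names below are hypothetical, not MediaPipe API.

def split_tongue_scores(blendshapes):
    """Separate tongue-related categories from the rest of a
    blendshape name -> score mapping."""
    tongue = {n: s for n, s in blendshapes.items() if n.startswith("tongue")}
    other = {n: s for n, s in blendshapes.items() if not n.startswith("tongue")}
    return tongue, other

# Example frame: two real MediaPipe categories plus hypothetical tongue ones.
frame = {
    "jawOpen": 0.62,
    "mouthPucker": 0.08,
    "tongueOut": 0.75,  # exists in ARKit; hypothetical for MediaPipe
    "tongueUp": 0.10,   # hypothetical
}
tongue, other = split_tongue_scores(frame)
```

Because the existing blendshape output is a flat list of named scores, new tongue categories could be appended without breaking consumers that look categories up by name.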

Will this change the current API? How?

No response

Who will benefit with this feature?

No response

Please specify the use cases for this feature

- VTubing & Avatars – better lip-sync and expressions.
- Speech Therapy – real-time tongue movement feedback.
- Accessibility – tongue gestures for input control.
- Gaming & VR – enhanced interactions and realism.
- Medical & Research – improved speech studies and articulation tracking.

Any Other info

No response

Metadata

Labels

- stat:awaiting googler – Waiting for Google Engineer's Response
- task:face landmarker – Issues related to Face Landmarker: Identify facial features for visual effects and avatars.
- type:feature – Enhancement in the New Functionality or Request for a New Solution
