[FEATURE] Off device transcription #412

@mikemachr

Description

Feature Description

Live-stream the audio currently being recorded to a model provider over the network, to do off-device transcription.

Problem Statement

Good transcription models are computationally heavy, which is a problem on portable machines.

Proposed Solution

Set it up like the existing summary options: the user chooses a model provider (or points to a local-network endpoint) to send transcription requests to. Audio is sent while recording, and transcripts are received back when ready, so the feature is transparent to the user.
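A minimal sketch of what one request per recorded chunk could look like. Everything here is an assumption, not an existing API: the endpoint URL, the JSON payload shape, and the field names are hypothetical placeholders a real provider would define; the point is that each chunk carries a session ID and a sequence number so the receiving side can restore order.

```python
import base64
import json


def make_chunk_request(session_id: str, seq: int, pcm_bytes: bytes,
                       endpoint: str = "https://example.invalid/v1/transcribe"):
    """Build one transcription request for a chunk of recorded audio.

    Hypothetical payload: a real provider API would define its own
    endpoint and schema. The sequence number lets the client (or
    server) reassemble results in recording order.
    """
    payload = {
        "session": session_id,
        "seq": seq,                     # position of this chunk in the recording
        "audio": base64.b64encode(pcm_bytes).decode("ascii"),
    }
    return endpoint, json.dumps(payload)
```

The request body could then be POSTed with whatever HTTP client the app already uses; responses can arrive in any order because each one can echo back the `seq` it belongs to.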

User Story

As a MacBook user,
I want to use off-device transcription,
So that I can get the best transcription models without draining my battery by using my device for compute.

Acceptance Criteria

  • User can choose any provider they want, or point to a custom endpoint, just like the summarize feature
  • Transcription keeps working as expected, but the compute is off-device

Technical Considerations

Ordering of the chunks is important when they are sent through the network: results may come back out of order and must be reassembled in recording order.
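The ordering concern above can be handled with a small reassembly buffer: tag each chunk with a sequence number when it is sent, and hold any result that arrives early until its predecessors have been released. A sketch (the class name and interface are illustrative, not from any existing codebase):

```python
import heapq


class ChunkReorderer:
    """Buffers transcription results that arrive out of order over the
    network and releases them in recording order."""

    def __init__(self) -> None:
        self._next_seq = 0        # next sequence number to release
        self._pending = []        # min-heap of (seq, text) held back

    def push(self, seq: int, text: str) -> list[str]:
        """Accept one (possibly early) result; return every result that
        is now ready to display, in order."""
        heapq.heappush(self._pending, (seq, text))
        ready = []
        while self._pending and self._pending[0][0] == self._next_seq:
            _, chunk_text = heapq.heappop(self._pending)
            ready.append(chunk_text)
            self._next_seq += 1
        return ready
```

For example, if chunk 1 arrives before chunk 0, `push(1, ...)` returns nothing, and the later `push(0, ...)` releases both chunks in order.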

Alternatives Considered

None

Additional Context

None

Checklist

  • I have searched for similar feature requests
  • I have provided all required information
  • I have included any relevant screenshots/mockups
  • I have described the problem and proposed solution clearly

Metadata

Labels: enhancement (New feature or request)