
Move tokenizer information into pte to reduce ExecuTorch runner args #1484

Open
@Jack-Khuu

Description

🚀 The feature, motivation and pitch

After an ExecuTorch model is exported to a .pte file, the tokenizer type must be passed to the runner as an argument (-l <#>). This can be avoided by writing the information into the .pte file itself, since the tokenizer is known at export time (sentencepiece => 2, tiktoken => 3). The tokenizer type can be stored during export as a constant_method.

For example: https://github.com/pytorch/torchchat?tab=readme-ov-file#deploy-and-run-on-android

cmake-out/et_run llama3.1.pte -z `python3 torchchat.py where llama3.1`/tokenizer.model -l 3 -i "Once upon a time"
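As a rough illustration of the mechanism (not torchchat's actual export code), a minimal sketch of baking a tokenizer id into the .pte via ExecuTorch's constant_methods could look like the following; the method name get_tokenizer_type and the toy model are assumptions, while the 2/3 encoding comes from the description above.

```python
# Minimal sketch, assuming executorch is installed. get_tokenizer_type is a
# hypothetical method name; 2 = sentencepiece, 3 = tiktoken per this issue.
import torch
from torch.export import export
from executorch.exir import to_edge


class ToyModel(torch.nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + 1


aten_program = export(ToyModel(), (torch.ones(2),))

# constant_methods adds extra methods to the program that simply return
# fixed values, so a runner can query them without any runtime input.
edge = to_edge(aten_program, constant_methods={"get_tokenizer_type": 3})

with open("model.pte", "wb") as f:
    f.write(edge.to_executorch().buffer)
```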

Tasks:

  1. Update ExecuTorch exporting to save tokenization information in the pte artifact
  2. Update the ExecuTorch runner to read the newly saved metadata
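The runner change itself would land in the C++ et_run code, but the lookup it needs can be sketched with ExecuTorch's Python runtime bindings, assuming a model.pte exported with the hypothetical get_tokenizer_type constant method from the sketch above:

```python
# Sketch of the metadata lookup the runner would perform, shown here via
# the Python runtime bindings rather than the actual C++ runner.
from pathlib import Path

from executorch.runtime import Runtime

runtime = Runtime.get()
program = runtime.load_program(Path("model.pte"))

# A constant method takes no inputs and returns its baked-in value,
# replacing the -l <#> CLI argument.
tokenizer_type = program.load_method("get_tokenizer_type").execute([])[0]
print(tokenizer_type)  # 3 => tiktoken, 2 => sentencepiece
```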

For a similar optimization made for AOTI, see #1159.
See #1439 for more context and discussion.

Alternatives

Continue to pass tokenizer arguments to the runner

Additional context

No response

RFC (Optional)

No response


Labels

ExecuTorch, actionable, enhancement, good first issue, triaged
