TrOCR: Transformer-based model for state-of-the-art optical character recognition (OCR) on both printed and handwritten text
TrOCR is an end-to-end text recognition approach that pairs a pre-trained image Transformer for image understanding with a pre-trained text Transformer for wordpiece-level text generation.
This is based on the implementation of TrOCR found here. This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance across various devices can be found here.
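For reference, here is a minimal inference sketch against the Hugging Face implementation that TrOCR is based on. The checkpoint name and image path are illustrative assumptions, and the `transformers` and `Pillow` packages are assumed to be installed:

```python
# A minimal sketch of TrOCR inference with the Hugging Face reference
# implementation. Checkpoint and image path are illustrative assumptions.
from PIL import Image
from transformers import TrOCRProcessor, VisionEncoderDecoderModel

processor = TrOCRProcessor.from_pretrained("microsoft/trocr-base-handwritten")
model = VisionEncoderDecoderModel.from_pretrained("microsoft/trocr-base-handwritten")

# TrOCR expects one cropped line of text per image.
image = Image.open("text_line.png").convert("RGB")
pixel_values = processor(images=image, return_tensors="pt").pixel_values

# The text Transformer autoregressively generates wordpiece tokens,
# which are decoded back into a string.
generated_ids = model.generate(pixel_values)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```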
Qualcomm AI Hub Models uses Qualcomm AI Hub Workbench to compile, profile, and evaluate this model. Sign up to run these models on a hosted Qualcomm® device.
Install the package via pip:
```bash
# NOTE: 3.10 <= PYTHON_VERSION < 3.14 is supported.
pip install "qai-hub-models[trocr]"
```

Sign in to Qualcomm® AI Hub Workbench with your Qualcomm® ID. Once signed in, navigate to Account -> Settings -> API Token.
With this API token, you can configure your client to run models on cloud-hosted devices:

```bash
qai-hub configure --api_token API_TOKEN
```

Navigate to the docs for more information.
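As a quick sanity check that the token is configured correctly, you can list the hosted devices visible to your account from Python (a sketch assuming the `qai_hub` client installed above):

```python
# Lists the hosted devices visible to your configured API token.
import qai_hub as hub

for device in hub.get_devices():
    print(device.name)
```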
Run the following simple CLI demo to verify the model is working end to end:
```bash
python -m qai_hub_models.models.trocr.demo
```

More details on the CLI tool can be found with the --help option. See demo.py for sample usage of the model, including the pre/post-processing scripts. Please refer to our general instructions on using models for further guidance.
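Beyond the CLI demo, the model can also be loaded directly in Python. This is a hedged sketch, assuming the package follows the standard Qualcomm AI Hub Models convention of exposing a `Model` class with a `from_pretrained()` entry point; see demo.py for the full pipeline:

```python
# Sketch: load the TrOCR model wrapper from qai-hub-models, assuming the
# package's usual `Model.from_pretrained()` convention. demo.py shows how
# the pre/post-processing steps wrap around this model.
from qai_hub_models.models.trocr import Model

model = Model.from_pretrained()
```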
To run the model on Qualcomm® devices, you must export the model for use with an edge runtime such as TensorFlow Lite, ONNX Runtime, or Qualcomm AI Engine Direct. Use the following command to export the model:
```bash
python -m qai_hub_models.models.trocr.export
```

Additional options are documented with the --help option.
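For example, to target a specific hosted device, the export script can typically be pointed at one by name. The `--device` flag and device name below are assumptions based on the usual export-script options; confirm the exact set supported by this model with `--help`:

```bash
# Assumed flag and device name; confirm with --help before use.
python -m qai_hub_models.models.trocr.export --device "Samsung Galaxy S24"
```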
- The license for the original implementation of TrOCR can be found here.
- TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models
- Source Model Implementation
- Join our AI Hub Slack community to collaborate, post questions, and learn more about on-device AI.
- For questions or feedback, please reach out to us.