
Speed up im2col_encoding for GPUs #503

@dimk1

Description


Currently, in encrypted inference, we encrypt images one by one by calling the ts.im2col_encoding() function, and then run encrypted inference as model(context, x_enc, windows_nb) on a single sample x_enc. I think this is the main performance bottleneck. GPU acceleration pays off most when inference runs on batches of data, i.e. model(batch_x), where batch_x is a 3D or 4D tensor (num_samples, width, height). But in encrypted inference num_samples = 1, so GPU utilization is very low. I looked for a way to batch CKKSTensors in TenSEAL but could not find one. Have you considered this feature as an improvement? It would speed things up a lot.
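For readers unfamiliar with the transform being discussed: the plaintext side of im2col encoding can be sketched in plain Python. This is a minimal illustration of the general im2col technique, not TenSEAL's actual implementation; the function name and parameters here are illustrative. Each kernel-sized window of the image is flattened into one row, so the convolution at the heart of the model becomes a matrix multiplication over those rows, and windows_nb corresponds to the number of rows produced.

```python
# Minimal plain-Python sketch of the im2col transform (illustrative only;
# not TenSEAL's implementation). Each kernel-sized window of the image is
# flattened into one row, turning a convolution into a matrix multiply.
def im2col(image, kernel_h, kernel_w, stride=1):
    rows = len(image)
    cols = len(image[0])
    windows = []
    # Slide the kernel over every valid top-left position.
    for i in range(0, rows - kernel_h + 1, stride):
        for j in range(0, cols - kernel_w + 1, stride):
            window = []
            for di in range(kernel_h):
                window.extend(image[i + di][j:j + kernel_w])
            windows.append(window)
    return windows  # shape: (num_windows, kernel_h * kernel_w)

# A 3x3 image with a 2x2 kernel and stride 1 yields 4 windows.
img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
print(im2col(img, 2, 2))
# [[1, 2, 4, 5], [2, 3, 5, 6], [4, 5, 7, 8], [5, 6, 8, 9]]
```

A batched variant would apply this per sample and stack the results into one tensor, which is exactly the shape a GPU-backed model(batch_x) call would want.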

Metadata



    Labels

    Type: Improvement 📈 (Performance improvement not introducing a new feature or requiring a major refactor)
