Open
Labels
Type: Improvement 📈 — Performance improvement not introducing a new feature or requiring a major refactor
Description
Currently, in encrypted inference, images are encrypted one at a time via ts.im2col_encoding(), and inference runs as model(context, x_enc, windows_nb) on a single sample x_enc. I think this is the most serious performance bottleneck. GPU acceleration pays off most when inference runs on batches of data, i.e. model(batch_x) where batch_x is a 3-D or 4-D tensor (num_samples, width, height). But in encrypted inference num_samples = 1, so GPU utilization is very low. I looked for a "CKKSTensor batching" feature in TenSEAL but could not find one. Have you considered this feature as an improvement? It would speed things up a lot.
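To make the request concrete, here is a plaintext sketch of the im2col transform that ts.im2col_encoding() performs per sample, plus a hypothetical batched variant showing the shape a batched API could produce. This is plain NumPy for illustration only — `im2col` and `im2col_batch` are names I made up and are not part of TenSEAL's API, and real CKKS batching would pack samples into ciphertext slots rather than stack plaintext arrays.

```python
import numpy as np

def im2col(img, kh, kw, stride=1):
    """Flatten each kh x kw window of a 2-D image into one row
    (the plaintext analogue of what ts.im2col_encoding does per sample)."""
    h, w = img.shape
    out_h = (h - kh) // stride + 1
    out_w = (w - kw) // stride + 1
    cols = np.empty((out_h * out_w, kh * kw))
    idx = 0
    for i in range(0, h - kh + 1, stride):
        for j in range(0, w - kw + 1, stride):
            cols[idx] = img[i:i + kh, j:j + kw].ravel()
            idx += 1
    return cols  # shape: (windows_nb, kh * kw)

def im2col_batch(batch, kh, kw, stride=1):
    """Hypothetical batched variant: one leading num_samples axis,
    so a single model(batch_x) call could process all samples at once."""
    return np.stack([im2col(x, kh, kw, stride) for x in batch])
    # shape: (num_samples, windows_nb, kh * kw)
```

The point of the leading num_samples axis is exactly the GPU-utilization argument above: one batched call amortizes kernel-launch and ciphertext overhead over many samples instead of paying it once per image.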