Description
Hi @tomas-gajarsky, I am integrating the facetorch model into my system via an API, but I am facing a problem:
The API receives packets containing images compressed in base64 format, so I want to pass the image directly to `FaceAnalyzer.run` instead of providing a path to an image file. I tried converting the image to a `torch.Tensor` and passing it to the function via the `tensor` parameter, but it did not work:
```python
img_tensor = torch.from_numpy(img_rgb).permute(2, 0, 1).float()  # (H, W, C) -> (C, H, W)
print(type(img_tensor))

response = analyzer.run(
    tensor=img_tensor,
    batch_size=cfg.batch_size,
    fix_img_size=cfg.fix_img_size,
    return_img_data=cfg.return_img_data,
    include_tensors=cfg.include_tensors,
    path_output=path_img_output,
)
```
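For completeness, this is roughly how I decode the base64 payload into the `img_rgb` array above (a minimal sketch using Pillow; `b64_to_tensor` is my own helper, not part of facetorch):

```python
import base64
import io

import numpy as np
import torch
from PIL import Image


def b64_to_tensor(b64_str: str) -> torch.Tensor:
    """Decode a base64-encoded image into a (C, H, W) float32 tensor."""
    raw = base64.b64decode(b64_str)
    img = Image.open(io.BytesIO(raw)).convert("RGB")  # H x W x 3, uint8
    img_rgb = np.asarray(img)
    return torch.from_numpy(img_rgb).permute(2, 0, 1).float()
```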
The error is:

```
File "/app/facetorch/analyzer/reader/core.py", line 141, in run
    data.img = torchvision.io.read_image(
File "/usr/local/lib/python3.10/site-packages/torchvision/io/image.py", line 275, in read_image
    data = read_file(path)
File "/usr/local/lib/python3.10/site-packages/torchvision/io/image.py", line 52, in read_file
    data = torch.ops.image.read_file(str(path))
File "/usr/local/lib/python3.10/site-packages/torch/ops.py", line 854, in call
    return self._op(*args, **(kwargs or {}))
RuntimeError: [Errno 36] File name too long: 'tensor([[[134., 131., 136., ..., 199., 199., 199.],
    [129., 130., 136., ..., 198., 198., 198.],
    [127., 130., 135., ..., 198., 198., 198.],
```

Judging from the traceback, the reader converts the tensor to a string and passes it to `torchvision.io.read_image` as if it were a file path, which is why it fails with `File name too long`.
Please help me solve this problem. Thank you!
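In the meantime, a workaround I am considering is writing the decoded bytes to a temporary file and passing its path to the analyzer (a minimal sketch; `b64_to_temp_path` is my own helper, and the `path_image` keyword in the usage comment is my assumption about the run signature):

```python
import base64
import os
import tempfile


def b64_to_temp_path(b64_str: str, suffix: str = ".jpg") -> str:
    """Write base64-decoded image bytes to a temp file and return its path."""
    raw = base64.b64decode(b64_str)
    fd, path = tempfile.mkstemp(suffix=suffix)
    with os.fdopen(fd, "wb") as f:
        f.write(raw)
    return path


# Hypothetical usage (parameter name assumed):
# path = b64_to_temp_path(packet_b64)
# response = analyzer.run(path_image=path, ...)
# os.remove(path)  # clean up after the analyzer is done
```

This avoids the tensor path entirely, at the cost of one disk write per request.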