Vision Service: Use an inference server or framework instead of custom microservices #1

@vigsterkr

Description

First of all, kudos for PhotoPrism! I think it is a great idea to factor out the ML part into a microservice, but I'm really wondering whether it is a good idea to develop an ML server from scratch. Looking at the current code in particular, it hard-codes models, and that does not scale. There are plenty of mature ML serving frameworks out there; TensorFlow has its own TensorFlow Serving, but that of course has its own problem: it ties you to a specific backend, namely TF.

https://github.com/roboflow/inference is something that comes to mind that is backend-agnostic, and here is a quite good list of possible options: https://github.com/topics/inference-server
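To illustrate the decoupling an off-the-shelf inference server buys you, here is a minimal sketch of a client talking to TensorFlow Serving's documented REST API (`POST /v1/models/<name>:predict` on port 8501). The host and model name are hypothetical placeholders, and the actual HTTP call is left out so the sketch stays offline; the point is that the application only deals with a model name and JSON, not with model files baked into its own code.

```python
import json

def build_predict_request(host: str, model: str, instances: list) -> tuple[str, str]:
    """Build the URL and JSON body for a TF Serving REST predict call.

    Swapping the model (or retraining it) only changes the `model`
    argument here -- the serving layer loads the weights, so nothing
    is hard-coded in the application.
    """
    url = f"http://{host}:8501/v1/models/{model}:predict"
    body = json.dumps({"instances": instances})
    return url, body

# Hypothetical example: an image-classification model served under
# the name "labels"; the request would be sent with any HTTP client,
# e.g. requests.post(url, data=body).
url, body = build_predict_request("localhost", "labels", [[0.0, 0.5, 1.0]])
```

A backend-agnostic server such as Roboflow Inference or Triton exposes a similar HTTP surface, so the same pattern applies there with a different URL scheme.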

Metadata

Assignees

Labels

ai: Artificial Intelligence, Machine Learning (ML)
epic: Composed of smaller, releasable features
help wanted: Help with this would be much appreciated!
python 🐍: Python experience required

Projects

Status

Help Wanted

Milestone

No milestone

Relationships

None yet

Development

No branches or pull requests