Replies: 2 comments 1 reply
-
Hi! This is currently not possible, but with the external app ecosystem on the horizon there are possibilities for something like this.
-
I'm facing the same situation. Arguably, the data grave where Nextcloud lives usually has no potent CPU or GPU capabilities at all. The facerecognition app, for instance, can put all the ugly detection software into a separate Docker container that communicates with the Nextcloud app over a network API. In almost all cases this is the way to go.
The most important point is to keep the Nextcloud web server clean of external software scattered inside the Nextcloud application directory. That is the last thing you ever want to deal with when updating. I don't want to think about maintaining all the fragile TensorFlow software alongside the Nextcloud instance holding all my precious data for the next 50 years; I bet this AI stuff won't even be maintained that long. A Docker container can easily be thrown away when the stuff inside goes stale, and it keeps the web server host itself free of proprietary, unmanaged files. Sadly, facerecognition isn't implemented nearly well enough to cover all the features of Recognize.
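As a rough illustration of the pattern described above (heavy detection code in its own container, reached over a network API), here is a minimal Python sketch. The service, the `/detect` endpoint, and the JSON shape are all assumptions chosen for illustration, not facerecognition's or Recognize's actual API.

```python
# Hypothetical stand-in for a detection backend running in its own Docker
# container. The Nextcloud-side app would POST image bytes to /detect and
# store the returned JSON; none of the ML software touches the web server host.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


class DetectionHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/detect":
            self.send_error(404)
            return

        # Read the uploaded image bytes sent by the Nextcloud-side app.
        length = int(self.headers.get("Content-Length", 0))
        image_bytes = self.rfile.read(length)

        # Placeholder for the actual model inference (TensorFlow, GPU, ...).
        result = {"faces": [], "bytes_received": len(image_bytes)}

        body = json.dumps(result).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)


if __name__ == "__main__":
    # Inside Docker, this port would be exposed only to the Nextcloud host.
    HTTPServer(("0.0.0.0", 8085), DetectionHandler).serve_forever()
```

Because the web server only ever talks HTTP to this container, the container can be rebuilt or thrown away without touching the Nextcloud installation.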
-
I have two virtual machines, one with Nextcloud installed and one with a graphics card passed through. I want Recognize to be able to remotely call the GPU on the other virtual machine to handle the hardware acceleration. Is there a tutorial for this method?