
restructure how models are loaded for core-slu-service. #26

@greed2411

Description


This requires discussion on how models are going to be loaded in the future, especially:

  1. Presently, a single model/workflow is loaded (predict_wrapper in endpoints.py) on startup via config.yaml. This happens even before accessing active configs from the builder backend via an API, which doesn't align with fetching configs from builder and then creating an inference function for each of those models. Therefore we need a different PREDICT_API for each of the CLIENT_CONFIGS, and should start rewriting in that direction.
  2. Continuing from 1, this means models must be loaded only after receiving/creating configs from builder-backend. In other words, no models will be loaded on startup (before collecting active/deployed configs from builder). This might be a breaking change for this repository.
  3. Can we break/fork out the repo, since we are serving two masters (dialogy template & core-slu) at the same time? Every move forward we have to think about backward compatibility.
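To make points 1 and 2 concrete, here is a minimal sketch of what a per-client registry could look like: predict functions are built only after client configs arrive from the builder backend, and nothing is loaded at startup. All names here (`PREDICT_APIS`, `load_model`, `register_client_configs`, the config shape) are hypothetical, not the actual repo API:

```python
from typing import Callable, Dict

# Hypothetical registry: one predict function per client config,
# replacing the single predict_wrapper loaded from config.yaml.
PREDICT_APIS: Dict[str, Callable[[str], dict]] = {}


def load_model(config: dict) -> Callable[[str], dict]:
    """Stand-in for real model loading; returns a predict function
    bound to this config's model (assumed "model" key)."""
    model_name = config["model"]

    def predict_fn(utterance: str) -> dict:
        # A real implementation would run inference here.
        return {"model": model_name, "input": utterance}

    return predict_fn


def register_client_configs(client_configs: Dict[str, dict]) -> None:
    """Build a predict API per client config (point 1). Called only
    after configs are fetched from builder-backend, not at startup."""
    for client_id, config in client_configs.items():
        PREDICT_APIS[client_id] = load_model(config)


def predict(client_id: str, utterance: str) -> dict:
    """Dispatch to the client's predict function. No model exists
    until its config has been registered (point 2)."""
    if client_id not in PREDICT_APIS:
        raise KeyError(f"no deployed config for client {client_id!r}")
    return PREDICT_APIS[client_id](utterance)
```

Usage would be something like `register_client_configs({"client_a": {"model": "xlmr-slu"}})` followed by `predict("client_a", "book a flight")`; calling `predict` for an unregistered client fails loudly instead of falling back to a globally loaded model.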

Metadata


Labels

help wanted (Extra attention is needed)
