This repository was archived by the owner on Nov 16, 2023. It is now read-only.

Improve inference performance with loaded TransformerChain ML.NET model #370

Open
@najeeb-kazmi

Description

PR #230 introduced the ability to load and score ML.NET models trained in the new ML.NET TransformerChain serialization format. This was done by checking whether "TransformerChain" exists in the archive members. Currently, this check runs every time the test, predict, predict_proba, and decision_function methods call _predict. Performing the check only once, when the model is loaded, would improve inference performance.
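A minimal sketch of the proposed fix, assuming the model is a zip archive opened with Python's zipfile module. The class and method names (ModelWrapper, _is_transformer_chain, _predict_transformer_chain, _predict_legacy) are illustrative placeholders, not NimbusML's actual API:

```python
from zipfile import ZipFile

class ModelWrapper:
    """Illustrative wrapper that caches the format check at load time."""

    def __init__(self, model_path):
        self.model_path = model_path
        # Check the archive members once, when the model is loaded,
        # instead of re-reading the archive on every _predict call.
        with ZipFile(model_path) as archive:
            self._is_transformer_chain = any(
                "TransformerChain" in name for name in archive.namelist()
            )

    def _predict(self, data):
        # Reuse the cached flag; no archive I/O on the hot path.
        if self._is_transformer_chain:
            return self._predict_transformer_chain(data)
        return self._predict_legacy(data)
```

With this change, test, predict, predict_proba, and decision_function all go through _predict without touching the archive again, since the flag was computed once in the constructor.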
