This repository was archived by the owner on Oct 31, 2023. It is now read-only.
Implement python wrappers for predictions and tune speed #279
Open
Bowid wants to merge 2 commits into facebookresearch:main from
Conversation
1. Predict now uses std::set instead of priority_queue. From my point of view it is better to maintain a set of fixed size and drop predictions with too low a score immediately, rather than storing them all in an ordered manner.
2. Implemented an extension class of StarSpace to make things a bit more Pythonic. It is definitely better to return a value than to pass output parameters for functions to fill. From my point of view, speed is unchanged, since modern C++ compilers return complex values without copying them.
Author
Also, here is a sample; let it be here:

import starwrap

model_path = '/path/to/trained/model'
args = starwrap.args()
model = starwrap.starSpace(args)
model.initFromSavedModel(model_path)

line = input("Enter text for classification: ")
parsed = model.parseDoc(line, ' ')

# predict takes two arguments: the parsed doc and the maximum prediction count.
# The output is a list of tuples in the format (probability, token_id).
tokens = model.predict(parsed, 10)
probabilities = [item[0] for item in tokens]

# token_id is an integer, so it must be rendered to a feature (string).
features = model.renderTokens(tokens)
for feature, probability in zip(features, probabilities):
    print('Feature {f} probability {p}'.format(f=feature, p=probability))