The notebooks in python-topic-model/notebook/ contain no small examples of how to infer the topic distribution for a new document, or how to retrieve it for the documents the model was trained on. For example: given a new document as a list of integers (word ids mapping into voca), how do I obtain its probability distribution over the trained topics? And how do I access the topic distributions of all training documents?
How can this be achieved for, say, LDA or supervised LDA?
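For reference, here is a minimal sketch of the kind of "fold-in" inference I mean. Note this does not use this repository's API (I could not find one for this); `infer_theta` and the toy `phi` are hypothetical stand-ins: it estimates a new document's topic distribution by fixed-point EM under fixed topic-word probabilities, which is one standard way to do it.

```python
import numpy as np

def infer_theta(doc, phi, alpha=0.1, n_iter=50):
    """Estimate a new document's topic distribution theta, holding the
    trained topic-word matrix phi (n_topics x vocab_size) fixed.

    doc is a list of word ids (integers indexing into the vocabulary),
    i.e. the input format described above. Hypothetical helper, not
    part of python-topic-model."""
    n_topics = phi.shape[0]
    theta = np.full(n_topics, 1.0 / n_topics)  # start from uniform
    for _ in range(n_iter):
        # E-step: responsibility of each topic for each word token
        resp = theta[:, None] * phi[:, doc]          # n_topics x len(doc)
        resp /= resp.sum(axis=0, keepdims=True)
        # M-step: new theta from expected topic counts, smoothed by alpha
        theta = resp.sum(axis=1) + alpha
        theta /= theta.sum()
    return theta

# toy "trained" model: 2 topics over a 4-word vocabulary
phi = np.array([[0.4, 0.4, 0.1, 0.1],
                [0.1, 0.1, 0.4, 0.4]])
doc = [0, 0, 1, 2]                 # new document as word ids
theta = infer_theta(doc, phi)      # probability over the 2 topics
```

An example like this (but using the trained model's actual phi, and the sampler's stored document-topic counts for the training documents) in the notebooks would make the API much easier to use.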