Probability distribution for a document #11

Open
@toltoxgh

Description

In the notebooks in python-topic-model/notebook/, there are no small examples showing how to infer the topic distribution for a new document, or for the documents the model was trained on.

Something like passing a list of integers (that map to the words of voca) for a new document and getting back that document's probability distribution over the trained topics, or accessing the topic distributions of all the training documents, roughly as sketched below.
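
To make the request concrete, here is a rough sketch of the kind of helper I mean for the training documents. It assumes a collapsed Gibbs LDA whose trained state exposes a document-topic count matrix; the names `doc_topic_count` and `alpha` are placeholders, not the library's actual API:

```python
import numpy as np

def document_topic_distributions(doc_topic_count, alpha):
    """Turn per-document topic counts from a trained Gibbs LDA into
    probability distributions over the topics.

    doc_topic_count : (n_docs, n_topics) array of sampled topic assignments
    alpha           : Dirichlet prior on the document-topic distribution
    """
    theta = doc_topic_count + alpha              # smooth counts with the prior
    return theta / theta.sum(axis=1, keepdims=True)
```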

How can this be achieved for, say, the LDA or the supervised LDA models?
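
In case it helps to clarify what I mean for a new document: I imagine it could work roughly like the fold-in below, i.e. a short collapsed Gibbs pass over the new document only, with the trained topic-word counts held fixed. The arguments `topic_word_count`, `alpha`, and `beta` are assumptions about what the trained model exposes, not the actual API of python-topic-model:

```python
import numpy as np

def infer_new_document(new_doc, topic_word_count, alpha=0.1, beta=0.01, n_iter=100):
    """Fold a new document (a list of word ids into voca) into a trained LDA.

    topic_word_count : (n_topics, n_vocab) counts from the trained model,
                       kept fixed during inference.
    Returns the document's probability distribution over the trained topics.
    """
    n_topics, n_vocab = topic_word_count.shape
    topic_totals = topic_word_count.sum(axis=1)

    # random initial topic assignment for every token of the new document
    z = np.random.randint(n_topics, size=len(new_doc))
    doc_topic = np.bincount(z, minlength=n_topics)

    for _ in range(n_iter):
        for i, w in enumerate(new_doc):
            doc_topic[z[i]] -= 1
            # conditional p(z_i = k | rest), with the global counts held fixed
            p = (doc_topic + alpha) * \
                (topic_word_count[:, w] + beta) / (topic_totals + n_vocab * beta)
            z[i] = np.random.choice(n_topics, p=p / p.sum())
            doc_topic[z[i]] += 1

    theta = doc_topic + alpha
    return theta / theta.sum()

# usage (hypothetical): theta = infer_new_document([3, 17, 42], trained_topic_word_count)
```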
