Description
Hi,
I'm quite new to topic modelling and I'm working on a project with a very large corpus. Performing LDA with a Gibbs sampler is out of the question (at least for cross-validation, due to computational constraints), so WarpLDA is the only viable option.
I've been trying to select the number of topics (k) using various measures. I tried perplexity, but it just keeps decreasing as k increases and I couldn't identify a clear cut-off or elbow. I then tried coherence measures, scaled them, and plotted them against each other. Can anyone help me understand what exactly these measures are telling us? Is there any particular k that seems of interest?
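In case it helps frame the question: since perplexity (lower is better) and coherence (higher is better) point in opposite directions, one common trick is to min-max scale both onto [0, 1] and look at where a combined score peaks. Here's a minimal sketch of that idea; the score arrays are hypothetical placeholders, not my actual results:

```python
import numpy as np

# Hypothetical per-k scores; substitute your own cross-validated values.
ks = np.array([10, 20, 30, 40, 50, 60])
perplexity = np.array([3200.0, 2600.0, 2300.0, 2150.0, 2080.0, 2050.0])  # keeps decreasing
coherence = np.array([0.42, 0.51, 0.55, 0.53, 0.49, 0.46])               # peaks then falls

def minmax(x):
    """Scale a score vector to [0, 1] so different measures are comparable."""
    return (x - x.min()) / (x.max() - x.min())

# Lower perplexity is better, higher coherence is better, so flip perplexity's sign.
combined = minmax(coherence) - minmax(perplexity)
best_k = int(ks[np.argmax(combined)])
```

With these made-up numbers the combined score would peak around the k where coherence is near its maximum and perplexity has mostly flattened out, which is roughly what an elbow heuristic is after.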
Also, any advice on how I should approach this would be fantastic. Below are the values I used for the other model parameters:
doc_topic_prior = 0.1
topic_word_prior = 0.01
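One more thought on the "no clear elbow" problem: an elbow can be located mechanically as the point on the perplexity curve farthest from the straight line joining its endpoints (a kneedle-style heuristic). A minimal sketch, assuming a monotonically decreasing curve and using hypothetical numbers:

```python
import numpy as np

def elbow_k(ks, scores):
    """Pick the k whose point lies farthest (perpendicularly) from the chord
    joining the first and last points of a monotonically decreasing curve."""
    ks = np.asarray(ks, dtype=float)
    scores = np.asarray(scores, dtype=float)
    # Normalise both axes to [0, 1] so the distance is scale-invariant.
    x = (ks - ks[0]) / (ks[-1] - ks[0])
    y = (scores - scores[0]) / (scores[-1] - scores[0])
    # Perpendicular distance from the chord through (0, 0) and (1, 1) is
    # |x - y| / sqrt(2); the constant factor does not change the argmax.
    return int(ks[np.argmax(np.abs(x - y))])

# Hypothetical perplexity values, flattening out as k grows.
k_elbow = elbow_k([10, 20, 30, 40, 50, 60],
                  [3200, 2600, 2300, 2150, 2080, 2050])
```

This won't rescue a curve that genuinely has no knee, but it at least makes the choice reproducible instead of eyeballed.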