A word embedding algorithm that fits in memory without feature loss: for better performance and memory optimization #2066
Unanswered
HebaGamalElDin asked this question in Q&A · 0 replies
What are some examples of word embedding algorithms that are memory-efficient without any feature loss?
Hint: TF-IDF on my data (500000 rows) produces an array of shape (500000, 44754) and raises a memory allocation error. If I set the max_features parameter, I lose very important features; in my case I can't afford to drop even a single word from the dataset.
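One possible angle, sketched under the assumption that scikit-learn's TfidfVectorizer is being used: the vectorizer already returns a scipy.sparse CSR matrix, so a (500000, 44754) TF-IDF matrix often fits in memory as long as it is never densified (e.g. via `.toarray()` or `np.asarray`), since only the nonzero entries are stored. The small corpus below is illustrative only.

```python
# Hedged sketch: keep the full vocabulary (no max_features) but stay sparse.
from scipy import sparse
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy corpus standing in for the real 500000-row dataset (illustrative only).
corpus = [
    "memory efficient word embeddings",
    "tfidf gives a sparse matrix",
    "never call toarray on large data",
]

vectorizer = TfidfVectorizer()        # no max_features: every term is kept
X = vectorizer.fit_transform(corpus)  # CSR sparse matrix, NOT a dense array

print(sparse.issparse(X))             # the result is sparse
# Actual memory is roughly nnz * (8B value + 4B index) + row pointers,
# far below rows * cols * 8B for an equivalent dense float64 array.
sparse_bytes = X.data.nbytes + X.indices.nbytes + X.indptr.nbytes
dense_bytes = X.shape[0] * X.shape[1] * 8
print(sparse_bytes, "<", dense_bytes)
```

If even the sparse matrix is too large, scikit-learn's HashingVectorizer offers a fixed-memory alternative, though hashing can collide distinct terms into one feature, which may not satisfy a strict no-feature-loss requirement.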