From 9736fec77dd6b6ec072760531998462bd4d77c65 Mon Sep 17 00:00:00 2001
From: Yash Sonar <46718837+Yash-567@users.noreply.github.com>
Date: Sat, 8 Aug 2020 11:09:39 +0530
Subject: [PATCH] Updated python-module.md

Added a clearer explanation of the model compression task for new
developers.
---
 docs/python-module.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/docs/python-module.md b/docs/python-module.md
index 73a4d3c89..504ae75d4 100644
--- a/docs/python-module.md
+++ b/docs/python-module.md
@@ -146,6 +146,7 @@ For more information about text classification usage of fasttext, you can refer
 ### Compress model files with quantization
 
 When you want to save a supervised model file, fastText can compress it in order to have a much smaller model file by sacrificing only a little bit performance.
+This compression combines vector quantization with feature selection: features that contribute little to performance (down to a chosen cutoff) are pruned, which yields a much smaller model with fewer redundancies.
 
 ```py
 # with the previously trained `model` object, call :
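The line this patch adds mentions vector quantization. As a rough illustration of the idea (not fastText's actual implementation, which uses product quantization learned with k-means), here is a toy sketch: each full-precision vector is replaced by the index of its nearest codebook centroid, so the model stores one small integer per vector instead of several floats. The names `nearest`, `codebook`, and the data values are all made up for this sketch.

```python
def nearest(vec, codebook):
    """Return the index of the codebook centroid closest to vec (squared L2)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(codebook)), key=lambda i: dist2(vec, codebook[i]))

# A tiny "model": four embedding vectors.
vectors = [(0.9, 1.1), (1.0, 0.9), (-1.0, -1.1), (-0.9, -1.0)]

# A two-entry codebook; a real system would learn it with k-means.
codebook = [(1.0, 1.0), (-1.0, -1.0)]

# Compressed form: one small integer per vector instead of two floats --
# this substitution is where the size saving comes from.
codes = [nearest(v, codebook) for v in vectors]
print(codes)  # [0, 0, 1, 1]

# Lossy decompression: look each centroid back up.
reconstructed = [codebook[c] for c in codes]
```

In the real API that this doc section goes on to describe, the compression is a single call on a trained supervised model, e.g. `model.quantize(input=train_data, retrain=True)` followed by `model.save_model("model.ftz")`.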