
Description
There is a bug in the logic for removing titles that contain 'bad words'. Instead of checking whether a bad word appears among the tokens of the title, the check is done against the whole title string.
As a result, every instance whose title contains the bigram 'ap' (or any other bad word as a substring) is removed. So all titles containing e.g. the word 'Japan' are dropped from the training set (approximately 700K perfectly fine instances).
The source and target vocabularies are then built from this training set. Because the target vocabulary does not contain 'Japan' and the bad-word filter is not applied to the test set, there are test instances where the input includes 'Japan' but the output renders it as 'UNK'. This is why there are so many UNKs in the test summaries.
# buggy check: 'bad in title.lower()' is a substring test on the whole title,
# so 'ap' matches 'Japan' ('bad_words' here stands in for the actual filter list)
if any((bad in title.lower() for bad in bad_words)):
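
For comparison, a minimal sketch of the intended token-level check (title and bad_words are placeholder names, not necessarily the ones used in the repo): split the title into tokens first, so a bad word like 'ap' no longer matches 'Japan'.

def contains_bad_word(title, bad_words):
    # Compare against whole tokens instead of substring containment.
    tokens = set(title.lower().split())
    return any(bad in tokens for bad in bad_words)

# 'ap' is a substring of 'Japan' but not a token, so this title is kept.
print(contains_bad_word("Japan raises interest rates", ["ap"]))  # -> False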