Technical Summary: capstone_technical_summary.pdf
code: cap_phase_1.ipynb
With the increasing use of online platforms for communication, the prevalence of toxic language and online abuse has become a major concern. Promoting a safe and inclusive online environment requires an efficient and effective toxic-language classification model that can identify and categorize toxic language in comments. The aim of this capstone project is to develop a model that accurately classifies comments as toxic or non-toxic, which can be used to improve content moderation, sentiment analysis, and chatbot performance.
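To make the binary toxic/non-toxic classification task concrete, the sketch below shows a minimal baseline: a multinomial Naive Bayes classifier over bag-of-words counts with Laplace smoothing, trained on a tiny illustrative dataset. This is not the project's actual model (see cap_phase_1.ipynb for that); the class name, tokenizer, and toy comments here are all assumptions made for illustration.

```python
import math
from collections import Counter

def tokenize(text):
    """Naive whitespace tokenizer; a real pipeline would also strip punctuation."""
    return text.lower().split()

class NaiveBayesToxicClassifier:
    """Illustrative multinomial Naive Bayes for binary toxic/non-toxic labels."""

    def fit(self, comments, labels):
        self.classes = sorted(set(labels))
        self.class_counts = Counter(labels)
        self.word_counts = {c: Counter() for c in self.classes}
        for text, label in zip(comments, labels):
            self.word_counts[label].update(tokenize(text))
        self.vocab = {w for c in self.classes for w in self.word_counts[c]}
        return self

    def predict(self, text):
        n_docs = sum(self.class_counts.values())
        best_class, best_score = None, float("-inf")
        for c in self.classes:
            # Log prior for the class
            score = math.log(self.class_counts[c] / n_docs)
            total = sum(self.word_counts[c].values())
            for tok in tokenize(text):
                # Laplace (add-one) smoothing avoids zero probability
                # for words unseen in this class during training
                score += math.log(
                    (self.word_counts[c][tok] + 1) / (total + len(self.vocab))
                )
            if score > best_score:
                best_class, best_score = c, score
        return best_class

# Toy training data (invented for illustration only)
comments = [
    "you are an idiot",
    "shut up you fool",
    "have a great day",
    "thanks for the help",
]
labels = ["toxic", "toxic", "non-toxic", "non-toxic"]

clf = NaiveBayesToxicClassifier().fit(comments, labels)
print(clf.predict("you idiot"))        # expected to lean toxic
print(clf.predict("have a nice day"))  # expected to lean non-toxic
```

In practice the notebook's model would train on a much larger labeled corpus, and stronger baselines (TF-IDF features with logistic regression, or a fine-tuned transformer) typically outperform plain Naive Bayes on this task.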