Using deep learning to detect the level of toxicity of a particular comment. Nowadays, derogatory comments are frequently directed at others, not only offline but also extensively in online environments such as social networking websites and online communities. An identification-and-prevention system is therefore needed across social networking websites, applications, and communities in the digital world. In such a system, the identification block should detect negative online behaviour and signal the prevention block to take action accordingly. This study aims to analyse any piece of text and detect different types of toxicity, such as obscenity, threats, insults, and identity-based hatred.
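The sketch below illustrates one common way to frame this task: a multi-label deep learning classifier in TensorFlow/Keras, where each output unit scores one toxicity type with a sigmoid (a single comment can carry several types at once). The label names, vocabulary size, sequence length, and architecture here are illustrative assumptions, not necessarily the configuration used in this repository.

```python
# Minimal sketch of a multi-label toxicity classifier (assumed setup, not the
# repository's exact model). Requires TensorFlow 2.x.
import tensorflow as tf
from tensorflow.keras import layers

# Hypothetical label set matching the toxicity types described above.
LABELS = ["toxic", "obscene", "threat", "insult", "identity_hate"]
MAX_TOKENS = 20000   # assumed vocabulary size
SEQ_LEN = 200        # assumed maximum comment length in tokens

# Maps raw comment strings to fixed-length integer token sequences.
vectorizer = layers.TextVectorization(max_tokens=MAX_TOKENS,
                                      output_sequence_length=SEQ_LEN)

def build_model() -> tf.keras.Model:
    """Embedding -> bidirectional LSTM -> dense sigmoid head, one unit per label."""
    model = tf.keras.Sequential([
        layers.Input(shape=(SEQ_LEN,), dtype="int64"),
        layers.Embedding(input_dim=MAX_TOKENS + 1, output_dim=32),
        layers.Bidirectional(layers.LSTM(32)),
        layers.Dense(128, activation="relu"),
        # Sigmoid (not softmax): each label is scored independently because a
        # comment may be, e.g., both obscene and threatening.
        layers.Dense(len(LABELS), activation="sigmoid"),
    ])
    model.compile(loss="binary_crossentropy", optimizer="adam",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    # Toy data only, to show the expected input/output shapes.
    comments = tf.constant(["You are wonderful", "I will hurt you"])
    targets = tf.constant([[0, 0, 0, 0, 0], [1, 0, 1, 0, 0]], dtype="float32")

    vectorizer.adapt(comments)          # fit the vocabulary on the training text
    model = build_model()
    model.fit(vectorizer(comments), targets, epochs=1)

    preds = model.predict(vectorizer(tf.constant(["some new comment"])))
    print(dict(zip(LABELS, preds[0].round(2))))  # per-label toxicity scores in [0, 1]
```

In a deployment like the one described above, the per-label scores from such a model would serve as the identification block's signal, with the prevention block applying thresholds to decide when to flag or hide a comment.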