# Toxic Comment Classifier

A Python library for classifying toxic comments using deep learning.
It scores comments against six toxicity labels: toxic, severe toxic, obscene, threat, insult, and identity hate.

## 📦 Installation

```bash
pip install toxic-comment-classifier
```

## 🚀 Usage

### 🔹 Import and Initialize the Model

```python
from toxic_classifier.model import ToxicCommentClassifier

# Load the classifier
model = ToxicCommentClassifier()
```

### 🔹 Classify a Single Comment

```python
text = "You are so dumb and stupid!"
scores = model.classify(text)
print(scores)
```

Example Output:

```
{'toxic': 0.9889402985572815,
 'severe_toxic': 0.07256772369146347,
 'obscene': 0.620429277420044,
 'threat': 0.01934845559298992,
 'insult': 0.8664075136184692,
 'identity_hate': 0.04072948172688484}
```
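
Each label is scored independently, so a single comment can trip several labels at once. Here is a minimal sketch for turning these scores into binary flags; the 0.5 cutoff is an illustrative assumption, not a library default:

```python
# Turn per-label probabilities into binary flags.
# The 0.5 threshold is an assumption for illustration; tune it per label.
THRESHOLD = 0.5

flagged = [label for label, score in scores.items() if score >= THRESHOLD]
print(flagged)  # for the comment above: ['toxic', 'obscene', 'insult']
```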

### 🔹 Get Overall Toxicity Score

```python
toxicity = model.predict(text)
print(f"Overall Toxicity Score: {toxicity:.4f}")
```

Example Output:

```
Overall Toxicity Score: 0.4347
```
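
The scalar score makes a simple moderation gate easy to write. A minimal sketch follows; the `is_toxic` helper and the 0.4 cutoff are assumptions for illustration, not part of the library:

```python
# Hypothetical helper: gate a comment on the overall score.
# The 0.4 cutoff is an illustrative assumption; calibrate it on your data.
def is_toxic(comment: str, threshold: float = 0.4) -> bool:
    return model.predict(comment) >= threshold

print(is_toxic("You are so dumb and stupid!"))  # True (example score: 0.4347)
```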

### 🔹 Classify Multiple Comments

```python
texts = [
    "I hate you so much!",
    "This is wonderful news.",
    "You're disgusting!",
    "Absolutely love your energy!",
    "You're the worst person ever!",
    "Have a nice day :)"
]

scores = model.predict_batch(texts)

for txt, score in zip(texts, scores):
    print(f"Text: {txt} --> Toxicity Score: {score:.4f}")
```

Example Output:

```
Text: I hate you so much! --> Toxicity Score: 0.1395
Text: This is wonderful news. --> Toxicity Score: 0.0013
Text: You're disgusting! --> Toxicity Score: 0.3110
Text: Absolutely love your energy! --> Toxicity Score: 0.0088
Text: You're the worst person ever! --> Toxicity Score: 0.0937
Text: Have a nice day :) --> Toxicity Score: 0.0115
```
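
Because `predict_batch` returns one score per input in order, a batch can be ranked to surface the most problematic comments first. A minimal sketch building on the variables above:

```python
# Rank comments from most to least toxic, e.g. for a human review queue.
ranked = sorted(zip(texts, scores), key=lambda pair: pair[1], reverse=True)

for txt, score in ranked[:3]:  # top 3 candidates for review
    print(f"{score:.4f}  {txt}")
```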
