This repository has been archived by the owner on Oct 10, 2024. It is now read-only.
[FEATURE] Improve notifications by context checking #761
Open
Description
The TensorFlow.js toxicity model could be used to check notification text before it is shown: it detects whether text contains toxic content such as threatening language, insults, obscenities, identity-based hate, or sexually explicit language. A rough sketch of how this might look follows the link below.
https://github.com/tensorflow/tfjs-models/blob/master/toxicity/README.md
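As one possible approach, here is a minimal sketch of gating a notification through the toxicity model before displaying it. It assumes the `@tensorflow-models/toxicity` npm package from the linked README; `showNotification` and `notifyIfClean` are hypothetical helper names, not part of this repository.

```ts
// Registers a TF.js backend (peer dependency of the toxicity package).
import '@tensorflow/tfjs';
import * as toxicity from '@tensorflow-models/toxicity';

const THRESHOLD = 0.9; // minimum probability for a label to count as a match

// Load the model once and reuse it for every notification.
const modelPromise = toxicity.load(THRESHOLD, []); // [] = load all seven labels

// Hypothetical helper: only show the notification if no toxicity label matches.
async function notifyIfClean(text: string): Promise<void> {
  const model = await modelPromise;
  const predictions = await model.classify([text]);

  // Each prediction covers one label (insult, threat, obscene, ...);
  // results[0].match is true when the probability exceeds THRESHOLD.
  const isToxic = predictions.some(p => p.results[0].match === true);

  if (isToxic) {
    console.warn('Notification suppressed: text flagged as toxic');
    return;
  }
  showNotification(text);
}

// Hypothetical stand-in for the app's real notification call.
function showNotification(text: string): void {
  console.log(`Notification: ${text}`);
}

notifyIfClean('Your build finished successfully.');
```

The exact wiring into the existing notification path would of course depend on where notification text is produced in this project; the sketch only illustrates the classify-then-display flow described in the toxicity README.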
Screenshots
Additional information
No response