This repository has been archived by the owner on Oct 10, 2024. It is now read-only.

[FEATURE] Improve notifications by context checking #761

Open · opened by @eddiejaoude

Description

The toxicity model detects whether text contains toxic content such as threatening language, insults, obscenities, identity-based hate, or sexually explicit language.

https://github.com/tensorflow/tfjs-models/blob/master/toxicity/README.md
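For context, the README above shows how the model is loaded and queried. A minimal sketch of that usage (the 0.9 threshold and the sample sentence are illustrative values, not from this issue; `@tensorflow/tfjs` must also be installed alongside the model package):

```js
const toxicity = require('@tensorflow-models/toxicity');

// Predictions with confidence below this threshold come back with
// `match: null` (undecided) instead of true/false.
const threshold = 0.9;

toxicity.load(threshold).then((model) => {
  const sentences = ['you suck'];

  model.classify(sentences).then((predictions) => {
    // `predictions` contains one entry per label (e.g. `insult`,
    // `identity_attack`, `toxicity`), each with per-sentence
    // probabilities and a boolean `match` flag.
    console.log(JSON.stringify(predictions, null, 2));
  });
});
```

Presumably the per-label `match` flags would drive the context check before a notification is sent.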

Screenshots

(screenshot attached: 2022-10-27 at 23:45:35)

Additional information

No response
