Identify-and-classify-toxic-online-comments

This project builds a multi-headed model capable of detecting different types of toxicity, such as threats, obscenity, insults, and identity-based hate. For learning purposes, baseline models (Naive Bayes, text CNN, and LSTM) were built and tested first. More advanced models such as BERT were then used to further improve accuracy.
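To illustrate the "multi-headed" idea at the Naive Bayes baseline level: each toxicity label gets its own independent binary classifier over a shared bag-of-words representation. This is a minimal pure-Python sketch with illustrative label names and toy data, not the repo's actual implementation:

```python
# Minimal sketch: one independent binary Naive Bayes "head" per toxicity
# label, all sharing the same tokenization. Labels and data are toy examples.
from collections import Counter
import math

def tokenize(text):
    return text.lower().split()

class BinaryNB:
    """Bernoulli-style Naive Bayes for a single binary label,
    with Laplace (add-alpha) smoothing."""
    def __init__(self, alpha=1.0):
        self.alpha = alpha

    def fit(self, docs, ys):
        self.pos, self.neg = Counter(), Counter()
        self.n_pos = sum(ys)
        self.n_neg = len(ys) - self.n_pos
        for doc, y in zip(docs, ys):
            (self.pos if y else self.neg).update(set(tokenize(doc)))
        return self

    def predict(self, doc):
        # Compare smoothed log-posterior under the positive vs negative class.
        lp = math.log(self.n_pos + self.alpha)
        ln = math.log(self.n_neg + self.alpha)
        for t in tokenize(doc):
            lp += math.log((self.pos[t] + self.alpha) / (self.n_pos + 2 * self.alpha))
            ln += math.log((self.neg[t] + self.alpha) / (self.n_neg + 2 * self.alpha))
        return int(lp > ln)

# One head per label, trained on the same documents.
docs = ["i will hurt you", "you are an idiot",
        "have a nice day", "lovely weather today"]
labels = {"toxic":  [1, 1, 0, 0], "threat": [1, 0, 0, 0],
          "insult": [0, 1, 0, 0], "obscene": [0, 0, 0, 0]}
heads = {name: BinaryNB().fit(docs, ys) for name, ys in labels.items()}
pred = {name: clf.predict("i will hurt you") for name, clf in heads.items()}
# pred → {"toxic": 1, "threat": 1, "insult": 0, "obscene": 0}
```

The same per-label-head structure carries over to the neural models: a shared encoder (CNN, LSTM, or BERT) followed by one sigmoid output per label, since the labels are not mutually exclusive.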

Generate Config

himl hiera/model=bert/ --output-file config/bert_config.yaml
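himl merges a hierarchy of YAML layers into one flat config file. The actual keys depend on this repo's hiera/ tree, but a BERT override layer might look like the following (illustrative values only, not taken from the repo):

```yaml
# hiera/model=bert/model.yaml -- hypothetical override layer
model:
  name: bert
  pretrained: bert-base-uncased   # illustrative checkpoint name
train:
  batch_size: 16
  epochs: 3
```

Layers lower in the hierarchy override shared defaults, and the merged result is written to config/bert_config.yaml by the command above.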

Run

python run.py --config config/bert_config.yaml 

About

A multi-headed model that detects different types of toxicity (threats, obscenity, insults, and identity-based hate) better than Perspective's current models.
