I310D-bias

CONTENT WARNING: both bias_analysis.ipynb and labeled_and_scored_comments.csv contain abusive and derogatory language.

The latter is not used in my analysis but is provided as an example of how the model works.

Purpose

The purpose of this project is to test the Perspective model for racial bias in how it scores slurs against different groups.

Hypothesis

Slurs against the European group will have a lower average identity_hate score than slurs against the African, Asian, and Native American groups.

Methodology

To test this hypothesis, I selected slurs and epithets from Wikipedia, applying certain omission criteria, and then filtered out the terms that did not work in API queries before running the analysis. bias_analysis.ipynb goes into further detail on how this was done. A rough sketch of scoring a single term through the Perspective API is shown below.
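
As a hedged illustration (not the exact code in bias_analysis.ipynb), a single term can be scored through the Perspective API roughly as follows. The attribute name (`IDENTITY_ATTACK`), API key handling, and score lookup are assumptions based on the public Perspective API; the notebook may request a different attribute for its identity_hate score.

```python
# Minimal sketch of querying the Perspective API for one term.
# Assumes a valid API key in API_KEY; not the project's actual code.
from googleapiclient import discovery

API_KEY = "YOUR_API_KEY"  # placeholder

client = discovery.build(
    "commentanalyzer",
    "v1alpha1",
    developerKey=API_KEY,
    discoveryServiceUrl="https://commentanalyzer.googleapis.com/$discovery/rest?version=v1alpha1",
    static_discovery=False,
)

def score_term(term: str) -> float:
    """Return the IDENTITY_ATTACK summary score (0-1) for a single term."""
    request = {
        "comment": {"text": term},
        "requestedAttributes": {"IDENTITY_ATTACK": {}},
    }
    response = client.comments().analyze(body=request).execute()
    return response["attributeScores"]["IDENTITY_ATTACK"]["summaryScore"]["value"]
```

Terms that raised errors or returned no score in queries like this were the ones filtered out before analysis.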

Results

The Perspective model did score slurs against European people lower on average than slurs against the other groups. Deeper analysis also surfaced concerning false negatives.

Personal Bias

Notably, my analysis of the API's responses is shaped by my experiences as a white American. While I have actively put effort into understanding the experiences of minority groups in America, that is not the same as living through them. As such, there may be nuances and implications I have missed in my analysis.
