Dominick Lim


Minority Report

Teammate: Torin Rudeen

When it comes to AI, I'm not worried about it taking over the world or destroying mankind. I do worry, though, that we will put too much faith in it. It's easy to believe that machine learning algorithms must be fair; after all, they're just math. However, machine learning algorithms can reflect, or even magnify, discrimination in the data they are trained on. For example, criminal justice data is biased because racial minorities are disproportionately the subject of police attention. A naive predictor would likely learn those biases and perpetuate them.

The CS229 final project is an open-ended opportunity to apply machine learning algorithms to real-world tasks. For our final project, Minority Report, my partner and I analyzed the behavior of a Naive Bayes classifier on a racially diverse dataset and found that it had a much higher false positive rate on the minority class than on the majority class. We devised two approaches that equalize the false positive rates across classes while keeping the false negative rate as low as possible.
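The paper describes our two approaches in detail. As a rough illustration of the general idea only (not our actual method, and not our actual code, which was written in GNU Octave), here is a minimal Python sketch: it trains a Naive Bayes classifier on made-up synthetic data with a group attribute, measures each group's false positive rate under a shared decision threshold, and then equalizes the rates by picking a per-group threshold from the score distribution of that group's negatives. All column names and data below are hypothetical.

```python
# Minimal sketch (hypothetical data, not the project's Octave code):
# measure and then equalize per-group false positive rates.
import numpy as np
import pandas as pd
from sklearn.naive_bayes import GaussianNB

def false_positive_rate(y_true, y_pred):
    """FPR = fraction of actual negatives that are predicted positive."""
    negatives = y_true == 0
    return np.mean(y_pred[negatives]) if negatives.any() else 0.0

# Hypothetical dataset: two numeric features, a binary label, a group column.
rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "x1": rng.normal(size=n),
    "x2": rng.normal(size=n),
    "group": rng.choice(["majority", "minority"], size=n, p=[0.8, 0.2]),
})
df["label"] = (df.x1 + df.x2 + rng.normal(size=n) > 0).astype(int)

clf = GaussianNB().fit(df[["x1", "x2"]], df["label"])
df["score"] = clf.predict_proba(df[["x1", "x2"]])[:, 1]

# Baseline: a single shared threshold can give the groups very different FPRs.
for g, sub in df.groupby("group"):
    fpr = false_positive_rate(sub["label"].to_numpy(),
                              (sub["score"] > 0.5).to_numpy())
    print(f"{g}: FPR at shared threshold = {fpr:.3f}")

# One equalization strategy: set each group's threshold at the score quantile
# that pins its FPR to a common target (here, 5%).
target_fpr = 0.05
thresholds = {
    g: sub.loc[sub.label == 0, "score"].quantile(1 - target_fpr)
    for g, sub in df.groupby("group")
}
df["pred"] = df.apply(lambda r: int(r["score"] > thresholds[r["group"]]), axis=1)
for g, sub in df.groupby("group"):
    fpr = false_positive_rate(sub["label"].to_numpy(), sub["pred"].to_numpy())
    print(f"{g}: FPR after per-group thresholds = {fpr:.3f}")
```

Per-group thresholding is just one common way to equalize false positive rates; the trade-off, as in our project, is how much the false negative rate rises as the thresholds move.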

Please see our project paper and poster.

Built with: GNU Octave, pandas (Python Data Analysis Library)