Algorithms are everywhere, tirelessly learning our behaviors in order to predict our choices. As humans, we love the convenience this provides, welcoming this new industrial revolution by applying the same “data-oriented” mindset to every aspect of our lives with absolute trust. After all, if it’s based on math and the scientific method, what could possibly go wrong?
A few weeks ago, Cathy O’Neil, a mathematician, former Wall Street quant, and math activist, came to the Berkeley Social Science Matrix to introduce her new book Weapons of Math Destruction. There, she recounted examples of distortions caused by algorithms, most notably the distortion of education in New York in 2010.
In 2010, the city of New York introduced the value-added model (VAM) of teacher evaluation in order to raise scores in primary education and to compete with countries such as China and South Korea. As a result, standardized testing increased, and teachers were evaluated by how much they contributed to the improvement of their students. The model gave each student a projected score each semester based on past performance; if the students failed to achieve it, the teacher received a lower evaluation score. The VAM, touted for its scientific and objective nature, seemed a reasonable way to measure teachers’ performance and effectiveness. But was it really?
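The scoring logic described above can be sketched in a few lines. To be clear, this is a hypothetical toy version: the names, the averaging projection, and the classroom data are all invented for illustration, and the actual New York model was a far more complex (and never fully published) regression.

```python
# Hypothetical sketch of value-added scoring; NOT the actual NYC model,
# whose formula was far more complex and never fully disclosed.

def projected_score(past_scores):
    """Project a student's next test score from past performance
    (here: a simple average; the real model used regression)."""
    return sum(past_scores) / len(past_scores)

def teacher_value_added(students):
    """Average the gap between each student's actual and projected
    score. A low or negative value counts against the teacher."""
    gaps = [s["actual"] - projected_score(s["past"]) for s in students]
    return sum(gaps) / len(gaps)

classroom = [
    {"past": [70, 75], "actual": 80},  # beat the projection by 7.5
    {"past": [90, 92], "actual": 85},  # missed the projection by 6.0
]
print(teacher_value_added(classroom))  # 0.75
```

Even this toy version shows the model’s blind spot: the teacher’s entire evaluation hangs on the gap between two noisy test scores, with nothing in the formula for anything else happening in the student’s life.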
After collecting historical teacher evaluation data, O’Neil showed that the evaluation scores were essentially randomly distributed, with no obvious trend. In other words, a teacher’s chance of getting fired was like the chance of pulling a winning ticket from a lottery: a matter of luck, not reason. The model also failed to account for other factors that influence student performance, such as a student’s emotional state and physical health.
Time and time again, we see that fitting lively kids into cold and, more often than not, oversimplified statistics risks distorting what is truly important in education. Do we really want our kids to become test-taking machines? Do we want teachers focusing solely on scores just so they can keep their jobs? Who, then, will inspire a passion for knowledge and provide emotional and psychological guidance during kids’ critical formative years? These are the questions the algorithms failed to model, and hence failed to answer.
At the end of the day, the movement to use algorithms to sharpen the competitiveness of the teaching workforce and raise student performance ended up driving great teachers out of the profession. Some of those who desperately wanted to stay took desperate measures and leaked exams. Kids in poor neighborhoods were left without good teachers, while the teachers of students who could afford after-school tutoring were spared, and not necessarily because of outstanding skill. The algorithm was biased, even accidentally classist, and no matter how well designed it was, the outcome remained: algorithms were widening an already large inequality gap by taking the right to an education away from those who needed it most.
Education is merely one of many cases where algorithms failed to capture important characteristics of society. In the rush toward “data-oriented” policymaking, banks use credit-scoring algorithms that discriminate against the poor, and police departments use predictive algorithms that discriminate against communities of color. Maybe it’s time to pause and ask whether we can really quantify every single aspect of our lives. The question is not whether we should use algorithms but how to use them well. The technology is already here; what we need is to raise awareness of the harm algorithms can cause and learn to use them to build a fairer society.