The foremost data activist of our time, Dr Cathy O’Neil was the keynote speaker for the Algorithmic Inequality track of SHIFT 2018. Founder of ORCAA, a company that specializes in algorithmic auditing and risk consulting, and author of the best-selling Weapons of Math Destruction, O’Neil works to expose the malicious effects some algorithms have on society.
While algorithms are undeniably a great tool that can help us overcome obstacles that a generation ago were insurmountable, O’Neil warns us that left alone and unmonitored, they have the power to wreak havoc on our society – to become weapons of math destruction. In other words, the problem is not that algorithms exist; it is how, where, why and by whom the data is used.
O’Neil aims to shed light on some of the dangers hidden right under our noses, because that is precisely what they are: hidden. Algorithms run behind the scenes within automated decision-making processes, and often we are not aware of any of this happening. These processes are used in social media and targeted marketing, but also by institutions such as insurance companies, schools, banks and even courts of justice that we trust implicitly to treat us fairly when it comes to decisions like who will be approved for a mortgage.
Because they are man-made, algorithms may reflect the prejudices held by the people who created them. As long as there is a human component in making decisions, these prejudices can be weeded out during the process to promote equality. However, when the whole process is automated without carefully considering all of the many variables involved, it becomes prone to making discriminatory, even racist, decisions, making lucky people even luckier and the unlucky ones, in turn, unluckier.
Automated decision making is not something we should stop entirely, but we must keep our eyes open while doing it. O’Neil’s goal is to encourage people to ask not whether an algorithm is effective, but whether it is lawful and fair, and what information was used to make a decision. The problem is that the phenomenon is new, the terminology is ambiguous and most people lack sufficient understanding of the mathematical models involved. More fundamentally, it is hard to detect a flaw in the system when on the surface everything seems to be working just fine.
O’Neil herself realized that something was wrong when she went to work in finance in early 2007. She saw the global financial crisis unfolding at the time as a garish example of algorithms, and of people’s trust in mathematics, being misused. As a mathematician, she felt offended and decided to change careers to something less harmful, taking up data science. Instead of finding relief in her new career, however, O’Neil soon discovered that the same algorithmic problems were happening all across society. Even worse, they were less obvious, making them all the more dangerous: hidden from plain view, they were silently undermining the whole idea of equal opportunities for all.
O’Neil took it upon herself to break the silence and raise public awareness about our failures in automated decision making, hoping to create pressure that would work towards the creation of better algorithms. According to her, at this moment the bar tends to be set low: anything more accurate than random can and will be used to improve a company’s performance, resulting in a Wild West of algorithms.
The age of the algorithm caught us off guard, and now we have to catch up with standards and regulations. O’Neil has introduced a model for assessing the quality of algorithms, called the ethical matrix*. She also calls for a Hippocratic oath for data scientists*, as well as an independent third party, an “FDA for algorithms”*, to check the legality and fairness of our decision-making machinery. The challenge is that such an authority would need to understand both equality and the technical aspects of algorithms, and at the moment it is still difficult to find experts versed in both.
To get things started, O’Neil founded ORCAA to help small and medium-sized businesses that want to be able to prove the appropriateness and lawfulness of their algorithms – much like food producers may work towards earning Fair Trade and Organic labels. While it is the big fish that she hopes to catch in the end, making selling points out of fairness and accountability is a good start. A demand for transparency is the first step towards algorithmic equality.
* For more details, read O’Neil’s interview in Wired.