Be The Change — Our Actions to Start Addressing Systemic Racism in A.I.
We believe in action. It took a little while, but we found an action that our business can take to support #BLM. Today, we are providing a guide, The 5 Steps to Create an Ethics Program at an A.I. Driven Company, for free (provided below, after the explanation).
As a former colleague stated in a recent LinkedIn post, Tiffani Ashley Bell’s blog post on Medium should be required reading. Ms. Bell states, “I am reminded of Google’s photo categorization algorithms that classified Black people as gorillas. A more diverse team would have caught that before it went live.” Bias and discrimination are built into A.I. algorithms. Period. The team of engineers building and retraining the models needs to be diverse and inclusive. If an engineer doesn’t know to include photos of Black people in image recognition training data, then the result will perpetuate bias and discrimination.
Let’s not forget the most recent company to come under criticism, Clearview AI, which scraped public face data to build its A.I. tools. This company’s services are used by law enforcement. Customers usually contribute to the accuracy of A.I. models by providing feedback. Ideally, that feedback comes from a diverse and inclusive customer base who help identify and remove bias.
But bias can also be trained into the models through the training data itself. We support the #ShutDownSTEM strike, which calls for a day of silence today in support of #BLM, because academic research must remove bias from its published work. Commercial businesses build products on these published papers, creating an endless loop of systemic bias. Working uncritically from past work and research only perpetuates racism.
A.I. algorithms are facilitating decisions for judges in court, police departments, the military, and more. The scope of this technology cannot be denied, and therefore the potential breadth of harm it can cause for generations to come should not be overlooked. With wide-scale implementation of A.I. for facial recognition, the consequences of bias are disastrous. IBM seems to agree with this statement. We are releasing this guide today because the role of an ethics board is to ensure a diverse workforce, a diverse customer feedback loop and unbiased training datasets.
I built an ethics program at an A.I. company. I know that it takes a foundation of diversity, equality and inclusion to prevent systemic bias in the future. An ethics program builds that foundation for A.I., but the program needs adoption from the very top. That’s why DE&I has always been, and will continue to be, a core value at ClearOPS. Take note: building an ethics program takes time and intention. If our small startup can build with intention, then so can yours. Our hope is that this guide is adopted at every A.I. driven company out there, because change can only happen when there is action. Be the change.
All my best, Caroline.
Caroline McCaffery is the CEO & Co-Founder of ClearOPS, Inc., a B2B SaaS data privacy and cybersecurity company launched in October 2017. ClearOPS is built on restoring your trust in business, people and systems.