Building Ethics Into Your AI Strategy


Tay, the AI-powered chatbot from Microsoft, had to be shut down within 24 hours of its debut after it turned sexist, racist and offensive. Developed to conduct research on conversational understanding, Tay was supposed to learn from the messages it received and gradually improve its ability to hold engaging conversations. Unfortunately, the training data it received from the Twitter community sent it off track. Whether it was proclaiming an affinity for Hitler or tweeting hateful comments about Jews and feminists, Tay was out of control.

IBM’s Watson started swearing after ingesting the Urban Dictionary, the Nikon S360 camera missed the mark on racial sensitivity, and a computerized assessment at a federal prison rated a black woman as more likely to commit a future crime than hardened criminals. Tomorrow, when self-driving cars become mainstream, will the technology powering them put the passengers at risk to save a pedestrian who emerges from nowhere?

Clearly, releasing AI into the real world, where the unpredictability of the environment may expose a software error, can have disastrous consequences. Nick Bostrom, the director of the Future of Humanity Institute at Oxford University, notes in his book Superintelligence that it may not always be easy to turn off an intelligent machine. So the question is: Are you training your machine learning algorithms on biased data? Stop. Think. Hit Refresh.

Rumman Chowdhury, Senior Principal and Global Lead for Responsible AI at Accenture, outlined the “Five Principles of Human Centric AI” for designing ethical AI solutions at an AI Summit in San Francisco earlier this year. They are:

Enable Enhanced Judgement

AI should help people identify and address biases, not invisibly alter their behavior to reflect the desired outcome.

Collaborate, Not Challenge

Real-time interactivity with AI mimics how humans grow and learn. We should design AI to co-create with us rather than to correct us.

Human + AI

Humans and AI have complementary skills. Human-centric AI capitalizes on the critical thinking that humans excel at and combines it with the massive computational power of AI.

Create Diverse and Inclusive Teams

Diversity and inclusion take many forms: racial, gender, academic, geographic, to name a few. Truly human-centric AI accounts for the vast differences among the humans who will be affected by it, by inviting these perspectives into the design and development process.

Align Machine Intelligence with Human Values

AI may be better at making decisions, but that doesn’t necessarily mean it makes better decisions. What are the values that should be embodied in your product? How might these values vary across different demographics?

These questions need to be taken into consideration while designing AI solutions. Responsible AI should be developed to incorporate core human values and sensitivities.

Abhishek Gupta, Prestige Scholar at McGill and AI ethics researcher, advocates for AI products to be “ethical-by-design,” just as cybersecurity teams at leading companies around the world ensure that their products are “secure-by-design.”

Most experts call for collecting data from a diverse set of users to ensure adequate representation in the underlying dataset. Diversity in speech, culture, race, gender… and even user interfaces should all be taken into account when developing a product with AI at its core.
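
As a rough illustration of what such a representation check might look like in practice, here is a minimal Python sketch. The column name, reference shares and the 20% tolerance are hypothetical placeholders, not a prescribed standard:

    import pandas as pd

    # Hypothetical training set; in practice, load your real data.
    df = pd.DataFrame({
        "gender": ["female", "male", "male", "male", "female", "male"],
    })

    # Reference shares are an assumption here; use census or user-base figures.
    reference = {"female": 0.5, "male": 0.5}
    observed = df["gender"].value_counts(normalize=True)

    for group, expected in reference.items():
        actual = observed.get(group, 0.0)
        status = "UNDER-REPRESENTED" if actual < 0.8 * expected else "ok"
        print(f"{group}: {actual:.0%} of data vs. {expected:.0%} expected -> {status}")

An audit like this is only a starting point; representation along one attribute says nothing about intersections, such as gender within a given language or region.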

Jane Nemcova, VP & GM of Global Services for Machine Intelligence at Lionbridge, believes that the government should get involved and that there’s a pressing need for a fundamental shift in how companies approach ethics in AI. Ethics should be built into the daily life of a company by educating everyone and making it a habit.

As mathematical models take over our daily lives, determining even the food we eat through restaurant recommendations, and as machine learning algorithms become all-pervasive, forming the building blocks of the products of the present and the future, we need to pause for a moment and ask ourselves: How biased is my training data? What checks do I have in place? Am I building ethics into my AI strategy?
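
One concrete check worth having in place, in the spirit of the risk-assessment example above, is comparing a model’s error rates across demographic groups. The sketch below uses made-up group labels, predictions and outcomes purely for illustration:

    import pandas as pd

    # Hypothetical model outputs: 1 = flagged as high risk, alongside
    # whether the person actually reoffended. All values are illustrative.
    results = pd.DataFrame({
        "group":     ["A", "A", "A", "B", "B", "B"],
        "predicted": [1, 1, 0, 0, 1, 0],
        "actual":    [0, 1, 0, 0, 0, 0],
    })

    # False positive rate per group: flagged high risk despite not reoffending.
    negatives = results[results["actual"] == 0]
    print(negatives.groupby("group")["predicted"].mean())

A large gap between the groups’ false positive rates is exactly the kind of red flag these closing questions are meant to surface.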