Taking the Risk Out of Machine Learning and AI

Machine learning and artificial intelligence are integral components of any modern organization’s IT stack, but these data-harvesting tools can have a dark side if appropriate risk management and planning protocols aren’t in place.

There’s no denying the power and possibilities created by AI and machine learning. With this astounding power to build models designed to improve the efficiency and performance of everything from marketing and supply chain to sales and human resources comes considerable responsibility.

A recent McKinsey report explains why companies in every industry should be wary of assuming that these relatively new and remarkably complex tools will always deliver the desired outcome as they’re integrated with other applications and processes.

These tools are just like every other tool that’s ever existed: they’re only as good as the people designing and using them. If they’re rolled out as undercooked products or left to their own devices without constant review, things can get ugly in a hurry.

This spring, the Department of Housing and Urban Development formally charged Facebook with violating the Fair Housing Act by “encouraging, enabling and causing housing discrimination.”

The charge centered on the company’s failure to prevent advertisers – companies and individuals alike – from improperly restricting who could and could not see advertisements for apartment and home rentals.

Facebook is a technology company that’s mastered the use of data-mining, AI and machine-learning tools to build an empire. The government is asserting, essentially, that whether by design or by neglect, Facebook has lost control of its technology, and that it is hurting real people every single day.

HUD claims Facebook allowed advertisers to use tools on the platform that could exclude people classified by the algorithm as “non-American born” or “interested in Hispanic culture,” among other highly discriminatory filters. It also let advertisers hide their properties from people based on their ZIP code – a dated and yet still highly effective way to screen out minorities who might otherwise have applied to rent these properties.

We’ve seen similar examples of AI run amok with YouTube – the company that garners a mind-blowing 37% of total mobile web traffic worldwide. To help put that figure in perspective, consider that Instagram accounts for just 5.7% of mobile web traffic and simple, generic “web browsing” checks in at just 4.6%.

Establishing and growing this leadership position as the world’s dominant video platform is largely the result of the company’s vaunted AI algorithms. When Google Brain, the AI unit of YouTube’s parent company, took over the “recommended” video teases that follow and sit alongside every video on the platform in 2015, kids searching for and watching benign, age-appropriate videos were suddenly prompted to click on other videos that were far afield and much more problematic.

This conundrum – the AI surfacing whatever videos were most watched or most engaging, even when they were nowhere near what viewers initially set out to watch – is a byproduct of unchecked technology doing exactly what it was built to do: maximize the metric it was given, with no notion of appropriateness.
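To see how that plays out mechanically, consider a deliberately simplified sketch – our own illustration, not YouTube’s actual system – of a recommender that ranks candidates purely on predicted engagement. Because relevance to the viewer’s original intent never enters the score, the most engaging video wins even when it’s wildly off-topic.

```python
# Toy recommender that ranks purely on predicted engagement.
# Illustrative only -- not YouTube's actual system.
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    topic: str
    predicted_watch_minutes: float  # the engagement signal being maximized

def recommend(candidates, k=3):
    # Relevance to what the viewer searched for never enters the score.
    return sorted(candidates,
                  key=lambda v: v.predicted_watch_minutes,
                  reverse=True)[:k]

candidates = [
    Video("Alphabet song", "kids", 2.1),
    Video("Counting to ten", "kids", 1.8),
    Video("Shock-content compilation", "fringe", 9.4),  # engaging, off-topic
]

for v in recommend(candidates):
    print(v.title)  # the off-topic video ranks first
```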

The difference, at least in the cases of Facebook and Google, is that those companies have the influence, power and cash to ride out their AI and machine learning missteps. The same may not be the case for small and midsized retailers or even a regional bank.

Forget the potential fines and sanctions. If you’re not managing the risk of your AI and machine learning endeavors, you could be inadvertently turning away some of your future customers because the model was flawed from inception.
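One way to catch that kind of flaw is a routine selection-rate audit. The sketch below is our own illustration, borrowing the “four-fifths” rule of thumb from U.S. employment-selection guidelines: if any group’s approval rate falls below 80% of the best-treated group’s, the model deserves a closer look.

```python
# Minimal bias-audit sketch (illustrative, not a mandated test):
# compare a model's approval rates across groups.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group_label, approved_bool) pairs."""
    approved, totals = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def flag_disparate_impact(decisions, threshold=0.8):
    rates = selection_rates(decisions)
    best = max(rates.values())
    # Flag any group whose rate falls below 80% of the highest rate.
    return {g: r for g, r in rates.items() if r < threshold * best}

decisions = ([("A", True)] * 80 + [("A", False)] * 20 +
             [("B", True)] * 50 + [("B", False)] * 50)
print(flag_disparate_impact(decisions))  # {'B': 0.5} -- group B is being screened out
```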

In some industries, like banking, management has decided to take a more circumspect approach to implementing AI and machine-learning models. Rather than jumping headfirst into the deep end, they’re rolling out these tools for functions like digital marketing and human resources before wiring AI into processes that directly affect investments and account management.

“Enhancing model-risk management to address the risks of machine-learning models will require policy decisions on what to include in a model inventory, as well as determining risk appetite, risk tiering and model life-cycle controls,” the McKinsey report concluded.
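What might that look like in practice? Below is a hypothetical sketch of a model inventory with risk tiering; the field names, tiers and review cadences are our own illustration, not a McKinsey or regulatory standard.

```python
# Hypothetical model inventory with risk tiering and life-cycle checks.
from dataclasses import dataclass
from datetime import date
from enum import Enum

class RiskTier(Enum):
    LOW = 1     # e.g., internal content tagging
    MEDIUM = 2  # e.g., HR resume screening
    HIGH = 3    # e.g., credit or account-management decisions

@dataclass
class ModelRecord:
    name: str
    owner: str                # an accountable business owner, not just a data scientist
    risk_tier: RiskTier
    last_validated: date
    review_cadence_days: int  # higher-risk tiers get shorter review cycles

    def review_overdue(self, today):
        return (today - self.last_validated).days > self.review_cadence_days

inventory = [
    ModelRecord("ad-audience-builder", "marketing", RiskTier.MEDIUM, date(2019, 5, 15), 90),
    ModelRecord("loan-prescreen", "retail-banking", RiskTier.HIGH, date(2018, 11, 1), 30),
]

for m in inventory:
    if m.review_overdue(date(2019, 6, 1)):
        print(f"{m.name}: validation overdue for a {m.risk_tier.name}-risk model")
```

The point is less the code than the discipline: every model gets an accountable owner, a risk tier and a review clock.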

As more and more companies tiptoe their way into the promised land of AI and machine learning, it’s crucial that the tools and models the technologists build are informed by the sensibilities of executives from the business, legal and marketing departments.

It also helps to hire and retain employees throughout your organization – not just in IT – who understand how these AI and machine-learning applications work. They don’t have to be able to explain the technology at a granular level, but they should appreciate how dependent these tools are on inclusive, representative models built from the get-go.

“If you train an algorithm with data that has underlying sexist or racist data, you may end up making a racist or sexist machine learning algorithm,” Alex LaPlante, managing director of research at the Toronto-based Global Risk Institute, said during a panel discussion for insurance professionals.

“Sometimes a machine learning algorithm is the right way to go and maybe sometimes it’s not,” she added. “I think we need to step back before we all jump on the bandwagon.”