AI’s Black Box Problem
This week, The New Stack’s owner, the venture capital firm Insight Partners, held a conference in New York exploring the future of artificial intelligence. The good news from ScaleUp:AI is that AI is delivering real value to organizations. But like any valuable new technology, it comes with engineering tradeoffs: every successful machine learning operation brings a new set of challenges.
At the show, PayPal Data Science Senior Director Janice Tse revealed that PayPal has cut its fraud rate by 50%, thanks in part to an internal AI fraud detection system. “An algorithm can learn a lot more patterns,” she said. “Fraud is so dynamic. It’s changing all the time.” Nicholas Warner, CEO of SentinelOne, also made a compelling case that enterprise security protection will increasingly rely on AI, as malicious attackers are already turning to AI to automate and customize their attacks.
However, the conference also showed that accountability is becoming a central concern in AI operations. It isn’t enough for an AI system to produce answers: it needs to show its work as well. The health care industry can’t use “black box” AI solutions, said Humana’s Heather Carroll Cox. The industry needs vendors who can document how results were produced, for governance and risk management.
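To make the contrast concrete, here is a minimal sketch of what “showing its work” can look like in code: a transparent scoring function that returns not just a decision but the reasons behind it, so the result can be audited. The rule names and thresholds are purely illustrative assumptions, not drawn from any real PayPal or Humana system.

```python
# A toy, auditable fraud check: unlike a black-box model, it returns
# the list of rules that fired, so the decision can be documented.
# All thresholds and rule names below are hypothetical.

def score_transaction(amount, account_age_days, country_mismatch):
    """Return (is_flagged, reasons) so the decision is explainable."""
    reasons = []
    score = 0
    if amount > 2500:
        score += 2
        reasons.append("amount above 2500")
    if account_age_days < 30:
        score += 1
        reasons.append("account younger than 30 days")
    if country_mismatch:
        score += 2
        reasons.append("billing/IP country mismatch")
    return score >= 3, reasons

flagged, why = score_transaction(3000, 10, country_mismatch=True)
print(flagged, why)
# → True ['amount above 2500', 'account younger than 30 days',
#         'billing/IP country mismatch']
```

Real fraud systems learn far subtler patterns than these hand-written rules, which is exactly the tension the panelists described: the more a system learns on its own, the more deliberate its operators must be about producing this kind of audit trail.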
She also voiced concern about how AI systems may embed the cultural biases of their creators. Banking giant Wells Fargo, for instance, has come under fire for allegedly discriminatory algorithms.
In another panel, Jared Dunnmon of the U.S. Defense Department’s Defense Innovation Unit agreed with this sentiment. As the stakes for AI systems increase, he argued, the companies that have thought through AI ethics will have a competitive edge. Functional AI is responsible AI, he said.