At last week’s Strata Conference, the buzzword with the highest frequency count seemed to be “Explainable,” prepended to “Artificial Intelligence.” We have collectively moved past “can we make it work?” and landed squarely in “why did it make that decision?” territory.
In highly regulated industries, the government applies strong back pressure against non-explainable algorithmic decisions. This serves as a check on runaway, impenetrable automation of decision-making. Yet clearly not every AI-driven industry that can exert an enormous impact on our lives is subject to such controlling forces. And the degree of regulation for a given industry can vary greatly from one country to another.
The UAE’s Daman gave an interesting talk on applying Natural Language Processing techniques to non-textual data in healthcare claims adjudication. The strategy appeared to enjoy substantial and measurable success. What creeped me out, though, was their apparent heavy reliance on customer complaints as the corrective force against claims falsely flagged as invalid. The presenter offered the opinion that if a customer did not fight a rejection, the claim was probably invalid or unimportant anyway.
This feels like data scientists externalizing costs onto customers who occupy a fairly disadvantaged position and must now fight back against a maddeningly opaque decision engine. That seems especially true for Daman, which apparently controls 80% of the health care market in the UAE (cited by one of the presenters as a reason this particular data set was super cool to work on).
What force would stop such a company from taking the next logical step in profit optimization: auto-tuning the rejection of valid claims to the sweet spot where, statistically, customers don’t fight back because recovering their due does not justify the cost?
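To make the perverse incentive concrete, here is a toy calculation. Everything in it is invented for illustration (the claim values, the appeal cost, and the `expected_payout` helper); it simply sketches why an insurer that rejects valid claims keeps the money whenever the customer’s cost of appealing exceeds the claim’s value.

```python
def expected_payout(claims, rejection_rate, appeal_cost):
    """Expected total payout when a fraction of valid claims is
    wrongly rejected, and customers appeal a rejection only if the
    claim is worth more than the hassle of fighting it."""
    paid = 0.0
    for value in claims:
        # Claims that pass the rejection filter are paid normally.
        paid += (1 - rejection_rate) * value
        # Rejected claims are paid only if the customer appeals,
        # which (in this toy model) happens only when the claim is
        # worth more than the cost of fighting.
        if value > appeal_cost:
            paid += rejection_rate * value
    return paid

# Invented numbers: four valid claims, and an appeal that costs the
# customer the equivalent of 200 in time, paperwork, and frustration.
valid_claims = [50, 120, 300, 900]
honest = expected_payout(valid_claims, rejection_rate=0.0, appeal_cost=200)
tuned = expected_payout(valid_claims, rejection_rate=0.3, appeal_cost=200)
print(honest, tuned)  # the insurer pockets the difference
```

In this sketch the large claims get appealed and paid anyway, so the "optimization" extracts money purely from the small claims that are not worth fighting over, exactly the customers least able to push back.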
There has been much talk of how we must not allow the “Kill Decision” to fall into the hands of robots in warfare. How easy it would be to make the same mistake in less sensational contexts.