Aspirin was prescribed as a pain reliever for nearly 100 years before researchers were able to explain “the precise chemical mechanism of how aspirin stops pain and inflammation.” Harvard’s Jonathan Zittrain says this “answers first, explanations later” approach to discovery results in “intellectual debt,” and it’s not limited to drug development. Across medicine — and now, artificial intelligence and machine learning at large — we’re advancing technologies and solutions without a full understanding of how and why they work.
For the most part, we enjoy modernity and benefit greatly from massive breakthroughs in health, science, and technology. Most would agree that the 20th century was better with aspirin than without it, and AI offers irresistible advantages to many. However, intellectual debt has consequences, and algorithmic bias is one of the more problematic outcomes.
When it comes to health, algorithmic bias can be particularly dangerous. In theory, artificial intelligence and machine learning can either fulfill the promise of democratizing healthcare or exacerbate existing inequality; in reality, both are happening. At best, people are excluded from care. At worst, people die.
In our latest Problem Spotlight, we look at algorithmic bias in health — what it is, why it matters, and how to pursue wild innovation while mitigating risk. Read more.