An article in PC Magazine offers a good discussion of how the algorithms used in health care AI systems can be biased against certain populations. Alongside several clear examples of bias with deadly consequences for patients, the article explains how such bias creeps into these systems in the first place.
Beyond documenting the problems with health care AI systems, the article suggests ways to address them, notably through greater transparency. When AI models cannot be explained and evaluated, the opportunities for biased processes to be set in motion are greater. Standards for AI systems across a range of sectors need to be developed, and they need to be monitored to make sure they are observed in practice.
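One concrete form such evaluation can take is a fairness audit of a model's outputs. The sketch below, with entirely hypothetical data and function names, compares a model's positive-prediction rates across two patient groups, one simple check an auditor might run when a system's internals cannot be inspected directly:

```python
# Minimal sketch of one transparency check: comparing a model's
# positive-prediction rates across patient groups (a demographic-parity
# style audit). All data and names here are hypothetical illustrations.

def selection_rates(predictions, groups):
    """Return the fraction of positive predictions for each group."""
    totals, positives = {}, {}
    for pred, group in zip(predictions, groups):
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + pred
    return {g: positives[g] / totals[g] for g in totals}

# Hypothetical audit data: 1 = model recommends extra care, 0 = it does not.
preds  = [1, 1, 0, 1, 0, 0, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

rates = selection_rates(preds, groups)
disparity = max(rates.values()) - min(rates.values())
print(rates)      # {'A': 0.6, 'B': 0.2}
print(disparity)  # ~0.4 — a large gap that would warrant investigation
```

A real audit would go much further (controlling for medical need, checking error rates per group, and so on), but even this simple comparison shows why access to a system's predictions, and not just its marketing claims, matters for oversight.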