Because the dominant contemporary approaches to artificial intelligence are inductive, they represent less a radical departure in human thought than an acceleration, or supercharging, of it. It should not be surprising, then, that less desirable currents of human thought surface in a variety of AI applications.
One example comes from the health care sector, where cautions are being raised about the perpetuation of racist assumptions that could worsen health disparities for Black patients. Writing for the Associated Press, Garance Burke and Matt O'Brien report on a recent study from the Stanford School of Medicine that found that four widely used AI systems (ChatGPT, GPT-4, Bard, and Claude) reinforced false beliefs that could lead to poor clinical decisions regarding Black patients.
Burke and O'Brien go on to discuss efforts underway to improve AI in the medical sector. One initiative at the Mayo Clinic is testing Google's medicine-specific model, Med-PaLM, which is trained on medical literature. Additional independent testing and refinement of AI models may lead to next-generation systems that reduce or eliminate the bias found in today's commonly available systems.