With the development and growing popularity of large language models, a range of concerns has emerged. These include the quality of the information the models generate, the fairness with which individuals and groups are represented in and affected by such models, and the safety of the responses they give to users' questions. Because many of the models are developed in the US by private companies, there is the further concern that the general public has little visibility into how the models are built and operate.
The situation in China provides an interesting contrast to the US. Writing in The Decoder, Matthias Bastian explains that the Cyberspace Administration of China (CAC) is requiring Chinese companies developing AI models to submit them for government testing. The testing examines how the models handle politically sensitive topics, including President Xi Jinping and events such as the 1989 Tiananmen Square protests, and the models must be adjusted on an ongoing basis to meet government requirements.
Approaches to governing AI models are at an early stage of development, but there will likely be continuing pressure to understand and manage the information such models contain and present. This represents a new phase of information policy, though in many respects it continues longstanding efforts to control information that date back at least to the invention of the printing press.