China’s cyberspace regulator, the Cyberspace Administration of China (CAC), has unveiled draft measures for managing generative artificial intelligence (AI) services. Under the proposed rules, companies must submit security assessments to authorities before launching generative AI offerings to the public.
This move comes as governments around the world seek ways to mitigate the risks posed by the emerging technology. Popularity of and investment in generative AI have surged in recent months, partly due to the release of OpenAI’s ChatGPT.
Chinese tech giants, including Baidu, SenseTime, and Alibaba, have showcased new AI models that power applications ranging from chatbots to image generators. While China supports AI innovation and application, the CAC stresses that content generated by generative AI must align with the country’s core socialist values, and it encourages the use of safe and reliable software, tools, and data resources.
Providers will be held accountable for the legitimacy of the data used to train generative AI products and must prevent discrimination when designing algorithms and selecting training data. Service providers must also require users to submit their real identities and related information, and they face fines, service suspensions, or even criminal investigations if they fail to comply with the rules.
Furthermore, if their platforms generate inappropriate content, companies must update their technology within three months to prevent similar content from being generated again. The public can comment on the proposals until May 10, and the measures are expected to take effect sometime this year, according to the draft rules.
The sources for this piece include a Reuters article.