What impact will China's first "measures + mandatory standard" combination in the field of artificial intelligence bring? Experts interpret the hot issues
2025-05-08 Source: CCTV.com

CCTV News: The "Measures for Identifying AI-Generated Synthetic Content" will officially take effect on September 1 this year. The Identification Measures set out requirements such as the mandatory addition of explicit and implicit identifiers, so that all AI-generated text, images, videos and other content must "disclose its identity" and can no longer "pass off the fake as the real". So who adds the identifiers to AI-generated content, and how are they added? Can the Measures curb "marketing accounts" that use AI tools to spread false information? On these questions of concern to netizens, let's hear the experts' interpretations.

According to Zhang Jiyu, executive director of the Institute of Future Rule of Law at Renmin University of China, the Identification Measures clarify the responsibilities and obligations of the relevant parties: service providers must ensure that identifiers remain intact through content generation, dissemination, downloading and other stages; internet application distribution platforms must review whether applications add identifiers in compliance with the rules; and individual users must actively declare AI-generated content when publishing it.

Zhang Jiyu said: "The Measures set out specific details for implicit identification, in particular requiring that, under current technical conditions, at least the file's metadata be labeled, that is, the descriptive data within the file that users cannot see. In this way, implicit identification does not interfere with the user's normal use while still conveying key identifying information. For important content that could cause public confusion or serious misunderstanding, explicit identification must also be added."

According to Zhang Jiyu, when individual users create text with AI, no special labeling is needed as long as the content does not harm the interests of individuals, enterprises, society or other parties. However, AI-generated text that imitates a person's speech, mimics a media outlet's news releases, or impersonates announcements from government departments, that is, content prone to causing misunderstanding and confusing objective facts, must carry explicit identification.

Zhang Jiyu said: "Artificial intelligence technology is still developing, and so are the corresponding risk detection and prevention technologies. So we cannot call this an ultimate solution; it is a relatively good approach under current technical conditions, one that balances development and application against the prevention and control of security risks."

The "Measures for Identifying AI-Generated Synthetic Content" will come into effect on September 1. What impact will they have on the industry and on individual users?

According to the reporter, as of December 31, 2024, 302 generative artificial intelligence services in China had been registered with the Cyberspace Administration of China, and generative AI products had reached 230 million users. So what impact will the implementation of the Identification Measures have on the industry, individual users and related technologies? And why is there a half-year gap before implementation? Let's continue with the experts' interpretation.

According to experts, under the requirements of the Identification Measures, identifiers will also block the wholesale reposting of directly AI-generated content, give content platforms clearer compliance requirements and management procedures, and help reduce overall content-auditing costs.

Zhang Jiyu said: "This will promote the authenticity of content across cyberspace and the digital space. It will also encourage self-media creators to use AI technology to make better videos, rather than to fabricate false news for attention; in other words, it guides self-media toward more positive uses of AI technology."

Experts noted that implementing the Identification Measures is conducive to fair competition and effective governance. First, mandatory labeling helps protect traditional content industries and can cushion the impact of AI technology to a certain extent. In addition, identifiers make AI-generated content traceable to its source, reducing infringement, fraud and other problems it may cause, and safeguarding personal privacy and property security.

According to the reporter, as China's first "measures + mandatory standard" combination in the field of artificial intelligence, the mandatory national standard "Cybersecurity Technology: Labeling Method for AI-Generated Synthetic Content" will be implemented on September 1, simultaneously with the "Measures for Identifying AI-Generated Synthetic Content".

Zhang Jiyu said: "Explicit and implicit identifiers must be applied on the content-generation side, and explicit identifiers must be presented to users. At the same time, the Measures require dissemination platforms to deploy corresponding identifier detection. Deploying these technical measures takes some time to prepare."
