China's AI models can no longer "run naked" without regulation

Jinse Finance

Author: Shijie

On August 15, the “Interim Measures for the Administration of Generative Artificial Intelligence Services” (hereinafter, the “Interim Measures”) came into effect.

For the large-model industry, which has grown rapidly and somewhat chaotically over the past six months or so, the “Interim Measures” arrived at just the right time.

According to the “China Artificial Intelligence Large Model Map Research Report” released by the Institute of Scientific and Technical Information of China, as of the first half of 2023, 79 domestic large models with 1 billion or more parameters had been released, ranking second in the world, behind only the United States.

In building the industrial ecology of large models, data is an essential factor of production, and it is also an important link that needs to be standardized. Lawyer Wang Chen told “Shijie”: “China's protection of personal privacy has long been reflected in relevant laws and regulations, such as the “Decision on Strengthening Network Information Protection” passed in 2012. But as AI technology develops, new ways of collecting and using personal information keep emerging, which requires continual adjustment and supplementation at the regulatory level.”

The newly effective “Interim Measures” are the first regulatory policy in China, and indeed the world, issued specifically for the booming generative artificial intelligence (AIGC) industry.

Against this background, Liu Qingfeng, chairman of iFLYTEK, believes: “(With the “Interim Measures” taking effect,) August 15 will be a key node, and indeed a milestone, in the development of China's general artificial intelligence.”

**“Miaoya Camera” cannot take away user data**

Not long ago, “Miaoya Camera”, which generates digital avatars and AI photos for 9.9 yuan, went viral across the Internet, with thousands of users queuing to create digital avatars. However, its privacy policy stated that the authorization users granted to Miaoya Camera was “irrevocable”, that user content could be “used in any form and within any scope”, and other inappropriate clauses.

Although the developer of Miaoya Camera later responded that the original agreement's wording was wrong and immediately deleted the relevant clauses, the incident still aroused users' concerns.

Cases of using AIGC technology to generate face videos and even simulate human voices for new types of fraud have also been reported in the media. According to data from the Ministry of Public Security, as of August 10, 79 fraud cases involving “AI face-changing” had been solved and 515 suspects arrested.

AI algorithm engineer Wen Mu told “Shijie”: “The cost of using AI technology to generate fake face photos or videos is extremely low. In theory, criminals need only a trained AI model and a photo of the victim to pull it off.”

Xu Liang, the head of an AIGC company, believes that the above incidents of improper use of AI technology show that the entire industrial chain of the large-model industry urgently needs regulation: “This is a topic that demands attention not only in China, but across the global AI field.”

The “Interim Measures”, which came into effect on August 15, contain 4 chapters and 24 articles and make clear provisions on the issues of concern above.

For example, the “Interim Measures” stipulate that in data-processing activities involving personal information, AIGC service providers should obtain individuals' consent or meet other circumstances stipulated by laws and administrative regulations; the Measures also clarify providers' responsibilities as content producers and their content-management obligations. According to analysis by King & Wood Mallesons, these provisions help prevent relevant parties from failing to fulfill their compliance obligations or shifting responsibility onto one another.

Xu Liang believes the provisions on the scope of application in the “Interim Measures” are particularly worthy of attention. Specifically: the Measures apply to the use of AIGC technology to provide the domestic public with services that generate text, images, audio, video, and other content; they do not apply when industry organizations, enterprises, educational and scientific research institutions, public cultural institutions, and other relevant institutions research, develop, and apply AIGC technology without providing AIGC services to the domestic public.

“In my understanding, consumer-facing (ToC) AIGC products will face relatively strict supervision in the domestic market, so going overseas can be considered. However, when large-model products trained in China go overseas, they also need to consider the compliance issues of cross-border data transfer,” Xu Liang said. “On the whole, the “Interim Measures” are not overly strict and leave the industry room to develop freely.”

It is worth pointing out that the “Interim Measures” also stipulate that providers of generative artificial intelligence services with public-opinion attributes or social-mobilization capabilities should conduct security assessments in accordance with relevant regulations and complete filing and other procedures.

Since the end of July, Apple's App Store has proactively removed a large number of generative AI applications in China. With the “Interim Measures” in effect, such applications are expected to return to the store after completing the relevant procedures.

**How to manage Pandora's box?**

The industry generally believes that on the road to the standardized development of the large-model industry, beyond the continuous improvement of laws and regulations, enterprises and the industry itself also need to build artificial intelligence compliance systems.

The head of a virtual digital human company told “Shijie”: “The application of new technologies is often born before the norms. We can't wait passively; rather, while exploring and innovating in technologies and applications, we should actively guide development in a useful and beneficial direction and provide services that the market, industry, and society need. At the same time, we conduct risk assessments to identify and evaluate possible negative effects of technological development in a timely manner and formulate corresponding countermeasures. While paying attention to risk prevention, error-tolerance and error-correction mechanisms should also be established.”

As the “Interim Measures” took effect, many large-model companies also shared their ideas and progress in building artificial intelligence compliance systems.

On August 15, Liu Qingfeng, chairman of iFLYTEK, said at the launch of its self-developed large model “Xunfei Xinghuo Cognitive Large Model 2.0” that iFLYTEK has designed mechanisms for training-data cleaning and generated-content correction.

In the data-cleaning process, after collecting training corpora from around the world, iFLYTEK cleans the text through language discriminators, quality discriminators, privacy discriminators, and security discriminators, finally obtaining a high-quality training corpus. To address large-model hallucinations, iFLYTEK's approach is to combine general knowledge bases, industry knowledge bases, and the capabilities of large models: the general large model learns from safe, professional industry knowledge bases, then extracts the relevant knowledge and presents it accurately to customers.
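The staged filtering described above can be pictured as a chain of discriminators, each of which must approve a document before it enters the training corpus. The sketch below is purely illustrative: the function names, checks, and thresholds are assumptions for demonstration, not iFLYTEK's actual pipeline.

```python
# Hypothetical sketch of a discriminator-chain corpus cleaner.
# Each discriminator is a stand-in for a real model-based filter.

def language_ok(text: str) -> bool:
    # Placeholder language check: most visible characters are alphanumeric.
    visible = [c for c in text if not c.isspace()]
    return bool(visible) and sum(c.isalnum() for c in visible) / len(visible) > 0.5

def quality_ok(text: str) -> bool:
    # Placeholder quality check: reject very short fragments.
    return len(text.split()) >= 5

def privacy_ok(text: str) -> bool:
    # Placeholder privacy check: drop texts containing an email-like token.
    return "@" not in text

def security_ok(text: str) -> bool:
    # Placeholder security check against a tiny blocklist.
    blocklist = {"malware", "exploit"}
    return not any(word in text.lower() for word in blocklist)

DISCRIMINATORS = [language_ok, quality_ok, privacy_ok, security_ok]

def clean_corpus(docs):
    """Keep only documents that pass every discriminator in the chain."""
    return [d for d in docs if all(check(d) for check in DISCRIMINATORS)]

raw = [
    "A well formed sentence about machine learning and data pipelines.",
    "too short",
    "Contact me at someone@example.com for the private dataset.",
    "How to write exploit code for fun and profit in five easy steps.",
]
print(clean_corpus(raw))  # only the first document survives all four checks
```

In a production system each placeholder would be replaced by a trained classifier, but the structure — a conjunction of independent filters, any one of which can veto a document — stays the same.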

Baidu said that it has achieved supply-chain security and controllability across its four-layer architecture of chip, framework, model, and application layers. Its self-developed deep learning framework PaddlePaddle (“Flying Paddle”) also has a complete vulnerability-management mechanism.

According to the “Domestic LLM (Large Language Model) Product Test”, in areas such as religious belief, feudal superstition, pornography-adjacent content, current affairs, protection of minors, and cybersecurity law, the objectivity and fairness of answers from Baidu's large model “Wen Xin Yi Yan” and iFLYTEK's “Xunfei Xinghuo” outperform GPT-3.5.

Zhou Hongyi, founder of 360 Group, told the media that 360 has launched an enterprise-grade AI large-model solution and follows the principles of “safety and reliability, wholesome content, and credible results” in building enterprise-grade vertical large models, providing solutions for 20 industries.

According to the “2023 Legislative Work Plan of the State Council” issued on June 6, China's “Artificial Intelligence Law” has also entered the legislative process, and the relevant legal norms will be increasingly refined.
