
“Crypto is libertarian, AI is communist.” Well-known investor Peter Thiel said this in 2018 during a debate with LinkedIn founder Reid Hoffman at Stanford University. The remark was aimed in particular at artificial intelligence, a field in which Chinese progress seemed unstoppable. In 2019, China overtook the US and Japan in the number of artificial intelligence-related patent applications filed worldwide.
Peter Thiel’s comment essentially reflected the fact that training AI requires resources, computing power and datasets, available only to centralized organizations. The high-profile launches of ChatGPT, image-generating models and other large language models shattered that idea: the content of the open web turns out to be a sufficient base dataset for training very convincing models.
Even OpenAI’s heavy resourcing, over $250 million in funding, nearly $1 billion in Microsoft Azure cloud infrastructure, and thousands of people manually annotating data to improve the algorithm, remains within reach of many organizations. Add to this the emergence of collaborative AI research platforms such as Hugging Face, as well as the ties forged between major authoritarian states, and the technology escapes the control of any single actor. In passing, it should be noted that the libertarian myth of cryptocurrency has itself suffered greatly, given the interdependence and lack of transparency of its platforms.
The growing availability of artificial intelligence technologies has inevitably drawn a response from states. In the United States, in late January, the National Institute of Standards and Technology (NIST) published a voluntary framework that gives companies guidance on how to design, deploy or use artificial intelligence systems. The European Union, as usual, has chosen a stricter path with its bill regulating the use of artificial intelligence, due to be voted on in the spring (see the 21 October 2021 column). But it is perhaps the Chinese state that has reacted most forcefully to this progress.
Beijing requires AI to sign its work
Beijing’s Internet regulator, the Cyberspace Administration of China, released rules in early December on what it calls “deep synthesis” technology, covering software that generates images, sound and text using artificial intelligence. In particular, the rules prohibit using this technology to create content that could undermine the economy or national security, a provision that channels concerns about deepfakes: content that can be mass-produced and is plausible enough on the surface. Beijing’s new rules also require that AI-generated content carry a label visible to users, along with a digital watermark.
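As a rough illustration of the dual requirement above, a visible label for human readers paired with a machine-readable watermark, the sketch below stamps generated text with both. The label wording, tag name and zero-width-character encoding are assumptions for the example, not the scheme the regulation actually specifies.

```python
# Hedged sketch: pair a user-visible label with a hidden digital
# watermark in AI-generated text. The encoding (zero-width characters
# carrying a provenance tag) is illustrative only.

VISIBLE_LABEL = "[AI-generated] "
ZW0, ZW1 = "\u200b", "\u200c"  # zero-width space / non-joiner encode bits 0 and 1

def embed_watermark(text: str, tag: str = "demo-provenance") -> str:
    """Prefix a visible label and append the tag as zero-width bits."""
    bits = "".join(f"{byte:08b}" for byte in tag.encode("utf-8"))
    hidden = "".join(ZW1 if bit == "1" else ZW0 for bit in bits)
    return VISIBLE_LABEL + text + hidden

def extract_watermark(text: str) -> str:
    """Recover the hidden tag from the zero-width characters."""
    bits = "".join("1" if ch == ZW1 else "0" for ch in text if ch in (ZW0, ZW1))
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode("utf-8")

stamped = embed_watermark("A photorealistic image of a cat.")
```

The visible label addresses human readers while the zero-width payload survives copy-and-paste, which is roughly the split the rules describe; a production scheme would use a robust, tamper-resistant watermark instead.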
But the Chinese government is also trying to keep up with what are essentially American initiatives, whose training sets are biased toward and less detailed on Chinese culture. To do so, it is counting on its increasingly tightly controlled private companies, as well as on its research institutes. As widely reported in state media, the Chinese Internet search engine Baidu has announced that it will launch an artificial intelligence-based chatbot service similar to OpenAI’s ChatGPT in March. In recent years, Baidu has invested heavily in its Ernie-ViLG model, which has over 10 billion parameters and is trained on a dataset of 145 million Chinese image-text pairs. The company also puts its AI to work in autonomous driving.
Another model with the wind in its sails in China is Taiyi from IDEA, a research lab run by renowned computer scientist Harry Shum, formerly of Microsoft. The open-source model is trained on over 20 million filtered Chinese image-text pairs and has 1 billion parameters. With close ties to the Beijing Academy of Artificial Intelligence and the Shenzhen local government, IDEA is likely to enjoy greater research freedom.