
ChatGPT: the profitable pivot of OpenAI, a lab that wanted to “protect humanity”

How many people believed in OpenAI that day? It is May 2019, at a recording of the StrictlyVC show devoted to Silicon Valley. In front of an audience of insiders, Sam Altman, the head of the company, gets tangled up. His still little-known outfit, which aims to create the first AI as intelligent as a human, has just gone “for profit”. A turning point. But for now, OpenAI has “never been profitable” and “doesn’t know how” to become so, Altman admits. He adds: “We’ll ask our AI.” His listeners laugh. Was he serious?

Three and a half years later, OpenAI’s machines have not reached that level. But the company has managed to develop an intelligent system capable of writing concise yet convincing speeches, presentations, and poems. Open for free testing since late 2022, ChatGPT has already won over several million users and is set to be integrated into Bing, Microsoft’s search engine. Its image counterpart, DALL-E 2, has a million customers on a waiting list. Subscriptions to these tools are expected to bring in a billion dollars in 2024, according to an organization that originally wanted to be… a charity.

In its seven years of existence, the American startup has carved out a prime place in the small world of artificial intelligence, to the point of commanding a $29 billion valuation if a newly announced fundraising round goes through. Along the way, it has created ambiguity about its raison d’être, in a sector riven by major questions about the safety, biases, and capabilities of such technologies. Founded as a research lab to “protect humanity” from the “misuse of AI,” the organization has gradually abandoned its original promises: to remain non-profit, to publish its source code, and to stay financially independent.

David vs. the Gafam

In 2015, OpenAI founders Sam Altman and Elon Musk brandished the organization’s “non-profit” status as an ethical guarantee for their wild ambition of rivaling the human brain. “Because our research is free from financial obligations, we can better focus on a positive human impact,” the heads of Y Combinator and Tesla, respectively, assured in the inaugural blog post. And they insisted: in the face of the potential dangers of AI, “it is important to have a leading research institution capable of advancing the collective interest. […]”

This promise allows OpenAI to secure the support of renowned investors such as Reid Hoffman and Peter Thiel, founders of LinkedIn and PayPal respectively. A billion dollars is pledged, a tidy sum for an organization that has to compete with the digital giants starting from scratch. Some fifteen of the most brilliant scientists in the field, such as Ilya Sutskever, a former Googler and pioneer of these technologies, also sign up, attracted by the ambitions of an outfit that does not hesitate to pay them high salaries despite the absence of profits.

OpenAI takes other precautions against the creation of dangerous AI. A department is dedicated to safety and ethics research. And the organization promises to make its work public. While Google and Facebook still reveal their source code today, “will they do so if they are close to surpassing human intelligence?” Altman asked at OpenAI’s launch. “OpenAI responded to emerging concerns about AI with the idea that if the tools developed are available to everyone, the risks will be lower,” summarizes Jan Chevalier, director of artificial intelligence research at Paris Dauphine.

Guardrails… with feet of clay

In February 2019, OpenAI breaks with this policy for the first time. In a blog post, the research outfit explains that it has just created a technology too dangerous to publish: GPT-2, an earlier version of the technology that makes ChatGPT so convincing in its use of language. The company says it fears an avalanche of “misleading press articles” and “content […] faked for social media”… while at the same time granting the press broad access. An exercise in transparency, or a PR stunt? OpenAI will eventually publish the details of this code.

A shift was already taking shape. A few weeks after Elon Musk’s departure, OpenAI adopted a new charter of principles in which the organization already planned to reduce the share of source code it publishes, citing the “safety concerns” raised by the improvement of its technology. It also invoked the “need to mobilize significant resources” in the face of computing power that, by its estimate, is doubling every three to four months in this sector.

One month after the GPT-2 episode, the transition accelerates. In order to raise more funds, OpenAI becomes a “capped-profit” company. At the same time, it wins a match against the world champions of Dota 2, a cooperative video game whose strategies are complex for a computer to grasp. A world first. In July 2019, Microsoft invests a billion dollars and obtains priority use of OpenAI’s tools. In return, OpenAI gets to use Microsoft’s remote supercomputers. The organization avoids a buyout but says goodbye to philanthropy.

How to become the “number one company”

Did OpenAI want to keep the goose that lays the golden eggs for itself? The timing is striking. “After several years of trial and error, OpenAI started to release really interesting models. At the same time, they stopped sharing their manufacturing secrets,” emphasizes Jan Chevalier. At the time, in May 2019, Altman assured: “If we wanted to make money, we would have done it already.” Even as other executives hinted that OpenAI could become the “number one company in the world” and reap “unprecedented” profits if it achieved its goal.

A useful double discourse, notes Jean-Gabriel Ganascia, computer scientist and chairman of the CNRS ethics committee. “The leaders of OpenAI understood that in order to raise funds and recruit at a very high level in the face of the Gafam, they had to make sensational statements. Developing AI to protect humanity from AI means nothing in practice; we don’t really know what the OpenAI project actually is, but it is attractive. OpenAI frightens people and at the same time says it can protect them.”

“OpenAI’s products are not that different from those of other market players,” adds François Yvon, researcher in digital science at the CNRS (Saclay). “Its strength lay above all in understanding the reach of these text and image transformation tools. They could help draft reports, extract information from large databases, or write legal summaries, among other things. The technology existed, but they were the first to imagine these applications.”

December 2022. Riding the success of ChatGPT, OpenAI seeks to recruit with a promotional video. To believe the video, the defense of humanity will play out here, in these gray-brick buildings in downtown San Francisco. Among the houseplants, in front of large wooden bookcases, or perhaps on those soft linen sofas seen almost everywhere in Silicon Valley. Here, they would build “powerful AI”, “for the benefit of all”, “completely safe”. Meanwhile, Elon Musk tweeted: “ChatGPT is scary good. We are not far from dangerously strong AI.” A wink.

Since Microsoft’s investment, several leading figures have left the San Francisco company. According to the Financial Times, at least 14 scientists have departed, some of whom were involved in the birth of ChatGPT. At the head of the dissenters, Dario Amodei, former head of the AI safety department, created his own organization: Anthropic. A “public benefit corporation” with special control mechanisms to guard against commercial interference. And to guarantee an AI… “for the benefit of humanity.”
