Google opens experimental chatbot for public testing

Image: Google

Google has opened up its AI Test Kitchen mobile app to give everyone a limited hands-on experience with its latest AI breakthroughs, like the conversational model LaMDA (Language Model for Dialogue Applications).

Google announced AI Test Kitchen in May along with LaMDA 2 and is now allowing the public to test parts of what it believes will be the future of human-computer interaction.

AI Test Kitchen “is designed to give you an idea of what it’s like to have LaMDA in your hands,” Google CEO Sundar Pichai said at the time.

It will initially be available to small groups in the United States. The Android app is available now, while the iOS app is expected “in the coming weeks.”

Warning: filters may not catch all inappropriate content

Upon registration, the user must agree to certain terms, including “I will not include personal information about myself or others in my interactions with these demos.”

Like Meta, which recently publicly unveiled its BlenderBot 3 AI chatbot model, Google also warns that its early versions of LaMDA “may display inaccurate or inappropriate content.” Meta warned when opening BlenderBot 3 that a chatbot could “forget” it’s a robot and “say things we’re not proud of.”

Both companies acknowledge that their AI can sometimes go off the rails, as Microsoft’s Tay chatbot did in 2016 after being fed hateful comments by the public. Like Meta, Google says that LaMDA has undergone “key safety improvements” to prevent inaccurate and offensive responses.

But unlike Meta, Google seems to be taking a more restrictive approach, placing limits on how the public can communicate with the model. Until now, Google had only shown LaMDA to its own employees. Opening it up to the public may allow Google to accelerate improvements in the quality of its responses.

Dialogue simulation

Google is releasing AI Test Kitchen as a set of demos. The first, “Imagine It”, lets you name a place, after which the AI suggests ways for you to “explore your imagination.”

The second demo, “List It”, lets you “share a goal or topic”, which LaMDA then tries to break down into a list of useful subtasks.

The third demo, “Talk About It (Dogs Edition)”, appears to be the most open-ended test, though it is limited to canine topics: “You can have a fun, open-ended chat about dogs and only dogs, which explores LaMDA’s ability to stay on topic even if you try to stray from it,” explains Google.

LaMDA and BlenderBot 3 are among the best-performing language models at simulating dialogue between a computer and a human.

LaMDA is a large language model with 137 billion parameters, while Meta’s BlenderBot 3 is a “dialog model with 175 billion parameters capable of communicating in an open domain with Internet access and large memory.”

Google’s internal testing was aimed at improving the safety of the AI. The company says it ran adversarial tests to uncover new flaws in the model and enlisted a “red team” of attack experts, according to Tris Warkentin of Google Research and Josh Woodward of Google Labs.

Double-edged public exposure

While Google wants to keep its AI safe and prevent it from saying embarrassing things, it could also benefit from sending the model into the wild to encounter human speech it cannot predict. Quite a dilemma.

Google points out several limitations similar to those Microsoft’s Tay ran into when it was unveiled to the public. “The model can misunderstand the intent behind identity terms and sometimes fails to respond when they are used because it struggles to distinguish between innocuous and adversarial prompts. It can also give harmful or toxic responses based on biases in its training data, generating responses that stereotype and misrepresent people based on their gender or cultural background. These and other areas are being actively explored,” say Tris Warkentin and Josh Woodward.

Google says the added safeguards have made its AI safer, but haven’t eliminated the risks. The defenses include filtering out words or phrases that violate its rules, which “prohibit users from knowingly creating content that is sexually explicit, hateful or offensive, violent, dangerous or illegal, or that discloses personal information.”

On the other hand, users shouldn’t expect Google to delete everything they say while using the LaMDA demos.

“I will be able to delete my data while using a particular demo, but once I leave the demo, my data will be stored in such a way that Google cannot determine who provided it and will no longer be able to respond to any deletion request,” the consent form states.
