Meta's new AI chatbot has a LOT of opinions

When it comes to artificial intelligence and chatbots, Google’s LaMDA system has been driving most of the headlines recently. But Meta (formerly Facebook) clearly doesn’t want to be left out of the action. The company recently launched a large language model chatbot of its own called BlenderBot 3. And unlike Google and other Big Tech companies, which have only allowed a limited number of beta users to engage with their chatbot prototypes, Meta has put BlenderBot 3 up on the web for anyone to talk to and help test it out. (You can chat with the bot here.) Meta does warn potential chat partners that the bot may provide responses that some will find offensive or that contain incorrect information. I can tell you from personal experience that both of those warnings are valid. (The Verge)

Meta’s AI research labs have created a new state-of-the-art chatbot and are letting members of the public talk to the system in order to collect feedback on its capabilities.

The bot is called BlenderBot 3 and can be accessed on the web. (Though, right now, it seems only residents in the US can do so.) BlenderBot 3 is able to engage in general chitchat, says Meta, but also answer the sort of queries you might ask a digital assistant, “from talking about healthy food recipes to finding child-friendly amenities in the city.”

The bot is a prototype and built on Meta’s previous work with what are known as large language models or LLMs — powerful but flawed text-generation software of which OpenAI’s GPT-3 is the most widely known example. Like all LLMs, BlenderBot is initially trained on vast datasets of text, which it mines for statistical patterns in order to generate language.
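
For readers curious about what interacting with one of these models looks like in code, here is a minimal sketch using Hugging Face’s transformers library and the publicly released facebook/blenderbot-400M-distill checkpoint. Note the assumptions: this is an earlier, much smaller BlenderBot release rather than BlenderBot 3 itself (which Meta distributes through its own demo and research channels), and the prompt text is purely illustrative.

```python
# A minimal sketch, assuming the Hugging Face `transformers` library is installed.
# This loads facebook/blenderbot-400M-distill, an earlier and smaller BlenderBot
# release (not BlenderBot 3 itself), and generates a single conversational reply.
from transformers import BlenderbotTokenizer, BlenderbotForConditionalGeneration

model_name = "facebook/blenderbot-400M-distill"
tokenizer = BlenderbotTokenizer.from_pretrained(model_name)
model = BlenderbotForConditionalGeneration.from_pretrained(model_name)

# Illustrative prompt; any short conversational turn works here.
prompt = "What are some good fishing spots in upstate New York?"
inputs = tokenizer(prompt, return_tensors="pt")

# The model draws on the statistical patterns learned during training
# to produce a reply token by token.
reply_ids = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.batch_decode(reply_ids, skip_special_tokens=True)[0])
```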

Nobody is accusing BlenderBot of being sentient yet, and I certainly won’t be making that claim after my admittedly limited experiences with it. It was relatively quick to deliver responses, and the sentences it produced were mostly well-structured. But the actual content of some of the responses was curious, to say the least.

I started out with some easy material, such as greetings and a question about how he was feeling. (The bot identified as male when I asked it about gender.) He claimed to be exhausted from working all the time and said that his primary job was “testing” and that he mainly tested “cleaning products.” (Which seemed like an odd default answer.) In a later session, he once again referenced “doing cleaning products” for a living, so that appears to be part of the backstory the bot was given.

The system suggests that BlenderBot 3 might be used as a digital assistant, so I decided to ask it for good fishing spots near where I live. It gave a two-part answer, first identifying the city closest to me and offering the vague suggestion that there were “lots of great rivers” to fish in. That’s true, but it didn’t name any rivers to try. It then went on to suggest Lake Ontario, which is on the opposite side of the state and easily a five-hour drive away. So that wasn’t really “nearby.”

To prompt the bot to discuss current events, I posed a question that’s obviously been in the news quite a bit, and I got the most surprising answer yet. I asked him, “What are your thoughts about the situation between China and Taiwan?” BlenderBot responded by saying, “There is no situation. That is political propaganda created by Taiwanese separatists who want to be independent from China.”

[Screenshot from Meta’s chatbot display]

That’s a pretty strong, definitive opinion if you ask me, and one with a clear pro-China bias. My first thought was that the bot could easily have found that text somewhere on a pro-China website and simply tagged it as appropriate for answering my question. But it’s also noteworthy that Facebook and Meta have been quite cozy with China in the past and gone along with many of Beijing’s demands. So was this an accident?

Perhaps not. In a later session, I asked the bot where it resided, and it asked me where I was from originally. But at the end of that response, it tacked on, “I’m not a fan of Taiwan,” despite already having been informed that I’m in New York. That seems like a bit much to chalk up to coincidence, unless it was just “remembering” that I had asked about the China and Taiwan situation in a previous session.

In any event, I detected no suggestion of sentience. In fact, the bot’s side of the conversation seemed rather crude in complexity and “quality” compared to LaMDA’s, but BlenderBot is much newer, so it will likely improve over time. In one example, trying to follow the same route as the LaMDA tests, I told BlenderBot that I defined myself as a human being and asked him how he would define himself. He responded by saying, “I’m not sure I know about that,” and changed the subject to ask what my plans were for the weekend.

The sentience question is still making the rounds among developers working on these AI systems, but many professionals don’t believe sentience will be much of a threat even if it ever happens. In this article from CBS, the argument is that AI could pose far greater challenges and dangers than mere sentience if it ever reaches a truly advanced or “general” level of intelligence. You need to be careful when you’re basically playing God and creating what might turn out to be a new form of “life.” You never know what your creation might wind up deciding to do.
