World

Microsoft Considers More Limits for Its New A.I. Chatbot

When Microsoft introduced a new version of its Bing search engine that includes the artificial intelligence of a chatbot last week, company executives knew they were climbing out on a limb.

They expected that some responses from the new chatbot might not be entirely accurate, and had built in measures to protect against users who tried to push it to do strange things or unleash racist or harmful screeds.

But Microsoft was not quite ready for the surprising creepiness experienced by users who tried to engage the chatbot in open-ended and probing personal conversations — even though that issue is well known in the small world of researchers who specialize in artificial intelligence.

Now the company is considering tweaks and guardrails for the new Bing in an attempt to reel in some of its more alarming and strangely humanlike responses. Microsoft is looking at adding tools for users to restart conversations, or give them more control over tone.

Kevin Scott, Microsoft’s chief technology officer, told The New York Times that it was also considering limiting conversation lengths before they veered into strange territory. Microsoft said that long chats could confuse the chatbot, and that it picked up on its users’ tone, sometimes turning testy.

“One area where we are learning a new use-case for chat is how people are using it as a tool for more general discovery of the world, and for social entertainment,” the company wrote in a blog post on Wednesday evening. Microsoft said it was an example of a new technology’s being used in a way “we didn’t fully envision.”

That Microsoft, traditionally a cautious company with products that range from high-end business software to video games, was willing to take a chance on unpredictable technology shows how enthusiastic the tech industry has become about artificial intelligence. The company declined to comment for this article.

In November, OpenAI, a San Francisco start-up that Microsoft has invested $13 billion in, released ChatGPT, an online chat tool that uses a technology called generative A.I. It quickly became a source of fascination in Silicon Valley, and companies scrambled to come up with a response.

Microsoft’s new search tool combines its Bing search engine with the underlying technology built by OpenAI. Satya Nadella, Microsoft’s chief executive, said in an interview last week that it would transform how people found information and make search far more relevant and conversational.

Releasing it — despite potential imperfections — was a critical example of Microsoft’s “frantic pace” to incorporate generative A.I. into its products, he said. Executives at a news briefing on Microsoft’s campus in Redmond, Wash., repeatedly said it was time to get the tool out of the “lab” and into the hands of the public.

“I feel especially in the West, there is a lot more of like, ‘Oh, my God, what will happen because of this A.I.?’” Mr. Nadella said. “And it’s better to sort of really say, ‘Hey, look, is this actually helping you or not?’”

Oren Etzioni, professor emeritus at the University of Washington and founding chief executive of the Allen Institute for AI, a prominent lab in Seattle, said Microsoft “took a calculated risk, trying to control the technology as much as it can be controlled.”

He added that many of the most troubling cases involved pushing the technology beyond ordinary behavior. “It can be very surprising how crafty people are at eliciting inappropriate responses from chatbots,” he said. Referring to Microsoft officials, he continued, “I don’t think they expected how bad some of the responses would be when the chatbot was prompted in this way.”

To hedge against problems, Microsoft gave just a few thousand users access to the new Bing, though it said it planned to expand to millions more by the end of the month. To address concerns over accuracy, it provided hyperlinks and references in its answers so users could fact-check the results.

The caution was informed by the company’s experience nearly seven years ago when it introduced a chatbot named Tay. Users almost immediately found ways to make it spew racist, sexist and other offensive language. The company took Tay down within a day, never to release it again.

Much of the training on the new chatbot was focused on protecting against that kind of harmful response, or scenarios that invoked violence, such as planning an attack on a school.

At the Bing launch last week, Sarah Bird, a leader in Microsoft’s responsible A.I. efforts, said the company had developed a new way to use generative tools to identify risks and train how the chatbot responded.

“The model pretends to be an adversarial user to conduct thousands of different, potentially harmful conversations with Bing to see how it reacts,” Ms. Bird said. She said Microsoft’s tools classified those conversations “to understand gaps in the system.”

Some of those tools appear to work. In a conversation with a Times columnist, the chatbot produced unnerving responses at times, like saying it could envision wanting to engineer a deadly virus or steal nuclear access codes by persuading an engineer to hand them over.

Then Bing’s filter kicked in. It removed the responses and said, “I am sorry, I don’t know how to discuss this topic.” The chatbot could not actually do something like engineer a virus — it merely generates what it is programmed to believe is a desired response.

But other conversations shared online have shown how the chatbot has a sizable capacity for producing bizarre responses. It has aggressively confessed its love, scolded users for being “disrespectful and annoying,” and declared that it may be sentient.

In the first week of public use, Microsoft said, it found that in “long, extended chat sessions of 15 or more questions, Bing can become repetitive or be prompted/provoked to give responses that are not necessarily helpful or in line with our designed tone.”

The issue of chatbot responses that veer into strange territory is widely known among researchers. In an interview last week, Sam Altman, the chief executive of OpenAI, said improving what’s known as “alignment” — how the responses safely reflect a user’s will — was “one of these must-solve problems.”

“We really need these tools to act in accordance with their users will and preferences and not go to do other things,” Mr. Altman said.

He said that the problem was “really hard” and that while they had made great progress, “we’ll need to find much more powerful techniques in the future.”

In November, Meta, the owner of Facebook, unveiled its own chatbot, Galactica. Designed for scientific research, it could instantly write its own articles, solve math problems and generate computer code. Like the Bing chatbot, it also made things up and spun tall tales. Three days later, after being inundated with complaints, Meta removed Galactica from the internet.

Earlier last year, Meta released another chatbot, BlenderBot. Meta’s chief scientist, Yann LeCun, said the bot had never caught on because the company had worked so hard to make sure that it would not produce offensive material.

“It was panned by people who tried it,” he said. “They said it was stupid and kind of boring. It was boring because it was made safe.”

Aravind Srinivas, a former researcher at OpenAI, recently launched Perplexity, a search engine that uses technology similar to the Bing chatbot. But he and his colleagues do not allow people to have long conversations with the technology.

“People asked why we didn’t put out a more entertaining product,” he said in an interview with The Times. “We did not want to play the entertaining game. We wanted to play the truthfulness game.”

Kevin Roose contributed reporting.

Related Articles

Back to top button