An artificial intelligence-powered chatbot created by New York City to help small business owners is under fire for dispensing bizarre advice that misstates local policies and encourages companies to break the law.
The MyCity AI chatbot is built on Microsoft's Azure AI large language models and is plagued by the same shortcomings as most AI models, including generating wrong and misleading information.
Days after the issues were first reported last week by tech news outlet The Markup, the city has opted to leave the tool on its official government website.
In response to the criticisms, Mayor Adams stated that the AI tool is "wrong in some areas, and we've got to fix it." He further defended the chatbot, saying that "any time you use technology, you need to put it into the real environment to iron out the kinks."
The chatbot was previously reported as telling landlords that they were free to discriminate based on income and that business owners could take workers' tips as their own—despite both of these practices being illegal in New York.
Subsequent probes of the chatbot's responses on Thursday revealed that it was still giving out incorrect information. Reuters reported that the AI had said that stores no longer had to accept cash as a payment method, even though this violates New York law.
The AI chatbot seems like an especially egregious waste of resources. AI chatbots built on large language models are notoriously prone to spitting out errors, a shortcoming that some experts believe will never be fully fixed. And what is the point of a chatbot that gives you incorrect information that may lead you to break the law? Couldn't the money for this tech go toward something decidedly low-tech yet beloved by all New Yorkers?