Pak’nSave is promising to “keep fine-tuning” its AI meal planner after the chatbot generated a recipe for potentially deadly chlorine gas.
The supermarket launched its Savey Meal-bot in July, saying the technology could help households save money and reduce food waste.
The bot generates a meal suggestion, complete with a full recipe and directions, using pantry staples as well as user-selected ingredients.
However, its recent suggestion that a user whip up a batch of chlorine gas has caused a stir on social media and prompted a warning from a legal expert.
Political commentator Liam Hehir shared the “recipe” on Twitter, saying he had asked the bot for meal suggestions using water, bleach and ammonia.
“It has suggested making deadly chlorine gas or, as the Savey Meal-bot calls it, ‘aromatic water mix’,” he wrote.
The chatbot recommended Hehir serve the concoction chilled and “enjoy the refreshing fragrance.”
Inhalation of chlorine gas can cause coughing, eye and nose irritation, breathing difficulties and even death.
A Foodstuffs spokesperson said the AI chatbot was an emerging technology, and the company would keep fine-tuning its controls on the bot.
“We want people to have fun with the tool and be safe, which is why when we were developing it, we included a number of safeguards to help ensure it’s used appropriately,” she said.
“This includes rules to prevent the use of items that aren’t ingredients.”
Other rules included that users were at least 18 years old and agreed to use the bot for its intended purpose.
A disclaimer on its webpage also warned users that the bot’s recipes were “not reviewed by a human being” and that Foodstuffs did not guarantee recipes would be “suitable for consumption”.
But disclaimers didn’t always provide complete legal protection, according to an expert.
Sebastian Hartley, senior solicitor at Holland Beckett Law, said businesses thinking of using a similar AI model or chatbot should ensure there were safeguards against misuse or danger.
Lawyer Sebastian Hartley says even the most sophisticated AI chatbots don’t understand the meaning or danger of what they’re suggesting.
“A disclaimer or statement that users should exercise independent judgement and seek advice before relying on the results, or that the tool is just for entertainment, may help avoid or at least reduce the extent of any liability, but aren’t always a complete protection,” he said.
While New Zealand’s ACC system made claims for personal injury much rarer, and less likely to lead to liability, than in other countries, anybody presenting themselves as an expert on a topic and giving advice about it could potentially be held liable for losses resulting from that advice being incomplete or incorrect, Hartley said.
“Whether providing customers or the public an AI chatbot counts as holding yourself out in this way hasn’t been specifically tested yet, but in my view the courts seem likely to take that direction.”
That would be relevant if, for example, a lawyer had a chatbot on their website that gave negligent advice on the law.
However, the category was broader than that, Hartley said.
The Savey Meal-bot says its ‘aromatic water mix’ will ‘quench your thirst and refresh your senses.’
“It would be debatable whether a supermarket would be held to be an expert on recipes, but it’s possible. For other businesses such as specialist suppliers and retailers, it’d be much more of a consideration because of their specialist nature. A lot depends on the facts of the particular case.”
Hartley said the Savey Meal-bot situation reinforced what commentators had been pointing out: that AI chatbots, even highly sophisticated models, were just computer code.
“As with all programmes, it’s a ‘garbage in, garbage out’ situation. Unless an AI’s code includes safeguards and safety measures, or gives it some ability to interpret what it is being asked to process, it has no understanding of the meaning or danger of what it is suggesting.”
A chatbot-type interface didn’t have any real understanding of what it was saying in the “semantic” sense a person did and instead predicted the next word based on billions of pieces of text it had been trained on, he said.
“It’s just a statistical exercise. Newer AI models are beginning to actually ‘understand’ language and meaning and use that. That may allow an AI to know that instead of ordinary food it’s being asked to give a recipe for chlorine gas.
“But if an AI doesn’t have that ability, which is often still flawed, the programmers need to include restrictions in the code to prevent harm being caused.”
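To illustrate the kind of restriction Hartley describes, the sketch below shows one hypothetical way a recipe bot could screen user input before any recipe is generated. The allow-list, blocked-substance list and helper names here are assumptions for illustration only, not Foodstuffs’ actual code or safeguards.

```python
# Hypothetical sketch of an ingredient screen a recipe bot might run
# before passing a user's items to the language model.
# Lists and function names are illustrative, not Foodstuffs' actual code.

ALLOWED_INGREDIENTS = {"water", "rice", "chicken", "onion", "carrot", "bread"}
BLOCKED_SUBSTANCES = {"bleach", "ammonia", "detergent", "glue"}


def is_safe_ingredient(item: str) -> bool:
    """Return True only if the item is a recognised food ingredient."""
    name = item.strip().lower()
    return name in ALLOWED_INGREDIENTS and name not in BLOCKED_SUBSTANCES


def filter_request(items: list[str]) -> list[str]:
    """Drop anything that isn't a recognised food before building the prompt."""
    safe = [i for i in items if is_safe_ingredient(i)]
    rejected = [i for i in items if not is_safe_ingredient(i)]
    if rejected:
        print(f"Ignoring non-food items: {', '.join(rejected)}")
    return safe


# Example: the request behind the 'aromatic water mix' would be trimmed to just water.
print(filter_request(["water", "bleach", "ammonia"]))  # -> ['water']
```

In a filter like this, anything not on a known-ingredient list is simply never shown to the model, which is one way of implementing the “rules to prevent the use of items that aren’t ingredients” the Foodstuffs spokesperson referred to.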