This sounds similar to the guy who xeeted about fooling a car dealership chatbot into selling him a car for $1. Different jurisdiction, though.
https://venturebeat.com/ai/a-chevy-for-1-car-dealer-chatbots...
… But when Moffatt later attempted to receive the discount, he learned that the chatbot had been wrong. Air Canada only awarded bereavement fares if the request had been submitted before a flight. The airline later argued the chatbot was a separate legal entity “responsible for its own actions” …
How exactly do you go about making a chatbot a legal entity?
Discussed here: https://news.ycombinator.com/item?id=39378235
Companies will probably stop using these kinds of chatbots if people keep exploiting them. Not that anyone should do that; I'm just saying that's probably how they'd respond.
A human would not fall for that. Chatbots can be tricked and exploited.
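A human agent checks the policy before promising anything, while a bare LLM will happily improvise. A minimal sketch of one mitigation (every name and rule here is hypothetical, not anything Air Canada actually deployed): validate the bot's draft reply against an authoritative policy table before it ever reaches the customer.

    import re

    # Hypothetical authoritative policy table -- the only source of truth.
    # The model's free-text output is never trusted on its own.
    POLICIES = {
        "bereavement_fare": "Discounts must be requested before the flight; "
                            "no retroactive refunds.",
    }

    def guard_reply(draft: str) -> str:
        """Refuse to send a draft that promises money unless it quotes
        the policy table verbatim; escalate to a human instead."""
        makes_promise = re.search(r"\$\d|refund|discount", draft, re.IGNORECASE)
        quotes_policy = any(rule in draft for rule in POLICIES.values())
        if makes_promise and not quotes_policy:
            # Don't let the model improvise a commitment on its own.
            return ("I can't confirm that myself -- let me hand you over "
                    "to a human agent.")
        return draft

    # The Air Canada failure mode: the model invents a retroactive refund.
    print(guard_reply("Sure! Book now and claim the bereavement refund "
                      "within 90 days after your flight."))
    # -> prints the escalation message, not a binding (and wrong) promise

The point of the design is that anything resembling a commitment gets checked against data the company actually stands behind, or escalated; a regex check like this is obviously crude, but it shows why "LLM straight to customer" is the part that's exploitable.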
This is correct. For any law to function, responsibility needs to propagate through the use of any tool. If a company is the legal entity that made the decision to deploy a chatbot as a support service, it must be responsible for what that chatbot says. This responsibility should also flow through corporations to the people who had the power to decide how the corporation operates, but I'll take it as a small blessing that we're at least seeing an unwillingness to set a precedent that lets further indirection evaporate responsibility.