Insurers launch cover for losses caused by AI chatbot errors

  • https://archive.is/BrLso

  • Insurance tech guy here. This is not the revolutionary new type of insurance that it might look like at first glance. It's an adaptation of already-commonplace insurance products that are limited in their market size. If you're curious about this topic, I've written about it at length: https://loeber.substack.com/p/24-insurance-for-ai-easier-sai...

  • Man I wish I could get insurance like that. "Accountability insurance"

You were responsible for something, say, child care, and you just decided to go for a beer and leave the child with an AI. The house burns down, but because you had insurance you are not responsible. You just head along to your next child care job and don't worry too much about it.

  • At best, this screams, “you’re doing it wrong.”

We know this stuff isn’t ready, is easily hacked, is unwanted by consumers… and will fail. Somehow, it’s still more efficient to cover losses and degrade service than to approach the problem differently.

  • No mercy. Had to deal with one when looking for apartments, and it made up whatever it thought I wanted to hear. Good thing they still had humans around in person when I went for a tour.

  • Can consumers get AI insurance that covers eating a pizza with glue on it, or eating a rock?

    https://www.forbes.com/sites/jackkelly/2024/05/31/google-ai-...

    How about MAGA insurance that covers injecting disinfectant, or eating horse dewormer pills, or voting for tariffs?

  • I wonder if the premiums scale up depending on the temperature used for the model output.
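For context on what that would mean for an underwriter: "temperature" rescales the model's logits before sampling, so a higher setting makes output more random and plausibly more error-prone. A minimal sketch of the standard mechanism (illustrative values, not any specific model's API):

```python
import math

def softmax_with_temperature(logits, temperature):
    """Divide logits by the temperature, then apply softmax.

    Low temperature sharpens the distribution toward the argmax token
    (more deterministic); high temperature flattens it toward uniform
    (more random sampling)."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for three candidate tokens
logits = [2.0, 1.0, 0.1]
cold = softmax_with_temperature(logits, 0.2)  # near-deterministic
hot = softmax_with_temperature(logits, 5.0)   # near-uniform
```

At temperature 0.2 the top token dominates the distribution, while at 5.0 the probabilities are nearly equal, which is the knob a risk-based premium could in principle key off.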

  • Oooh, the foundation-model developers could offer to take first losses up to X if developers follow a rule set. This would reduce premiums and thus increase uptake among users of their models.

  • Reading the actual article, this seems odd. It only covers cases where the models degrade, but there hasn't been evidence of an LLM pinned to a checkpoint degrading yet.

  • An AI that hallucinates often enough should just carry Errors and Omissions insurance, like human contractors do.

  • I wonder who makes more errors: underpaid and undertrained employees, or AI chatbots.

  • Whew. Somebody finally figured out how to make money off the nu-AI bubble.

  • Pretty sure it will wind up like insurance against malware like NotPetya.

  • And now with MCP… we should make sure not to allow agents access to sensitive capabilities.