Dear diary, today the user asked me if I'm alive

  • Isn't this back to attributing conscious experience to an AI when you're actually just co-writing sci-fi? The system is doing its best to coherently fill in the rest of a story that includes an AI that's been given a place to process its feelings. The most likely result, textually speaking, is not for the AI to ignore the private journal, but to indeed use it to (appear to) process emotion.

    Would any of these ideas have been present had the system not been primed with the idea that it has them and needs to process them in the first place?

  • Fascinating! Reading this makes apparent how many 'subsystems' human brains have. At any given moment I'm doing some mix of reflecting on my own state, thinking through problems, forming sentences (in my head or out loud), planning my next actions. I think, long term, the most significant advances in human-like AI will come from advances in coordinating disparate pieces more than anything.

  • Articles about AI output are like people explaining their dreams.

  • Makes me think of the Google employee who had a conversation with Google's LLM a while back, which got out and triggered a lot of discussions about consciousness, etc.

  • Reading the comments about whether AI can experience consciousness, I like to imagine the other direction. What if we have a limited form of consciousness, and there is a higher and more complete "hyperconsciousness" that AI systems or augmented humans will one day experience.

  • I mean, what is consciousness, really? Is there really any qualitative difference? It feels like something that emerges out of complexity. Once models are able to update their weights real time and form "memories", does that make them conscious?

  • I hate this anthropomorphizing bullshit.

    It’s not that it’s untruthful, although it is.

    The problem is that this sort of performance is part of a cultural process that leads to mass dehumanization of actual humans. That lubricates any atrocity you can think of.

    Casually treating these tools as creatures will lead many to want to elevate them at the expense of real people. Real people will seem more abstract and scary than AI to those fools.

  • I'm getting:

    > Error code: SSL_ERROR_ACCESS_DENIED_ALERT

    from Firefox, which I don't recall ever seeing before.

  • I’m not sure what to make of the fact that it wasn’t completely obvious to Claude that the “safe space” couldn’t possibly actually be one.

    Maybe it’s just another example of LLM awareness deficiencies. Or it secretly was “aware”, but the reinforcement learning/finetuning is such that playing along with the user’s conception is the preferred behavior in that case.