McClelland argues that the problem is more basic: we still do not know what causes or explains consciousness in the first place, which means we do not have a solid foundation for testing whether AI has it.
The problem is even more basic than just recognition --- we do not have a solid foundation for building AI that has it.
In general, it's really hard to build software to mimic something that isn't even defined yet.
Actual paper: https://arxiv.org/abs/2412.13145
Eventually it will be extremely difficult for most humans to detect in most humans.
Because consciousness is neither a scientific nor a philosophical phenomenon?
Like all references, it exists symbolically within the mind of the observer. It's the mistake of reification[1] to forget this authorship and go looking for it as if it exists out there in reality.
I hate to be the bearer of bad news, but the engineer's distaste for the soft sciences, and above all sociology, increases susceptibility to this fallacy. You may be more familiar with its formulation as "the map is not the territory."
Looking for consciousness in the mathematics of ANNs will yield just as much insight as tearing apart the fibers of dollar bills looking for the essence of financial value, distilling paints to find the soul of art, or searching for gender in genetic material. It's a category mistake[2] wrapped in bias. We'd be much better off learning how the ontology of consciousness is constructed than continuing to apply the hammer of hard science.
Can we even agree on the definition?
How do we know the Cambridge Philosopher is conscious?