Maybe we can extrapolate from Gödel: a complete ruleset cannot be consistent, and a consistent ruleset cannot be complete.
Somewhere in there lies the human ingenuity to exploit certain pathological patterns [1] that defeat (current) AI.
But it's not always obvious, and the composer sapiens must defy their own understanding of the rules to create a purposeful deviation (sometimes with help from other AI) [2].
Sharing GPT prompts carries the implication that the results are deterministic. That helps QA, but humans love to play.
[1] Adversarial Policies Defeat Superhuman Go AIs: https://arxiv.org/abs/2211.00241
[2] Kellin Pelrine