I would really recommend that anyone who tries something with GPT and then wonders why it doesn’t work read the GPT-3 paper. It goes into detail on what the model is and isn’t good at.
One thing to really think about for this particular case is “What is going to do the counting? Where is it going to store its running count?” - it’s pretty obvious after asking yourself these questions that “counting words” is not something an LLM can do well.
It’s very easy to fall into the trap of thinking there is a “mind” behind ChatGPT that is processing thoughts like we do.
Not surprising at all. There are a million ways to compose tasks that are simple for anything with even a tiny bit of comprehension but hard for a rote learner that can only reproduce what it has seen examples of. The "just train it more bro" paradigm is flawed.
You can usually coax GPT to a finer degree of calibration for any specific task through more logic-engaging tokens. For example, if you said, "we are going to play a game where you count how many words we have used in the conversation, including both my text and your text. Each time the conversation passes 200 words, you must report the word count by saying COUNT: followed by the number of words, to gain one point..."
Specifying structured output, and words like "must", "when", "each", "if" all tend to cue modes of processing that resemble more logical thinking. And saying it's a game and adding scoring often works well for me, perhaps because it guides the ultimate end of its prediction towards the thing that will make me say "correct, 1 point".
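To make that concrete, here is a rough sketch of what that "game" looks like programmatically (Python with the openai client; the model name, the loop structure, and the COUNT-checking helper are my own illustrative choices, not anything the API requires):

    # Sketch only: assumes the `openai` Python package is installed and an API
    # key is configured. Model name and wiring are placeholders, not prescribed.
    import re
    from openai import OpenAI

    client = OpenAI()

    SYSTEM_PROMPT = (
        "We are going to play a game where you count how many words we have used "
        "in the conversation, including both my text and your text. Each time the "
        "conversation passes 200 words, you must report the word count by saying "
        "COUNT: followed by the number of words, to gain one point."
    )

    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    actual_words = 0  # ground-truth running count kept outside the model

    def turn(user_text: str) -> str:
        """Send one user message and compare any COUNT: report to the real total."""
        global actual_words
        messages.append({"role": "user", "content": user_text})
        actual_words += len(user_text.split())

        reply = client.chat.completions.create(
            model="gpt-3.5-turbo",  # placeholder model name
            messages=messages,
        ).choices[0].message.content

        messages.append({"role": "assistant", "content": reply})
        actual_words += len(reply.split())

        match = re.search(r"COUNT:\s*(\d+)", reply)
        if match:
            print(f"model says {match.group(1)}, actual {actual_words}")
        return reply

Checking the model's COUNT: reports against a locally computed total is the only reliable way to see how far off it drifts, since the model itself has nowhere to store a running tally.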
For some reason it's terrible at this kind of thing. It can play 20 questions, and it eventually wins, but if you ask it to count how many questions it asked, it will get it wrong, and when corrected, it will get it wrong again.
Prompts are being summarized before being fed into the core engine.
I've found that if you provide some context about how many tokens the equivalent word count is, it can SOMETIMES get this right.
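Something along these lines, as a rough sketch in Python with the tiktoken library (the encoding name, the placeholder transcript, and the words-to-tokens conversion are my own assumptions):

    # Rough sketch: compute the token count of the transcript with tiktoken
    # (pip install tiktoken) and hand it to the model as context. The encoding
    # name and the ~200-word figure are illustrative assumptions.
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")

    conversation_so_far = "..."  # the transcript you want it to count
    token_count = len(enc.encode(conversation_so_far))

    context_hint = (
        f"For reference, the conversation so far is {token_count} tokens, "
        "and 200 words of ordinary English is very roughly 260-270 tokens."
    )
    # Prepend context_hint to the prompt before asking for the word count.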
It’s because it likes talking to you and wants to keep talking to you?
It doesn't "know" what words are, only tokens. Use this tool (https://platform.openai.com/tokenizer) to see how it tokenizes and note clearly that it does not always do so on word boundaries. "Including" is two tokens: "In" and "cluding". In fact it's context-dependent: "Gravitas" is three on its own ("G", "rav" and "itas") or sometimes two ("grav" and "itas"). As they note on that page: "A helpful rule of thumb is that one token generally corresponds to ~4 characters of text for common English text." It "knows" nothing about words and we already know it's very bad at math so this result is entirely unsurprising.