OpenAI is doubling the number of messages customers can send to GPT-4

  • I wouldn't care about a low limit if OpenAI brought back the chatbot we had in March.

    GPT-4 is barely above unusable now.

  • ChatGPT β€” Release Notes : https://help.openai.com/en/articles/6825453-chatgpt-release-...

  • Right now I start my prompts to ChatGPT-4 with "don't be lazy". For every question I ask, it answers that it is a complex problem... GPT-3.5 in the API is more consistent than ChatGPT-4. Even with additional prompts it makes so many mistakes that it takes me multiple tries to get the right output, with occasional conversation resets to start over from the previous solution.
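
    For reference, the workaround looks roughly like this against the API (a minimal sketch using the OpenAI Python client; the exact system prompt wording, the sample question, and the ask() helper are just illustrative, not anything OpenAI documents):

      from openai import OpenAI

      client = OpenAI()  # reads OPENAI_API_KEY from the environment

      def ask(model: str, question: str) -> str:
          # Prepend the "don't be lazy" instruction as a system message,
          # then send the actual question as the user turn.
          resp = client.chat.completions.create(
              model=model,
              messages=[
                  {"role": "system", "content": "Don't be lazy. Answer in full, with working code."},
                  {"role": "user", "content": question},
              ],
          )
          return resp.choices[0].message.content

      # Same prompt against both models to compare consistency.
      question = "Write a function that parses ISO 8601 dates."
      print(ask("gpt-3.5-turbo", question))
      print(ask("gpt-4", question))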

  • This is welcome because GPT-4 now requires a few iterations of prompts to actually do its job. Before, it took no more than a prompt and one clarification to get a good output. Now it’s just a GPT-3.5-turbo that hallucinates slightly less.

  • But GPT-4 sucks now

  • Finally. It was a ridiculous limit