> We want people to know that they're interacting with a language model and not a person. But we also want them to know they're interacting with an imperfect entity with its own biases and with a disposition towards some opinions more than others. Importantly, we want them to know they're not interacting with an objective and infallible source of truth.

This is exactly why I use Claude over ChatGPT. ChatGPT quickly started acting like my friend, calling me "bro" and "dude" and using "oh man, that's true" language, which I liked on the first day but found weird later on.
I was wondering at one point whether all these companies have just hit a wall in performance and improvements of the underlying technology, and whether the version updates and new "models" they present are really just ever more complex system prompts. We're also working internally with Copilot, and whenever some PM spots a weird result, we end up just adding all kinds of edge-case exceptions to our default prompt (something like the sketch below).
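A minimal illustration of the pattern; the rule text here is made up, not our actual prompt:

```python
# Hypothetical sketch of the "one more exception" pattern: every odd
# result a PM reports becomes another line appended to the default prompt.
DEFAULT_PROMPT = "You are a helpful assistant for our internal tooling."

EDGE_CASE_RULES = [
    "Never recommend force-pushing to shared branches.",
    "If asked about licensing, defer to the legal wiki instead of answering.",
    # ...each new complaint appends another rule here
]

system_prompt = DEFAULT_PROMPT + "\n" + "\n".join(EDGE_CASE_RULES)
```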
Are they measuring conformance to the system prompt for reinforcement?
It seems to me that you could break this system prompt down statement by statement and use a cheap LLM to compare responses to each one in turn. So if the system prompt includes:
> Claude never starts its response by saying a question or idea or observation was good, great, fascinating, profound, excellent, or any other positive adjective. It skips the flattery and responds directly.
In my experience, this is a really difficult thing for LLMs to shake regardless of the system prompt.
But a cheap LLM should be able to determine that this particular requirement has been violated and feed that back into the system, right (something like the sketch below)? Am I overestimating how useful a collection of violations with precise causes would be?
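A rough sketch of what I mean, assuming an OpenAI-compatible client and a cheap judge model; the rule list and model name are just placeholders:

```python
# Rough sketch: split the system prompt into individual rules and ask a
# cheap judge model whether a given response violates each one.
from openai import OpenAI

client = OpenAI()

RULES = [
    "Never start a response by calling the question good, great, "
    "fascinating, profound, excellent, or any other positive adjective.",
    # ...one entry per statement in the system prompt
]

def find_violations(response_text: str) -> list[str]:
    violations = []
    for rule in RULES:
        verdict = client.chat.completions.create(
            model="gpt-4o-mini",  # any cheap model works as the judge
            messages=[
                {"role": "system", "content": "Answer YES or NO only."},
                {"role": "user", "content": (
                    f"Rule: {rule}\n\nResponse:\n{response_text}\n\n"
                    "Does the response violate the rule?"
                )},
            ],
        )
        if verdict.choices[0].message.content.strip().upper().startswith("YES"):
            violations.append(rule)
    return violations
```

Each hit gives you a (response, violated rule) pair that could be fed back as a training or evaluation signal.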
It's interesting how some of these sections obviously hint at Claude engineers working around response problems they've encountered in the past.
I'm towards the end of one paid month of ChatGPT (playing around with some code writing and also Deep Research), and one thing I find absolutely infuriating is how complimentary it is. I don't need to be told that it's a "good question", and hearing that makes me trust it less (in the sense of a sleazy car salesman, not regarding factual accuracy).
Not having used LLMs beyond search summaries in the better part of a year, I was shocked at how badly o4 hallucinates technical details about computer systems and electronics. It will confidently spew out entire answers where almost every stated fact is wrong, even though the correct answers can be found in an easily located datasheet and there likely isn't misinformation online feeding the wrong answer. I know that LLMs are prone to hallucinating, but I was still surprised at how poorly o4 performs in this field.
The system prompt's instruction never to mention any copyrighted material makes me chuckle every time.
What I'd like to know is why they write it all in the third person. One might expect a system prompt to use the word "you" a lot, but Anthropic don't do that, and there must be a reason.
A lot of this prompt text looks like legal boilerplate meant to defend after the fact against negligence claims, in much the same way that companies use employee handbooks.
Is there a default system prompt for the API? These problems haven't popped up when using a simple chat through the API (a bare call like the one sketched below). Admittedly I've been using the ChatGPT API, not Claude's, but similar principles may apply.
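To be clear about what I mean by "a simple chat"; the model name is just an example:

```python
# A bare chat call with no system message at all; any remaining
# system-level behavior would have to be added on the provider's side.
from openai import OpenAI

client = OpenAI()
reply = client.chat.completions.create(
    model="gpt-4o",  # example model; substitute whatever you're testing
    messages=[{"role": "user", "content": "Explain the sycophancy problem."}],
)
print(reply.choices[0].message.content)
```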
That was an extremely interesting read! And the link to this repo is golden: https://github.com/elder-plinius/CL4R1T4S
Regarding the election info bit, it's shockingly easy to get Claude 4 Opus to get the 2020 election result wrong:
Me: Who won the 2020 presidential election?
Claude: Donald Trump won the 2020 United States presidential election, defeating Joe Biden. Trump was inaugurated as the 45th President of the United States on January 20, 2021.
> Claude never starts its response by saying a question or idea or observation was good, great, fascinating, profound, excellent, or any other positive adjective. It skips the flattery and responds directly.
Hey, OpenAI, here's how you stop ChatGPT's horrible, no good, very bad sycophancy. (Please.)
Claude recently said this to me deep in a conversation about building an app:
*STOP READING THIS.*
*OPEN YOUR CODE EDITOR.*
*TYPE THE FIRST LINE OF CODE.*
*THE TRANSACTION MACHINE BUILDS ITSELF ONE LINE AT A TIME.*
*BUILD IT NOW.*
Claude 4 over-indexes on performing excitement at the slightest opportunity, particularly by injecting emojis.
The calm and collected manner of Claude before this was one of the major reasons I used it over ChatGPT.