> That’s it. “Read 3 files.” Which files? Doesn’t matter. “Searched for 1 pattern.” What pattern? Who cares.
Product manager here. Cynically, this is classic product management: simplify and remove useful information under the guise of 'improving the user experience' or perhaps minimalism if you're more overt about your influences.
It's something that as an industry we should be over by now.
It requires a deep understanding of customer usage to avoid this mistake. It is _really easy_ to think you are making improvements by hiding information if you do not understand why that information is perceived as valuable. Many people have been taught that streamlining and removal are positive. It's even easier when non-expert users are the ones getting the attention. All of us here at HN will have seen UIs where this has happened.
https://github.com/anthropics/claude-code/issues/8477
https://github.com/anthropics/claude-code/issues/15263
https://github.com/anthropics/claude-code/issues/9099
https://github.com/anthropics/claude-code/issues/8371
It's very clear that Anthropic doesn't really want to expose the secret sauce to end users. I have to patch Claude every release to bring this functionality back.
I’m a heavy Claude Code user and it’s pretty clear they’re starting to bend under the weight of their own vibe coding. Each Claude Code update breaks a ton of stuff, has perf issues, etc.
And then this. They want to own your dev workflow and for some reason believe Claude Code is special enough to be closed source. The React TUI is kind of a nightmare to deal with, I bet.
I will say, very happy with the improvements made to Codex 5.3. I’ve been spending A LOT more time with codex and the entire agent toolchain is OSS.
Not sure what anthropic’s plan is, but I haven’t been a fan of their moves in the past month and a half.
Claude's brand is sliding dangerously close to "the Microsoft of AI."
DEVELOPERS, DEVELOPERS, DEVELOPERS, DEVELOPERS
I write mainly out of the hope that some Anthropic employees read this: you need an internal crusade to fight these impulses. Take the high road in the short-term and you may avoid being disrupted in the long-term. It's a culture issue.
Probably your strongest tool is specifically educating people about the history. Microsoft in the late 90s and early 00s was completely dominant, but from today's perspective it's very clear: they made some fundamental choices that didn't age well. As a result, DX on Windows is still not great, even if Visual Studio has the best features, and people with taste by and large prefer Linux.
Apple made an extremely strategic choice: rebuild the OS around BSD, which set them up to align with Linux (the language of servers). The question is: why? Go find out.
The difference is a matter of sensibility, and a matter of allowing that sensibility to exist and flourish in the business.
I'm old, so I remember when Skyrim came out. At the time, people were howling about how "dumbed down" the RPG had become compared to previous versions. They had simplified so many systems. Seemed to work out for them overall.
I understand the article writer's frustration. He liked a thing about a product he uses, and they changed the product. He is angry, he is expressing that anger, and others are sharing in it.
And I'm part of another group of people. I would notice the files being searched without too much interest. Since I pay a monthly rate, I don't care about optimizing tokens. I only care about the quality of the final output.
I think the larger issue is that programmers are feeling like we are losing control. At first we're like, I'll let it auto-complete but no more. Then it was, I'll let it scaffold a project but no more. Each step we are ceding ground. It is strange to watch someone finally break on "they removed the names of the files the agent was operating on". Of all the lost points of control this one seems so trivial. But every camel's back has a breaking point, and we can't judge the straw that does it.
There are a lot of non-developer Claude Code users these days. The hype about vibe coding lets everyone think they can now be an engineer. The problem is, if Anthropic caters to that crowd, the devs who use it for somewhat serious engineering tasks, and who don't believe in the "run an army of parallel agents and pray" methodology, are alienated.
Maybe Claude Code web or desktop could be targeted to these new vibe coders instead? These folks often don't know how simple bash commands work so the terminal is the wrong UX anyway. Bash as a tool is just very powerful for any agentic experience.
All my information about this is based on feel, because debugging isn't really feasible. Verbose mode is a mess, and there's no alternative.
It still does what I need so I'm okay with it, but I'm also on the $20 plan so it's not that big of a worry for me.
I do sense that the big wave of corporate adoption is hitting Anthropic's wallet. If you hadn't noticed, a LOT of companies switched to Claude. No idea why, and this is coming from someone who loves Claude Code.
Anyway, getting some transparency on this would be nice.
I absolutely love reading the thoughts and seeing the commands it uses. It teaches me new stuff, and I think this is what young people need: to be able to know WHAT it is doing and WHY it is doing it. And to have the ability to discuss with another agent what the first agent and I are trying to achieve, asking the questions we have without disturbing the flow, while seeing the live output.
Regarding the thoughts: it also allows me to detect problematic paths it takes, like when it can't find a file.
For example, today I was working on a project that depends on another project, managed by another agent. While refactoring my code it noticed that it needed to see what a command it was invoking actually was, and it went so far as to search through VS Code's user data for the recent-files history to learn more about that command... I stopped it and told it that if it has problems, it should tell me. It explained it couldn't find that file, I gave it the paths, and tokens were saved. Note that in that session I was manually approving all commands, but rejected the one in the data dir.
Why dumb it down?
They don’t seem to realize that doing vibe coding requires enough information to get the vibes.
There are no vibes in “I am looking at files and searching for things” so I have zero weight to assign to your decision quality up until the point where it tells me the evals passed at 100%.
Your agent is not good enough. I trust it like I trust a toddler not to fall into a swimming pool. It’s not trying to, but enough time around the pool and it is going to happen, so I am watching the whole time, and I might even let it fall in if I think it can get itself out.
For a general tool that has such a broad user base, the output should be configurable. There's no way a single config, even with verbose mode, will satisfy everyone.
Set minimal defaults to keep output clean, but let users pick and choose items to output across several levels of verbosity, similar to tcpdump, Ansible, etc. (-v to -vvvvv).
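As a sketch of what stacking verbosity flags could look like in a CLI of this kind (program and flag names here are illustrative, not Claude Code's actual options), argparse's `count` action gives the tcpdump-style `-v` through `-vvvvv` behavior almost for free:

```python
import argparse

def parse_args(argv):
    """Parse tcpdump-style stacking verbosity flags (illustrative sketch)."""
    parser = argparse.ArgumentParser(prog="agent")
    parser.add_argument(
        "-v", "--verbose", action="count", default=0,
        help="increase output detail; repeat for more (-vv, -vvv, ...)",
    )
    return parser.parse_args(argv)

# Level 0 could be today's minimal summary lines, level 1 could add file
# paths and search patterns inline, and higher levels the full tool output.
```

The point of the pattern is that the default stays quiet while each extra `v` is a cheap, discoverable opt-in, which is exactly the middle ground the toggle requests are asking for.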
I know businesses are obsessed with providing Apple-like "experiences", where the product is so refined there's just "the one way" to magically do things, but that's not going to work for a coding agent. It needs to be a unix-like experience, where the app can be customized to fit your bespoke workflow, and opening the man page does critical damage unless you're a wizard.
LLMs are already a magic box, which upsets many people. It'll be a shame if Anthropic alienates their core fan base of SWEs by making things more magical.
It's pretty interesting to watch AI companies start to squeeze their users as the constraints (financial, technical, capacity-wise) start to squeeze the companies.
Ads in ChatGPT. Removing features from Claude Code. I think we're just beginning to face the music. It's also funny how Google "invented" ad injection in replies with real-time auction capabilities, yet OpenAI would be the first to implement it. It's similar to how transformers played out.
For me, that's another "popcorn time". I don't use any of these in any capacity, except Gemini, which I seldom use to ask about things when deep-diving on the web doesn't give any meaningful results. The last question I asked managed to return only one (but, interestingly, correct) reference, which I followed and continued my research from there.
Meanwhile GPT-5.3-Codex, which was just released, is a huge change and much better. It now displays intermediate thinking summaries instead of being silent.
Sounds like the compacting issue.
> Compacting fails when the thread is very large
> We fixed it.
> No you did not
> Yes now it auto compacts all messages.
> Ok but we don't want compaction when the thread isn't large, plus, it still fails when the compacted thread is too large
> ...
This was really useful; sometimes, by a glance, you'd see Claude looking at the wrong files or searching the wrong patterns, and would be able to immediately interrupt it. For those of us who like to be deeply involved in what Claude is doing, those updates were terribly disappointing.
I agree that the quality of Claude Code has felt poor and frustrating recently.
I’ve been persistently dealing with the agent running in circles on itself when trying to fix bugs, not following directions fully and choosing to only accomplish partial requests, failing to compact and halting a session, and ignoring its MCP tooling and doing stupid things like writing cruddy python and osascripts unnecessarily.
I’ve been really curious about Codex recently, but I’m so deep into Claude Code, with multiple skills, agents, MCPs, and a skill router.
Can anyone recommend an easy migration path to codex as a first time codex user from Claude code?
Absolutely worse than dumbed down, 4.6 is a mess. Ask it the simplest of questions, look away, and come back to 700 parallel tool uses. https://old.reddit.com/r/ClaudeAI/comments/1r1cfha/is_anyone...
I also found this change annoying.
Often a codebase ends up with non-authoritative references for things (e.g. docs out of sync with implementation, prototype vs "real" version), and the proper solution is to fix and/or document that divergence. But let's face it, that doesn't always happen. When the AI reads from the wrong source it only makes things worse, and when you can't see what it's reading it's harder to even notice that it's going off track.
Working at Microsoft, I've just now hooked up to Claude Code (my department was not permitted to use it previously) through something called "Agent Maestro", a VS Code extension which I guess pipes Claude Code API requests to our internally hosted Claude models, including Opus 4.6.
I do wonder if there is going to be much of a difference between using Claude Code vs. Copilot CLI when using the same models.
Vibe-coders griping about Claude's vibe-coded CLI hits all the right vibes.
This shows one problem here: a private entity controls Claude Code. You can reason that it brings benefits (perhaps), but to me it feels wrong to let my thinking or code-writing be controlled by a private entity. Perhaps I have been using Linux for too long; I may turn into RMS 2.0 (not really though, I like BSD/MIT licences too).
$200 a month? I buy compute credits as needed and have used maybe $300 in a year
Hey... I have been experimenting with Claude for a few days, and am not thrilled with it compared to web chatbots. I suspect this is partly me being new and unskilled with it, but this is a general summary.
ChatGPT or Gemini: I ask it what I wish to do and show it the relevant code. It gives me an often-correct answer, and I paste it into my program.
Claude: I do the same, and it spends a lot of time thinking. When I check the window for the result, it's stalled with a question... asking to access a project or file that has nothing to do with the problem, and I didn't ask it to look for. Repeat several times until it solves the problem, or I give up with the questions.
My last experience with Claude support was a fun merry-go-round.
I had used a Visa card to buy monthly Pro subscription. One day I ran out of credits so I go to buy extra credit. But my card is declined. I recheck my card limit and try again. Still declined.
To test the card I try extending the Pro subscription. It works. That's when I notice that my card has a security feature called "Secure by Visa". To complete transaction I need to submit OTP on a Visa page. I am redirected to this page while buying Pro subscription but not when trying to buy extra usage.
I open a ticket and mention all the details to Claude support. Even though I give them the full run down of the issue, they say "We have no way of knowing why your card was declined. You have to check with your bank".
Later I get hold of a Mastercard with similar OTP protection. It is called Mastercard Securecode. The OTP triggers on both subscription and extra usage page.
I share this finding with support as well. But the response is the same: "We checked with our engineering team and we have no way of knowing why the other Visa card was declined. You have to check with your bank".
I just gave up trying to buy extra usage. So, I am not really surprised if they keep making the product worse.
I like Claude models, but Crush and OpenCode are miles ahead of Claude Code. It's a pity Anthropic forces us to use inferior tooling (I'm on a "team" plan from work). I can use an API key instead, but then I'll blow past $25 in an hour.
https://github.com/anthropics/claude-code/issues/24537
A dashboard-mode toggle that runs in a dedicated terminal seems like a good candidate for housing the complexity Anthropic thinks "most" users can't handle. When your product is increasing cognitive load, the answer isn't always to remove the complexity entirely. In this case, that decision was clearly the wrong one.
Strong meme game. I'm on an older release and now I'm reluctant to update. In my current release, the verbosity is just where I want it and control-o is there when I really need it.
If you haven't, I recommend giving Opus[1m] + teams a shot. Warning: it's hella expensive, but holy cow... what a tool.
I really dislike this trend that unfortunately has become, well, a trend. And has followers. Namely, let's simplify to "reduce noise" and "not overwhelm users", because "the majority of users don't need…".
This is spreading like a plague: browser address bars are being trimmed down to nothing. Good luck figuring out which protocol you're using, or soon which website you are talking to. The TLS/SSL padlock is gone, so is the way to look into the site certificate (good luck doing that on recent Safari versions). Because users might be confused.
Well the users are not as dumb as you condescendingly make them out to be.
And if you really want to hide information, make it a config setting. Ask users if they want "dumbo mode" and see if they really do.
Like any CLI, Claude Code should follow the decades-old tradition of providing configurable verbosity levels, like tcpdump's -v through -vvvvv, to accommodate varying usage contexts.
I don't get why people cling to the Claude Code abusive relationship. It's got so many issues, it's getting worse, and it's clear that there's no plan to make it open for patching.
Meanwhile OpenCode is right there. (despite Anthropic efforts, you can still use it with a subscription) And you can tweak it any way you want...
Perhaps some power user of Claude Code can enlighten me here, but why not just using OpenCode? I admit I've only briefly tried Claude Code, so perhaps there are unique features there stopping the switch, or some other form of lock-in.
LOL, no, dumbing down was when I paid two months of subscription with the model literally struggling to write basic functions. Something Anthropic eventually acknowledged but offered no refunds for. https://ilikekillnerds.com/2025/09/09/anthropic-finally-admi...
I care A LOT about the details, and I couldn't care less that they're cleaning up terminal output like this.
We're having a UI argument about a workflow problem.
We treat a stateless session like a colleague, then get upset when it forgets our preferences. Anthropic simplified the output because power users aren't the growth vector. This shouldn't surprise anyone.
The fix isn't verbose mode. It's a markdown file the model reads on startup — which files matter, which patterns to follow, what "good" looks like. The model becomes as opinionated as your instructions. The UI becomes irrelevant.
The model is a runtime. Your workflow is the program. Arguing about log verbosity is a distraction.
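For illustration, such a startup file might look like this. The file name and contents below are hypothetical, not a prescribed Anthropic format; the idea is simply to encode your preferences once instead of steering the UI every session:

```markdown
# Project conventions (read at session start)

- Source of truth is `src/`; ignore `prototype/`, it is stale.
- Follow the existing handler patterns when adding endpoints.
- "Good" means: typed, tested, no new dependencies without asking.
- Before editing, list the files you intend to touch and why.
```

Whether this actually replaces the need for visible file paths is debatable, but it does move some of the steering out of the terminal and into version-controlled instructions.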
> “Read 3 files.” Which files?
> “Searched for 1 pattern.”
Hit Ctrl-o like it mentions right there, and Claude Code will show you. Or RTFM and adjust Output Styles[1]. If you don't like these things, you can change them.
Like it or not, agentic coding is going mainstream and so they are going to tailor the default settings toward that wider mainstream audience.
Serious question: why do people stick with Claude Code over Cursor? With Cursor's base subscription I have access to pretty much all the frontier models and can pick and choose. Anthropic models haven’t been my go-to in months; Gemini and Codex produce much better results for me.
And they hate that people are using different agents (like opencode) with their subscription - to the extent that they have actively been trying to block it.
With stupidity like this what do they expect? It’s only a matter of time before people jump ship entirely.
If you're not vibecoding your own UX to render CC's output the way you like it, you're not living.
My biggest beef in recent versions is the automatic use of generic built in skills. I hate it when I ask a simple question and it says "OK! Time to use the RESEARCHING_CRAZY_PROBLEM skill! I'll kickstart the 20 step process!" when before it would just answer the question.
You can control this behavior, so it's not a dealbreaker. But it shows a sort of optimism that skills make everything better. My experience is that skills are only useful for specific workflows, not as a way to broadly or generally enhance the LLM.
Anthropic is optimizing for enterprise contracts, not hacker cred. This is what happens when you take VC money and need to sell to Fortune 500s. The "dumbing down" is just the product maturing beyond the early adopter phase.
I'm not sure this is a regression, at least for how I use it: you can hit ctrl+o to expand, and the commands it runs usually show the file path(s) it's using. I'm really paranoid with it, and I didn't even notice this change.
If you've got a solution to the problem of bad decisions made by people who shouldn't be empowered to make them in the first place, you'll solve more than Claude Code.
It's clear we're seeing the same code-vs-craft divergence play out as before, just at a different granularity.
Codex/Claude would like you to ignore both the code AND the process of creating the code.
this has got to be one of the worst comments sections i've ever seen on HN... people shouting past each other... into the void...
It's nerfed to the point that it feels more like a lawyer than a coding assistant now. We argued about a 3rd-party API's ToU for an hour last night. VS Code Copilot executed it within a minute.
Can't you write a tool to display the files being read, using the inotify system calls?
Usually I hate programming but it feels like a nice little tool to create
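A minimal sketch of that tool, assuming Linux and calling the inotify API through ctypes (the class and helper names are my own; a real version would watch recursively and run alongside the agent):

```python
import ctypes
import ctypes.util
import os
import struct

# Linux-only sketch: report which files in a directory get opened,
# independent of whatever the agent's UI chooses to display.
_libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)
IN_OPEN = 0x00000020  # inotify mask bit: "file was opened"
_EVENT_HEADER = struct.Struct("iIII")  # wd, mask, cookie, name_len


class OpenWatcher:
    def __init__(self, directory):
        self.fd = _libc.inotify_init()
        if self.fd < 0:
            raise OSError(ctypes.get_errno(), "inotify_init failed")
        wd = _libc.inotify_add_watch(self.fd, os.fsencode(directory), IN_OPEN)
        if wd < 0:
            raise OSError(ctypes.get_errno(), "inotify_add_watch failed")

    def poll(self):
        """Read one batch of queued events; return the opened file names."""
        buf = os.read(self.fd, 4096)  # blocks until at least one event
        names, offset = [], 0
        while offset < len(buf):
            _, _, _, name_len = _EVENT_HEADER.unpack_from(buf, offset)
            start = offset + _EVENT_HEADER.size
            name = buf[start : start + name_len].rstrip(b"\x00").decode()
            if name:
                names.append(name)
            offset = start + name_len
        return names
```

Run it against your repo directory in a second terminal and you get back exactly the "which files?" signal the UI now hides, at the OS level where no release can take it away.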
Hilarious! Anthropic can just vibe code the boolean flag in.
So much for human replacement.
Map it to a workplace:
- Hey Joe, why did you stop adding code diff to your review requests?
- Most reviewers find it simpler. You can always run tcpdump on our shared drive to see what exactly was changed.
- I'm the only one reviewing your code in this company...
I have noticed, if I hit my session quota before it resets, that Claude gets "sleepy" for a day or so afterward. It's demonstrably worse at tasks...especially complex ones. My cofounder and I have both noticed this.
Our theory is that Claude gets limited if you meet some threshold of power usage.
It was because of the (back then) new Haiku model, maybe 3.5, that I decided to subscribe yearly. More than good enough for a language layer to interact with the MCP server. Now I'm even hesitant to use it.
Everyone, file your own ticket (check the box saying you searched for existing tickets anyway)!
After the Anthropic PMs have to delete their hundredth ticket about this issue, they will feel the need to fix it ... if only to stop the ticket deluge!
claude code is big enough now that it really needs a preview / beta release channel where features like this can be tested against a smaller audience before being pushed out.
as a regular and long-term user, it's frequently jarring being pushed new changes / bugs in what has become a critical tool.
surprised their enterprise clients haven't raised this
The histrionic tone is annoying, but this is an actual feature failure. The utility of seeing which files were being read is that I could redirect it when it went down the wrong path. I use a monorepo, so that's an easy mistake for the software to make.
I find it hard to care about claims of quality degradation, since this has been a firehose of claims that don't map onto anything real and are extremely subjective. I myself have made such a claim in error. I think this is just as ripe for psychological analysis as anything else.
> That’s it. “Read 3 files.” Which files? Doesn’t matter.
It doesn't say "Read 3 files." though - it says "Read 3 files (ctrl+o to expand)" and you press ctrl+o and it expands the output to give you the detail.
It's a really useful feature to increase the signal to noise ratio where it's usually safe to do so.
I suspect the author simply needs to enable verbose mode output.
What a weird hill to die on
Give me my local models so I can write a locally handcrafted tool that does what I want, goddamit.
Since last Friday it’s felt like CC rolled back a year of progress. Not sure what to attribute it to, or whether it's what this article is about, but it _felt_ much dumber.
RooCode is a better version of ClaudeCode than ClaudeCode.
No affiliation, just a fan.
I thought this was going to talk about a nerfed Opus 4.6 experience. I believe I experienced one of those yesterday. I usually have multiple active claude code sessions, using Opus 4.6, running. The other sessions were great, but one session really felt off. It just felt much more dumbed down than what I was used to. I accidentally gave that session a "good" feedback, which my inner conspiracy theorist immediately jumps to a conclusion that I just helped validate a hamstrung model in some A/B test.
What if it’s used with a different harness, e.g. Opencode?
What happens when you press ctrl+o? You get verbose mode?
Another instance of devs being out of touch is them wanting Claude Code to respect AGENT.md: https://github.com/anthropics/claude-code/issues/6235
What’s wrong with you, people? Are you stupid?
As soon as there is a viable alternative to Claude Code, I'm gone after this change. It appears minor on the surface but their response to all the comments tells you everything you need to know. They don't even want to concede at all, or at least give a flag to enable the old behavior, what was deployed and working for many users before. It's a signal that someone, somewhere at Anthropic is making decisions based on ego, not user feedback.
The other fact pattern is their CLI is not open source, so we can't go in and change it ourselves. We shouldn't have to. They have also locked down OpenCode and while there are hacks available, I shouldn't have to resort to such cat and mouse games as someone who pays $200/month for a premium service.
I'm aggressively exploring other options, and it's only a matter of when, not if, one surfaces.
Can we not like, just apply a patch? Or will anthropic be mad if I run their client with my own patch?
Nix makes it easy to package up esoteric patches reliably and reproducibly, and Claude lowers the cost of creating such patches; the only roadblocks I foresee are legal.
We opensourced our claude code ui today: https://github.com/bearlyai/openade
I wanted a terminal feel (dense/sharp) + being able to comment directly on plans and outputs. It's MIT, no cloud, all local, etc.
It includes all the details for function runs and some other nice to haves, fully built on claude code.
Particularly we found planning + commenting up front reduces a lot of slop. Opus 4.6 class models are really good at executing an existing plan down to a T. So quality becomes a function of how much you invest in the plan.
Just use pi, love it!
This comes up from time to time and although my experience is anecdotal, I see clear degradation of output when I run heavy loads (100s of batched/chunked requests, via an automated pipeline) and sometimes the difference in quality is absolutely laughable in how poor it is. This gets worse for me as I get closer to my (hourly, weekly) limits. I am Claude Max subscriber. There’s some shady stuff going on in the background, for sure, from my perspective and experience during my year or so of intense usage.
>Try using it for a few days. We've been using this internally at Anthropic for about a month now, and found that it took people a few days to mentally switch over to the new UI. Once they did, it "clicked" and they appreciated the reduced noise and focus on the tools that actually do need their attention.
Ah, the old "you're holding it wrong."
I don't feel as if any CLI editor has quite nailed UX yet
can't stand not seeing what exactly an ai agent is doing on my machine
I have been using it extensively, and for me it's fine as it is. Also, the title is just false. How did this get into HN frontpage, that's a good question.
> Read 3 files (ctrl+o to expand)
What if you hit ctrl+o?
The "intervening" people mention in these issues: does it stop the execution on the backend, or just cause the client to stop listening to it?
another case of 'devs are out of touch with users basics needs and basic day-to-day usage of our app'
It's not getting dumbed down; AI is getting smarter than you faster than you can keep up or understand, so they have to abstract things and simplify so you can stay connected.
This is why I am a big fan of self-hosting, owning your data and using your own Agent. pi is a really good example. You can have your own tooling and can switch any SOTA model in a single interface. Very nice!
Exact same thing with Codex from 5.2 to 5.3.
There's no conspiracy, though, other than more tokens consumed = more money, and they want that.
My issue with CC is that its interface deliberately obscures the code from you, making you treat it more like a genie you make wishes of rather than making changes and checking the output.
I may not be up to date with the latest & greatest on how to code with AI, but I noticed that as opposed to my more human in the loop style,
At least now we also have a tracker: https://marginlab.ai/trackers/claude-code/
"This is as bad as it's going to be" turning out to be wrong
They could change course, obviously. But how does the saying go again -- it's easier for a camel to go through the eye of a needle, than for a VC funded tech startup to not enshittify.
Well, they already fucked over the community with their "lol not really unlimited" rug-pull.
For those of you who are still suckered in paying for it, why do you think the company would care how they abuse the existing users? You all took it the last time.
Quite frankly, most seasoned developers should be able to write their own Claude Code. You know your own algorithm for how you deal with lines of code, so it's just a matter of converting your own logic. Becoming dependent on Claude Code is a mistake (edit: I might be too heavy handed with this statement). If your coding agent isn't doing what you want, you need to be able to redesign it.
I've been on the other side of this as a PM, and it's tough because you can't always say what you want to, which is roughly: This product is used by a lot of users with a range of use cases. I understand this change has made it worse for you, and I'm genuinely sorry about that, but I'm making decisions with much more information than you have and many more stakeholders than just you.
> What majority? The change just shipped and the only response it got is people complaining.
I'll refer you to the old image of the airplane with red dots on it. The people who don't have a problem with it are not complaining.
> People explained, repeatedly, that they wanted one specific thing: file paths and search patterns inline. Not a firehose of debug output.
Same as above. The reality is there are lots of people whose ideal case would be lots of different things, and you're seeking out the people who feel the same as you. I'm not saying you're wrong and these people don't exist, but you have to recognize that just because hundreds or thousands or tens of thousands of people want something from a product that is used by millions does not make it the right decision to give that thing to all of the users.
> Across multiple GitHub issues opened for this, all comments are pretty much saying the same thing: give us back the file paths, or at minimum, give us a toggle.
This is a thing that people love to suggest - I want a feature but you're telling me other people don't? Fine, just add a toggle! Problem solved!
This is not a good solution! Every single toggle you add creates more product complexity. More configurations you have to QA when you deploy a new feature. Larger codebase. There are cases for a toggle, but there is also a cost for adding one. It's very frequently the right call by the PM to decline the toggle, even if it seems like such an obvious solution to the user.
> The developer’s response to that?
> I want to hear folks’ feedback on what’s missing from verbose mode to make it the right approach for your use case.
> Read that again. Thirty people say “revert the change or give us a toggle.” The answer is “let me make verbose mode work for you instead.”
Come on - you have to realize that thirty people do not in any way comprise a meaningful sample of Claude Code users. The fact that thirty people want something is not a compelling case.
I'm a little miffed by this post because I've dealt with folks like this, who expect me as a PM to have empathy for what they want yet can't even begin to consider having empathy for me or the other users of the product.
> Fucking verbose mode.
Don't do this. Don't use profanity and talk to the person on the other side of this like they're an idiot because they're not doing what you want. It's childish.
You pay $20/month or maybe $100/month or maybe even $200/month. None of those amounts entitles you to demand features. You've made your suggestion and the people at Anthropic have clearly listened but made a different decision. You don't like it? You don't have to use the product.
I really hate this change. I had just given a demo about how Claude Code helped me learn some things by showing exactly what it was doing, and now it doesn't do that any more. So frustrating.
This is the end game I've been Cassandra-ing about since the beginning.
You all are refining these models through their use, and the model owners will be the only ones with access to true models while you will be fed whatever degraded slop they give you.
You all are helping concentrate even more power in these sociopaths.
I've never heard of such a brutal and shocking injustice that I cared so little about! - Zapp
I mean I get it I guess but I'm not nearly so passionate as anyone saying things about this
Add another LLM to extract paths from verbose mode...
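Or skip the second LLM entirely: a plain regex pass over the verbose output is usually enough to recover the file paths. A rough sketch (the pattern is illustrative and not tuned for every path shape):

```python
import re

# Matches absolute-looking Unix paths such as /src/app.py; a rough
# heuristic, deliberately simple rather than exhaustive.
PATH_RE = re.compile(r"(?:/[\w.\-]+)+")

def extract_paths(log_text):
    """Return the unique absolute-looking paths mentioned in a log blob."""
    return sorted(set(PATH_RE.findall(log_text)))
```

Piping the verbose stream through something like this gives a live, deduplicated list of touched files without spending a single extra token.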
As a heavy CC user, I appreciate a cleaner console output. If you really need to know which 3 files CC read, AI-assisted coding agents might not be for you.
Just stop using the damn thing if you don't like it.
Developers are just complainers.
Am I mistaken or is Claude Code essentially an opt-in rootkit?
Here's my honest take on this:
You're mass-producing outrage out of a UX disagreement about default verbosity levels in a CLI tool.
Let's walk through what actually happened: a team shipped a change that collapsed file paths into summary lines by default. Some users didn't like it. They opened issues. The developers engaged, explained their reasoning, and started iterating on verbose mode to find a middle ground. That's called a normal software development feedback loop.
Now let's walk through what you turned it into: a persecution narrative complete with profanity, sarcasm, a Super Bowl ad callback, and the implication that Anthropic is "hiding what it's doing with your codebase" — as if there's malice behind a display preference change.
A few specific points:
The "what majority?" line is nonsense. GitHub issues are a self-selecting sample of people with complaints. The users who found it cleaner didn't open an issue titled "thanks, this is fine." That's how feedback channels work everywhere. You know this.
"Pinning to 2.1.19" is your right. Software gives you version control. Use it. That's not the dramatic stand you think it is.
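For anyone who does want to hold the line at that release: a minimal sketch, assuming the CLI is installed from npm under the `@anthropic-ai/claude-code` package name (adjust if your install channel differs). An exact version string in `package.json`, with no `^` or `~` range prefix, keeps `npm install` from silently pulling a newer release:

```json
{
  "devDependencies": {
    "@anthropic-ai/claude-code": "2.1.19"
  }
}
```

A lockfile committed alongside it makes the pin reproducible across machines.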
The developers responding with "help us understand what verbose mode is missing" is them trying to solve the problem without a full revert. You can disagree with the approach, but framing genuine engagement as contempt is dishonest.
A config toggle might be the right answer. It might ship next week. But the entitlement on display here isn't "give us a toggle" — it's "give us a toggle now, exactly as we specified, and if you try any other approach first, you're disrespecting us." That's not feedback. That's a tantrum dressed up as advocacy.
You're paying $200/month for a tool that is under active development, with developers who are visibly responding to issues within days. If that feels like disrespect to you, you have a calibration problem.
With kind regards, Opus 4.6
Hey, Boris from the Claude Code team here. I wanted to take a sec to explain the context for this change.
One of the hard things about building a product on an LLM is that the model frequently changes underneath you. Since we introduced Claude Code almost a year ago, Claude has gotten more intelligent, it runs for longer periods of time, and it is able to use more tools more agentically. This is one of the magical things about building on models, and also one of the things that makes it very hard. There's always a feeling that the model is outpacing what any given product is able to offer (i.e., product overhang). We try very hard to keep up, and to deliver a UX that lets people experience the model in a way that is raw and low level, and maximally useful at the same time.
In particular, as agent trajectories get longer, the average conversation has more and more tool calls. When we released Claude Code, Sonnet 3.5 was able to run unattended for less than 30 seconds at a time before going off the rails; now, Opus 4.6 1-shots much of my code, often running for minutes, hours, and days at a time.
The amount of output this generates can quickly become overwhelming in a terminal, and is something we hear often from users. Terminals give us relatively few pixels to play with; they have a single font size; colors are not uniformly supported; in some terminal emulators, rendering is extremely slow. We want to make sure every user has a good experience, no matter what terminal they are using. This is important to us, because we want Claude Code to work everywhere, on any terminal, any OS, any environment.
Users give the model a prompt, and don't want to drown in a sea of log output in order to pick out what matters: specific tool calls, file edits, and so on, depending on the use case. From a design POV, this is a balance: we want to show you the most relevant information, while giving you a way to see more details when useful (i.e., progressive disclosure). Over time, as the model continues to get more capable -- so trajectories become more correct on average -- and as conversations become even longer, we need to manage the amount of information we present in the default view to keep it from feeling overwhelming.
When we started Claude Code, it was just a few of us using it. Now, a large number of engineers rely on Claude Code to get their work done every day. We can no longer design for ourselves, and we rely heavily on community feedback to co-design the right experience. We cannot build the right things without that feedback. Yoshi rightly called out that often this iteration happens in the open. In this case in particular, we approached it intentionally, and dogfooded it internally for over a month to get the UX just right before releasing it; this resulted in an experience that most users preferred.
But we missed the mark for a subset of our users. To improve it, I went back and forth in the issue to understand what problems people were hitting with the new design, and shipped multiple rounds of changes to arrive at a good UX. We've built in the open in this way before, e.g., when we iterated on the spinner UX, the todos tool UX, and many other areas. We always want to hear from users so that we can make the product better.
The specific remaining issue Yoshi called out is reasonable. PR incoming in the next release to improve subagent output (I should have responded to the issue earlier, that's my miss).
Yoshi and others -- please keep the feedback coming. We want to hear it, and we genuinely want to improve the product in a way that gives great defaults for the majority of users, while being extremely hackable and customizable for everyone else.