Amen! The funny thing is this sounds like a “get off my lawn!” rant, and yet it’s so true if you’ve been around long enough to have seen a few tech cycles. But this also only seems to matter if you are NOT in a “move fast and break stuff” environment.
Ironically, the more you strip out unnecessary layers of abstraction and frameworks, the less likely you are to need a debugger to trace code: once you know a bug exists, you can usually just see the cause in the source. Bugs you haven't encountered yet are still completely hidden, though :D
Abstractions would be fine, if they were solid code. Code that I don't have to write, that I don't have to debug, that I don't have to maintain, is wonderful - if it works.
The problem isn't abstractions. The problem is that too many of the abstractions don't work - they have bugs. (Yes, I know, all code has bugs. I even found a bug in the STL once. Still, that's the only bug I've seen in it in 30 years. There are other abstractions that are... less solid.)
Everyone knows it, yet nobody would waste 5 minutes of their time writing a new function when they can import it instead, along with another 50 dependencies.
As a personal anecdote, the spaghetti code problem has been thoroughly fixed. It's been replaced with much worse spaghetti microservices.
The market figured out it's cheaper to hire more developers and pile on more crap than to make existing systems more observable.
I still yearn for the days where I could run the whole application in my IDE and step through it with a debugger line by line.
Or even just insert print("foo") like the barbarian I am.
Now I can't attach the IDE because everything is 50 microservices. I can't print-debug because the logging system requires fully formed JSON output, and it's clustered and distributed and billed by line, and I need to use a Web UI to see my own application's logs ffs.
Get off my lawn.
Couldn't agree more. And you didn't even touch on the pain and misery caused when an amateur-hour shop decides to do microservices like big tech.
This screed reads like someone who doesn't do testing. Or who only does end-to-end tests.
With both decent testing and decent monitoring/logging/o11y at each layer, no one should ever have to debug the whole stack at once and can instead focus on the layer with the problem.
I wish we could put the old days of having a tightly-coupled stack of untested cruft behind us.
I find it a bit amusing to think of the evolution of things, in basic terms... within a program, we have these things called "functions", which have "prototypes" / declarations that specify the number and types of arguments (using the C terminology).
We didn't always have those things... in the bad old days, there were just "subroutines" and gotos (without a call stack), and functions in C (pre-C89) with no prototype that you could just call with whatever arguments you felt like. If the caller and callee agreed on the types, great; if not... maybe it worked, or maybe it crashed, who knows, depending on the nature of the argument mismatch (too many vs. too few, the types involved, etc.).
So, prototypes and functions with a stack were invented to solve a problem, and meant that if the program compiled and linked, then at least all the callers and callees agreed on the prototypes and argument types, and things might work, and if not, then there would be a core dump or a stack trace where we could look at the stack and that would tell us quite a lot about the system/program execution state.
Fast forward to today... and now we throw a lot of those inventions out the window and use "microservices" or "REST" or JSON etc, where there is no prototype that the caller and callee agree on, it's all just unstructured or semi-structured (like the bad old, pre-prototype days in C...), and if the caller and callee don't agree on some vague notion of what the parameters are then chaos ensues, and there is no one place to look at to debug (like a core or stack) because the system state is now spread across many machines and so there are lots of logs to correlate (at best).
A lot of people even describe this as a selling point... "yeah, it's great that we have microservices and loose coupling so we can upgrade the different parts separately!". If a strict schema is in use (like XSD, WSDL, protobufs, etc.), then it can almost work, because as long as everybody agrees on the schema, sure, the individual parts can be upgraded separately. Oh, but... then how do we change the schema? Oops, now everything needs to be re-released all at once and we're back to where we started.
If the schema/protocol never changes, then it can work... and indeed that's what IP, HTTP, etc. are: set in stone for decades, so the clients/servers can change freely and that's fine. But if you have an elaborate distributed system with either a loose schema or a strict (and therefore necessarily often-changing) schema, which are the popular choices, then you're screwed.
At least in the classical world of C/C++/Java or whatever, we admit to ourselves that if the function prototypes, arguments, etc. need to change, then that means a recompile, relink, and restart of the whole system, not just some parts of it.
Stop noticing things!
Fact brother.
Yep. Ongoing problem for decades. And now with “AI” to make it more ridiculous. I look forward to retiring soon and never looking at code again.