You are not a dinosaur. I would argue that the great majority of engineers at our org do it the 'old fashioned' way.
My own experience with LLM-based coding has been wasted hours of reading incorrect code for junior-dev-grade tasks, despite multiple rounds of "this is syntactically incorrect, you cannot do this, please re-evaluate based on this information" / "Yes, you are right, I have re-evaluated it based on your feedback", only for it to do the same thing again. My time would have been better spent either 1) doing this largely boilerplate task myself, or 2) assigning and mentoring a junior dev to do it, as they would only have required maybe one round of iteration.
Based on my experience with other abstraction technologies like ORMs, I look forward to my systems being absolutely flooded with nonperformant garbage merged by people who don't understand either what they are doing, or what they are asking to be done.
I see new engineers adopting AI much faster than the older ones who have been doing all the coding themselves. I very often see senior engineers turning off their Copilot after a week out of frustration because it doesn't work the way they want it to, but they aren't even trying; they expect it to work 100% on their first try, I guess. They spend months learning new technologies to the best of their ability, but they won't give AI a chance? They think using AI will make them less skilled, but it is not true; it will make them more productive.
My company recently made it mandatory to use Cursor and my motivation has cratered.
I'm looking into alternatives because I have zero interest in having LLM tools dictated to me because some MBA exec is sold on the hype.
I find it impossible to get into flow with the autocomplete constantly interrupting me, and the code it generates in chat mode sucks.
"Am I a dinosaur?" - I think you're asking the most important question for our craft in 2025. Thank you.
I lead a team building Markhub, an AI-native workspace, and we have this debate internally all the time. Our conclusion is that there are two types of "thinking" in programming:
"Architectural Thinking": This is the joy you're talking about. The deep, satisfying process of designing systems, building mental models, and solving a core problem. This is the creative work, and an AI getting in the way of this feels terrible. We agree that this part should be protected.
"Translational Thinking": This is the boring, repetitive work. Turning a clear idea into boilerplate code, writing repetitive test cases, summarizing a long thread of feedback into a list of tasks, or refactoring code. This is the work we want to delegate.
Our philosophy is that AI should not replace Architectural Thinking; it should eliminate Translational Thinking so that we have more time for the joyful, deep work.
For your mental model problem, our solution has been to use our AI, MAKi, not to write the core logic, but to summarize the context around the logic. For example, after a long discussion about a new feature, I ask MAKi to "summarize this conversation and extract the action items." The AI handles the "what," freeing me up to focus on the "how."
You are not a dinosaur. You are protecting the part of the work that matters most.
Agree that it is frustrating and not as satisfying to work using LLMs. I found myself on a plane recently without internet and it was great coding with no LLM access. I feel like we will slowly figure out how to use them in a reasonable way, and it will likely involve doing smaller and more modular work. I disabled all tab auto-suggestions because I noticed they throw me off track all the time.
What do you want to hear about? Doing things the same old way continues to work in the same old way. I may be a dinosaur, but I hear La Brea is nice this time of year.
I've tried new things occasionally, and I keep going back to a text editor and a shell window to run something like Make. It's probably not the most efficient process, but it works for everything, and there's value in that. I have no interest in a tool that will generate lots of code for me that may or may not be correct and that I'll have to go through with a fine-tooth comb to check; I can personally generate lots of code that may or may not be correct, and if that fails, I have run some projects as copy-paste snippets from Stack Overflow until they worked. That's not my idea of a good time, but I think it was better than spending the time to understand the many layers of OSX when all I wanted to do was get a pixel value from a point on the screen into AppleScript and I didn't want to do any other OSX work ever (and I haven't).
A very large majority of the devs that I know and work with are still doing it the old way, or at least 90% the old way.
I don’t use LLMs either. I find them unethical and cumbersome.
I have been thinking of writing ebooks on retrocomputing legacy software like PowerBASIC 3.5 and the like: run them in DOSBox/DOSBox-X and create DOS programs. People still use DOS but have no idea how to write programs for it. All of this was around way before LLMs came out.
Most of the engineers I know played around with LLMs but are still doing their work without one. Myself, I sometimes pop into the Gemini webapp to ask a question if search isn't going well, and it helps about 25% of the time.
I used Copilot for about a week before turning it off out of frustration; immensely distracting, and about 50% of what it wanted to autocomplete was simply wrong.
Yes, because AI can really ruin your design philosophy and your approach to a problem that has been worked on for a decade, especially when you are trying a different way.
I do, primarily when I'm refactoring something. In those scenarios, I know exactly what I want to change and the outcome is code in a style that I feel is most understandable. I don't need anything suggesting changes (I actually don't have tab completion enabled by default, I find it too distracting, but that is a different topic) because the changes already exist in my head.
I've been without work for over a year now, so I'm still programming the classic way and using AI chats in the browser. When I work again, I'll use them. I think the best thing to do is separate programming for work and programming for pleasure.
I feel the same way. Vibe coding has taken away the joy of programming for me, but there’s no denying that it has indeed improved my efficiency. So now, it depends on the situation—if it’s just for fun, I’ll code it myself.
Our company forbids AI, although I see my manager frequently popping into ChatGPT for syntax stuff, and I low-key use Google Search's AI functionality to bypass that requirement (not brazen enough to just use GPT).
There are dozens of us!
Using an LLM to directly generate code makes writing code feel like reviewing code, and thereby kills the joy in solving problems with software. I don't think people trying to learn are doing themselves any favors either.
I work with grad students who write a lot of code to analyze data. There is an obvious divide in comprehension between those who genuinely write their own programs vs those who use LLMs for bulk code generation. Whether that is correlation or causation is of course debatable.
In one sense, blindly copying from an LLM is just the new version of blindly copying from Stack Overflow and forum posts, and it seems to be about the same fraction of people either way. There isn't much harm in reproducing boilerplate that's already searchable online, but in that situation it puts orders of magnitude less carbon in the atmosphere to just search for it traditionally.
If you mean "AI" in the sense of reasoning LLM, than it is generally prohibited given the industrial scale plagiarism, security leaks, and logical inaccuracies.
For the philosophical insights into ethics... we may turn to fiction =3
I am a part-time coder, in that I get paid for coding and some of my code is actually used in production. I don't use LLMs or any AI in my coding, whatsoever. I've never tried LLM or AI coding, and I never will, guaranteed. I hate AI.
I agree with you, 100%. I like typing out code by hand. I like referring to the Python docs, and I like the feeling of slowly putting code together and figuring out the building blocks, one by one. In my mind, AI is about efficiency for the sake of efficiency, not for the sake of enjoyment, and I enjoy programming.
Furthermore, I think AI embodies the model of the human being as a narrowly-scoped tool who gets converted from creator into a replaceable component, whose only job is to provide conceptual input into design. It sounds good at first ("computers do the boring stuff, humans do the creative stuff"), but, and it's a big but: as an artist too, I think it's absolutely true that the creative stuff can't be separated from the "boring" stuff, and when looked at properly, the "boring" stuff can actually become serene.
I know there's always the counterpoint: what about other automations? Well, I think there is a limit past which automations give diminishing returns and become counterproductive, so we need to be aware of all automations. But AI is the first sort of automation that is categorically always past the point of diminishing returns, because it targets exactly the sort of cognitive work that we should be doing ourselves.
Most people here disagree with me, and frequently downvote me too on the topic of AI. But I'll say this: in a world where efficiency and productivity have become doctrine, most people have also been converted into thinking only about the advancement of the machine, and have lost the essence of soul needed to enjoy that which is beyond mere mental performance.
Sadly, people in the technical domain often find emotional satisfaction in new tools, and that is why anything beyond the technical is often derided by those in tech, much to their disadvantage.
I don't use any AI code editor. Not because it isn't useful, but because the user experience is so bad. I typically already have the solution at hand; I don't need an AI to give me an answer, I need it to implement the solution I have.
But not using AI at all is also idiotic right now; at the very least you should be using it for autocomplete, since in the _vast_ majority of cases any current leading LLM will give you _far more_ than not using it (within the scope of autocomplete).
Surely you don't find writing boilerplate fun though?
Coding agents still give you control (at least for now) and are like having really good autocomplete. Instead of using Copilot to complete a line or two, with something like Cursor you can generate a whole function or class based on your spec, then refine and tweak the more nuanced and important bits where necessary.
For example, I was doing some UI stuff the other day, and in the past it would have taken a while just to get a basic page layout together when writing it yourself, but with a coding assistant I generated a basic page by asking it to use an image mock-up, a component library, and some other pages as references. Then I could get on and do the fun bits of building the more novel parts of the UI.
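To give a rough idea of what I mean, here's a minimal sketch of the kind of scaffolding I'm happy to let the assistant generate (assuming a React + TypeScript stack; the component and prop names are made up for illustration, not taken from my actual project):

    // The kind of layout scaffolding I let the assistant write.
    // Assumes React + TypeScript; names here are illustrative only.
    import React from "react";

    type DashboardPageProps = {
      userName: string;
      sections: string[];
    };

    // Boring structure: header, sidebar nav, empty content area.
    // The novel, product-specific widgets go inside <main>, written by hand.
    export function DashboardPage({ userName, sections }: DashboardPageProps) {
      return (
        <div className="page">
          <header className="page-header">
            <h1>Welcome, {userName}</h1>
          </header>
          <nav className="sidebar">
            <ul>
              {sections.map((section) => (
                <li key={section}>{section}</li>
              ))}
            </ul>
          </nav>
          <main className="content">{/* hand-written parts go here */}</main>
        </div>
      );
    }
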
I mean if it's code you're working on for fun then work however you like, but I don't know why someone would employ a dev working in such an inefficient way in 2025.
More AI hype.
Yes.
We've mostly banned the use of AI coding assistants for junior-level devs/engineers, with the exception of certain uses. Essentially, they need to demonstrate that their use case fits with what LLMs are good at (i.e., in-distribution, tedious, verifiable tasks).
Anecdotally, what we've found is that those using AI assistants show superficial improvements in productivity early, but they learn at a much slower rate and their understanding of the systems is fuzzy. It leads to lots of problems down the road. Senior folks are also susceptible to these effects, but to a lesser degree; we think it's because most of their experience is old-fashioned "natty" coding.
In a way, I think programmers need to do natty coding to train their brains before augmenting/amputating it with AI.