Why I Won't Use AI

  • The part about productivity also reminds me that we still often pay for handmade goods, even when the industrialized version is cheap and common.

    For example, if you enjoy cooking, or it is your job, you might be willing to pay for an artisan knife, even though you can buy a good knife for a few bucks. Same with clothes: clothing is extremely industrialized, but there are still plenty of tailors making a living from bespoke garments.

    We might do it for no other reason than an appreciation of the craft, but a lot of the time it is driven by a desire for high quality (and/or customization).

    This makes me wonder if one day we will see artisan software developers (the idea of software craftsmanship is already here, after all). LLMs and co. are good at outputting a lot of code very quickly, but they are often not good at producing quality code. And I sincerely doubt that this will get much better; it seems to be more or less a consequence of the core technology behind LLMs. So unless we see a significant paradigm shift, I don't think it will improve much more. It already feels like we've reached the point of diminishing returns on this specific tech.

    So what about making smaller but better software, for clients who want nothing but the utmost quality? Just like bladesmiths, we will have a bunch of fancy new tools at our disposal to code better and faster, but the whole point of the exercise will still be to stay in control of the process and not hand all the decision-making power to a machine.

  • The author makes a strong case against the current use of AI, and I agree that today’s tools can’t replace the deep thinking, creativity, and intuition that good programming requires. At best, they’re sophisticated parrots: useful in some ways, but fundamentally lacking understanding. Maybe that will change down the road.

    That said, there’s another angle worth considering. AI has introduced a new kind of labor: prompt engineers.

    These aren’t traditional programmers, but they interact with models to produce code-like output that resembles the work of software developers. It’s not perfect, and it raises plenty of concerns, but it does represent a shift in how we think about code generation and labor.

    Regardless of which side of the fence you're on, I think we can all agree that this paradigm shift is happening, and arguments like the author's raise valid and important concerns.

    At the same time, there are also compelling reasons some people choose to embrace AI tools.

    In my opinion, the most crucial piece of all this is government policy. Policymakers need to get more involved to ensure we're prepared for this fast-moving and labor-disruptive technology.

    Just my two cents and thanks for sharing.

  • > I enjoy learning. And learning is a difficult chore. There is no golden road to knowledge. I am constantly frustrated when my assumptions and theories are thwarted. I can’t get the program to compile. The design of this system I was sure was right is missing a key assumption I missed. I bang my head against the wall, I read references, and I ask people questions. I try a different approach and I verify my results. Finally, eventually, the problem gives way and the solution comes out.

    > It is during the struggle that I learn the most and improve the most as a programmer. It is this struggle I crave. It is what I enjoy about programming.

    This explains the whole post to me. First of all, this is an area where using an AI can streamline design considerations before you ever get to the head-banging-against-walls stage. Since that struggle is the very aspect the writer enjoys, there's no saving them: they have decided that they like things as they are, without AI. The rest of the cited reasons are post-decision justifications.

  • I tried using AI last week for some lazy research. The results were highly questionable. I asked it to cite sources; it did, but those sources were even more questionable. AI has no concept of objective reality. Sometimes it is glaringly obvious that it's just scraping 4chan and Facebook and barfing up trash.

  • This reminds me of a convo I had a few days ago with my mid-20s daughter, who is a dev at a large, successful tech company. When I asked her how the AI rollout was going, she said she hated it, for three reasons:

    1) It's being crammed down their throats from "up high" without real thought being put into it; "AI everything" seems to be some kind of executive mantra. That's a common refrain in companies.

    2) AI is taking away the aspect of the job she enjoys -- the reason she switched to dev in the first place (her degree is in chemE) -- which is to write code, and replacing it with the aspect of the job she dislikes, which is PQA. So now, instead of being a developer -- a role she worked hard to get to and is quite good at -- she's being reduced to a QA person, going over agent-generated code (generated by her or, more likely, by others on her team; she's one of the senior devs). It's sapping her creativity and inspiration, and pretty soon she's just going to be phoning it in. It's a shame. It saddened me to hear this, and it makes me think about how this might affect society in a negative way. It's not that AI itself is the core problem; the way that companies are "implementing" it _is_ a problem.

    3) AI doesn't actually do the mundane, time-consuming, soul-sucking tasks that she would like to offload to it. This has more to do with how it's integrated into the company, their code base, etc. It's like the people who say "I want AI to do my dishes so I can write letters, not write my letters so I can do the dishes."

    Some people just want to see results and don't care about the process. To them, writing an LLM prompt and figuring out the code themselves are the same thing. But for others, the journey is the goal. It's like how some people still want to craft furniture by hand when they could just have a machine spit it out.

  • I did CS and have been writing code for 20+ years. For the past two years I've been helping clients adopt LLMs into their business logic. There are lots of good use cases.

    But, I can recognize when it makes mistakes due to my years of manual learning and development. If someone uses LLMs through their entire education and life experience, will they be able to spot the errors? Or will they just keep trying different prompts until it works? (and not know why it works)

    It's like the auto-correct spell checker. I can't spell lots of words; I just get close enough to the right spelling and let it fix things for me. I'm a bit nervous about developing the same handicap with LLMs in thinking, logic, and code.

    Fully aware I can be dismissed as a dinosaur.

  • > they were protesting the fact that capital owners were extracting the wealth from their labour with this new technology and weren’t reinvesting it to protect the labourers displaced by it.

    This capital versus labor dynamic is very common and an interesting way to frame things. But suppose you do take the view that all the wealth accrues to capital. What are the implications?

    One implication would be to skip college and invest that money in the stock market instead. Why invest in labor when capital grows faster? I don't think anyone with this mindset would actually offer that advice, though; they'd rather dwell on the idea that they are laborers by design with no hope of ever breaking out of it. Sure enough:

    > I’m a labourer. A well-compensated one with plenty of bargaining power, for now. I don’t make my living profiting from capital. I have to sell my time, body, and expertise like everyone else in order to make the profits needed to support me and my family with life’s necessities.

    Another point, regarding productivity:

    > Did you know there’s no conclusive evidence that static type checking has any effect on developer productivity?

    We don't need "conclusive evidence" for everything. You see this a lot with ridiculous claims. I don't need some laboratory-controlled environment to prove that static type checking makes me more productive. How do I know? Because I've used both statically typed and non-statically typed languages, and I'm more productive in the statically typed ones. I remember the first time I introduced Flow, a static type checker, to my JavaScript project: the number of errors it uncovered was really mind-boggling.

    A lot of people agree with me: statically typed languages are popular, and dynamically typed languages like Python keep adding typing tooling. It's a test of the market. People like these tools, presumably because the tools make them more productive. And even if they don't, people like using them for whatever reason, and that translates to something, right?

    This scientism around everything is exhausting.

  • Yes, I do agree with most of the points about AI's side effects, but it's already cheap, accurate, and reliable enough to be used for various monotonous applications, with a fine-tuned context/pretrained model plus tooling (web and local DB search in particular), without breaking the bank. A human can intervene to guide, add extra context, or even continue working from what an AI model has come up with.

    It's not about "good" or "bad", we have to live with it. And as always, those who don't adapt - vanish.

  • Lots of nice thoughts that I agree with. But there is a lot of value creation in AI as well, beyond building things.

    For example, how can doctors save time and spend more time one-on-one with patients? Automate the time-consuming, business-y tasks and that’s a win. Not job loss but potential quality of life improvement for the doctors! And there are many understaffed industries.

    The balancing point will be reached. For now we are in early stages. I’m guessing it’ll take at least a decade or two for THE major breakthrough—whatever that may be. :)

  • What's up with all the flagging these days? This shouldn't be flagged.

  • Why was this flagged?

  • The first part, about labor, is a bit incoherent. It bemoans the accumulation of wealth by the capital class, then the part on profitability claims that the capital class is actually losing money by investing in AI. It also seems to imply that laborers are entitled to ongoing demand for their services. Imagine if your car mechanic wanted to charge you whether or not your car needed work. The lump of labor fallacy is alive and well.

    Some good points in the rest of the post.

  • "Why I won't use moveable type."

    shrug

  • Some people are just committed to the idea that using AI is bad. This article didn't succeed in presenting a good argument as to why.

  • A Software Engineer Luddite is an interesting person.

  • It's not our decision to make; we are powerless.

  • Muddled thinking galore. This write-up is just a scream into the void out of frustration... On Luddites: "They were protesting the fact that capital owners were extracting the wealth from their labour." This makes no sense; it would have been just as true before the mechanized looms as after. They were protesting because they were out of a job.

    "That wealth is going into the hands of the ultra-wealthy." It's going to the people that someone want to give their money to. I hate Microsoft with a passion, but I don't think Bill Gates went around stealing money from poor people to become rich; he got it from businesses and other high-wealth individuals. And if you really cared that much about large corporations becoming rich, you should use the hell out of their large services since they are reported to actually be losing money on heavy users.

    And if you don't want to give anybody any money, just don't give them any. Use a free open-weight/source model.

    On Productivity: the author claims there is no increase in productivity. I am strongly starting to believe that the people who are unable to increase their productivity with AI are those who are EXTREMELY rigid and score low on creativity. Even if all you ever wanted to do was fiddle with some extremely esoteric, complex area of software development, like microkernel optimization, and even if an AI were completely and truly useless in assisting you with that work due to its cutting-edge, esoteric nature, a person's inability to use AI to be more productive is STILL baffling.

    YES, it might not be helpful for the thing you spent 25 years becoming an expert on. But are there truly NO other frameworks, tools, software patterns, or utility functions in the world that you could EVER use FOR ANYTHING? What about having the AI throw together a tool that scans an old PDF with OCR and extracts the specific information you need? Or a simple webpage to host a sign-up form for an event you are organizing at your workplace? Never mind the specific examples: you DON'T know even a fraction of what is out there, and not even having the interest and ability to use AI to make something simple but useful (just beyond your own capabilities, but easy for an AI agent) shows a massive lack of creativity.

    With regard to productivity: I am the CTO of a small firm, with about 8 people in our department. And holy hell, it's obvious that AI-assisted coding is helpful (if you are allowed to give anecdotes, so am I).

    On Enjoyment: The argument seems to be that programming challenges drive the author's growth, with a focus on refactoring and simplifying code. They criticize AI tools for lacking human understanding and find rote coding unengaging but essential for learning patterns and improving skills. Fine. Then do that, but admit that you are self-centered and not primarily focused on creating value for others. Find a job where you can do what you want, or work for yourself. But don't whine at others if they expect to get usefulness out of you when you want money from them.

    On Ethical Training: Do you apply the same level of ethical demand to all areas of your life, or do you only focus on it when it regards "your thing"? Pharmaceutical companies jack up drug prices, hoard patents to block generics, and let people die for profit—literally life-and-death stakes. Oil giants lobby to keep pumping carbon while the planet chokes. Private equity firms gut healthcare systems, firing nurses to pad their margins. Every car uses cobalt from the DRC, mined with child labor. I am willing to bet that most people railing against AI are happily buying modern electronics, driving cars, buying Viagra instead of losing weight, and overconsuming at a rate that dwarfs most of the people on earth. Those are direct, tangible harms—way worse than some AI bot cribbing your code snippets. But nah, you’re here writing manifestos about LLMs. So why the selective outrage? Because it annoys you that people are using modern tools; because you’d rather the world just stick with an axe for wood chopping, since that’s what you know and enjoy.

    On the Environmental Impact: See above. And: are you this worried about the environment when you shop for houses, cars, and other stuff you don't need? Or only when the environment suffers AND it's related to something you don't like?

    There is no singularity: So what? Should I not have used an AI agent to help me create an XML-to-JSON parser for a custom application just 1 hour ago because there is no singularity?
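    (For what it's worth, here is a minimal sketch of what such a parser can look like, using only Python's standard library. This is an illustration, not the actual tool mentioned above.)

```python
import json
import xml.etree.ElementTree as ET

def xml_to_dict(elem):
    """Recursively convert an ElementTree element into plain Python data."""
    node = dict(elem.attrib)  # attributes become keys
    children = list(elem)
    if children:
        # Group repeated child tags into lists
        for child in children:
            node.setdefault(child.tag, []).append(xml_to_dict(child))
    elif elem.text and elem.text.strip():
        # A leaf with text and no attributes collapses to its text
        if not node:
            return elem.text.strip()
        node["text"] = elem.text.strip()
    return node

xml = "<order id='7'><item>apple</item><item>pear</item></order>"
print(json.dumps(xml_to_dict(ET.fromstring(xml))))
# → {"id": "7", "item": ["apple", "pear"]}
```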

    What Makes Us Better Programmers? What if you become a worse programmer because you are not branching out with AI into new tools and functionality? What if you are prevented from deeper learning because you spend too much time on small details? You could spend your whole life studying and asymptotically improving at chess. But don't DEMAND that people recruit you for it.

    It's easy to demonize firms. But I am the CTO of a small health-software company. We genuinely try to make psychologists' lives easier and more productive with the stuff we make. We don't make much money, and it's really hard sometimes. But I think we are making a difference to some people. And being judged when I say that we should expect productivity and usefulness from the people we hire pisses me off to no end. We don't have enough money to pay you to fiddle around with whatever YOU WANT. People, use all the tools we have to create something useful!

    And by the way, I have also been a clinical psychologist for many years, so I can say with some experience: the whole post by the author can be summarized as a tour de force of motivated reasoning.

    /rant
