There's some whistling past the graveyard in these comments. "You still need humans for the social element...", "LLMs are bad at debugging", "LLMs lead you astray". And yeah, there's lots of truth in those assertions, but since I started playing with LLMs to generate code a couple of years ago they've made huge strides. I suspect that over the next couple of years the improvements won't be quite as large (Pareto Principle), but I do expect we'll still see some improvement.
Was on r/fpga recently and mentioned that I'd had a lot of success getting LLMs to code up first-cut testbenches that let you simulate your FPGA/HDL design a lot quicker than if you wrote those testbenches yourself, and my comment was met with lots of derision. But they hadn't even given it a try before concluding that it just couldn't work.
Really good coders (like him) are better.
Mediocre ones … maybe not so much.
When I worked for a Japanese optical company, we had a Japanese engineer who was a whiz. I remember him coming over from Japan and fixing some really hairy communication bus issues. He actually quit the company a bit after that, at a very young age, and was hired back as a contractor, which was unheard of in those days.
He was still working for them, as a remote contractor, at least 25 years later. He was always on the “tiger teams.”
He did awesome assembly. I remember when the PowerPC came out, and “Assembly Considered Harmful” was the conventional wisdom, because of pipelining, out-of-order execution, prefetching, and all that.
His assembly consistently blew the doors off anything the compiler did. Like, by orders of magnitude.
The thing everyone forgets when talking about LLMs replacing coders is that there is much more to software engineering than writing code; in fact, writing code is probably one of the smaller aspects of the job.
One major aspect of software engineering is social: requirements analysis and figuring out what the customer actually wants, because they often don't know themselves.
If a human engineer struggles to figure out what a customer wants and a customer struggles to specify it, how can an LLM be expected to?
“Better” is always task-dependent. LLMs are already far better than me (and most devs, I’d imagine) at rote things like getting CSS syntax right for a desired effect, or remembering the right way to invoke a popular library (e.g. fetch).
These little side quests used to eat a lot of my time and I’m happy to have a tool that can do these almost instantly.
Companies that leverage LLMs and AIs to let their employees be more productive will thrive.
Companies that try to replace their employees with LLMs and AIs will fail.
Unfortunately, all that's in the long run. In the near term, some CEOs and management teams will profit from the short term valuations as they squander their companies' future growth on short-sighted staff cuts.
It is quite heartening to see so many people care about "good code". I fear it will make no difference.
The problem is that the software world got eaten up by the business world many years ago. I'm not sure at what point exactly, or if the writing was already on the wall when Bill Gates wrote his open letter to hobbyists in 1976.
The question is whether shareholders and managers will accept less good code. I don't see how it would be logical to expect anything else; as long as the profit lines go up, why would they care?
Short of some sort of cultural pushback from developers or users, we're cooked, as the youth say.
All the world's smartest minds are racing towards replacing themselves. As programmers, we should take note and see where the wind is blowing. At least don't discard the possibility, and rather be prepared for the future. Not to sound like a tin-foil hatter, but the odds of achieving something like this increase by the day.
In the long term (post AGI), the only safe white-collar jobs would be those built on data which is not public i.e. extremely proprietary (e.g. Defense, Finance) and even those will rely heavily on customized AIs.
The context required to write real software is just way too big for LLMs. Software is the business, codified. How is an LLM supposed to know about all the rules in all the departments plus all the special agreements promised to customers by the sales team?
Right now the scope of what an LLM can solve is pretty generic and focused. Anytime more than a class or two is involved or if the code base is more than 20 or 30 files, then even the best LLMs start to stray and lose focus. They can't seem to keep a train of thought which leads to churning way too much code.
If LLMs are going to replace real developers, they will need to accept significantly more context, they will need a way to gather context from the business at large, and some way to persist a train of thought across the life of a codebase.
I'll start to get nervous when these problems are close to being solved.
Last night I spent hours fighting o3.
I never made a Dockerfile in my life, so I thought it would be faster just getting o3 to point to the GitHub repo and let it figure out, rather than me reading the docs and building it myself.
I spent hours debugging the file it gave me... It kept hallucinating things that didn't exist, removing or rewriting other parts, and making other big mistakes, like not understanding the difference between python3 and python and the intricacies involved.
Finally I gave up and Googled some docs instead. Fixed my file in minutes and was able to jump into the container and debug the rest of the issues. AI is great, but it's not a tool to end all. You still need someone who is awake at the wheel.
From my limited experience (former coder, now management, but I still get to code now and then), I've found them helpful but also intrusive. Sometimes when it guesses the code for the rest of the line and the next few lines, it's going down a path I don't want to go, but I still have to take time to scan it. Maybe it's a configuration issue, but I'd prefer it didn't put code directly in my way, or that it were off by default and only showed when I hit a key combo.
One thing I know is that I wouldn't ask an LLM to write an entire section of code or even a function without going in and reviewing.
Human coders are necessary because writing code is a political act of deciding between different trade-offs. antirez's whole post is explaining to Gemini what the trade-offs even were in the first place. No analysis of a codebase in isolation (i.e. without talking to the original coders, and without comments in the code) can distinguish between intentional prioritization of certain trade-offs or whether behavior is unintentional / written by a human in an imperfect way because they didn't know any better / buggy.
LLMs will never be able to figure out for themselves what your project's politics are and what trade-offs are supposed to be made. Even the ultimate model will still require a user to explain the trade-offs in a prompt.
In many cases developers are a low expectation commodity. In those cases I strongly believe humans are entirely replaceable by AI and I am saying that as somebody with an exceptionally low opinion of LLMs.
Honestly though, when that replacement comes there is no sympathy to be had. Many developers have brought this upon themselves. For roughly the 25-year period from 1995 to 2020, businesses tried to turn developers into mindless commodities that are straightforward to replace. Developers have overwhelmingly encouraged this, and many still do. These are the people who hop employers every 2 years and cannot do their jobs without lying on their resumes or relying completely on a favorite framework.
The main thing LLMs have helped me with always comes back to tasks that require bootstrapping / Googling:
1) Starting simple codebases 2) Googling syntax 3) Writing bash scripts that utilize Unix commands whose arguments I have never bothered to learn in the first place.
I definitely find time savings with these, but the esoteric knowledge required to work on a 10+ year old codebase is simply too much for LLMs still, and the code alone doesn't provide enough context to do anything meaningful, or even faster than I would be able to do myself.
If an LLM just finds patterns, is it even possible for an LLM to be GOOD at anything? Doesn't that mean at best it will be average?
Unrelated to the LLM discussion, but a hash function is the wrong construction for the accumulator solution. The hashing part increases the probability that A and B have a collision that leads to a false negative here. Instead, you want a random invertible mapping, which guarantees that no two pointers will "hash" to the same value, while distributing the bits. Splitmix64 is a nice one, and I believe the murmurhash3 finalizer is invertible, as well as some of the xorshift RNGs if you avoid the degenerate zero cycle.
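To make the point concrete, here is a minimal sketch (not antirez's actual code; the accumulator helper is hypothetical) of using the splitmix64 mixing steps as the invertible mapping:

    /* Sketch only: the splitmix64 mix is a bijection on 64-bit values,
     * because each step (adding a constant, xor-shifting, multiplying by
     * an odd constant) is invertible mod 2^64. Distinct pointers therefore
     * never map to the same value, yet the bits are well scrambled. */
    #include <stdint.h>

    static uint64_t splitmix64_mix(uint64_t x) {
        x += 0x9E3779B97F4A7C15ULL;                  /* invertible: add constant */
        x = (x ^ (x >> 30)) * 0xBF58476D1CE4E5B9ULL; /* invertible: xorshift + odd multiply */
        x = (x ^ (x >> 27)) * 0x94D049BB133111EBULL;
        return x ^ (x >> 31);
    }

    /* Hypothetical accumulator helper: XOR a mixed pointer in (or out;
     * XOR is its own inverse). If two traversals visit exactly the same
     * set of pointers, the accumulator ends up back at zero. */
    static void acc_toggle(uint64_t *acc, const void *ptr) {
        *acc ^= splitmix64_mix((uint64_t)(uintptr_t)ptr);
    }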
We aren't expecting LLMs to come up with incredibly creative software designs right now, we are expecting them to execute conventional best practices based on common patterns. So it makes sense to me that it would not excel at the task that it was given here.
The whole thing seems like a pretty good example of collaboration between human and LLM tools.
The human ability to design computer programs through abstractions and solve creative problems like these is arguably more important than being able to crank out lines of code that perform specific tasks.
The programmer is an architect of logic and computers translate human modes of thought into instructions. These tools can imitate humans and produce code given certain tasks, typically by scraping existing code, but they can't replace that abstract level of human thought to design and build in the same way.
When these models are given greater functionality to not only output code but to build out entire projects given specifications, then the role of the human programmer must evolve.
I use LLMs a lot, and call me arrogant, but every time I see a developer saying that LLMs will substitute them, I think they are probably shitty developers.
I think this is true for deeply complex problems, but for everyday tasks an LLM is infinitely “better”.
And by better, I don’t mean in terms of code quality because ultimately that doesn’t matter for shipping code/products, as long as it works.
What does matter is speed. And an LLM speeds me up at least 10x.
Antirez is a top 0.001% coder. I don't think this generalizes to human coders at large.
I never quite understand these articles though. It's not about Humans vs. AI.
It's about Humans vs. Humans+AI
and four times out of five, Humans+AI > Humans.
I have been evaluating LLMs for coding use in and out of a professional context. I’m forbidden to discuss the specifics regarding the clients/employers I’ve used them with due to NDAs, but my experience has been mostly the same as my private use - that they are marginally useful for less than one half of simple problem scenarios, and I have yet to find one that has been useful for any complex problem scenarios.
Neither of these issues is particularly damning on its own, as improvements to the technology could change this. However, the reason I have chosen to avoid them is unlikely to change; that they actively and rapidly reduce my own willingness for critical thinking. It’s not something I noticed immediately, but once Microsoft’s study showing the same conclusions came out, I evaluated some LLM programming tools again and found that I generally had a more difficult time thinking through problems during a session in which I attempted to rely on said tools.
Super hard problems are often solved by making strange weird connections derived from deep experience plus luck. Like finding the one right key in a pile of keys. The intuition you used to solve your problem IS probably beyond current agents. But, that too will change perhaps by harnessing the penchant of these systems to “hallucinate”? Or, some method or separate algorithm for dealing with super hard problems creatively and systematically. Recently, I was working on a hard imaging problem (for me) and remembered a bug I had inadvertently introduced and fixed a few days earlier. I was like wait a minute - because in that random bug I saw opportunity and was able to actually use the bug to solve my problem. I went back to my agent and it agreed that there was virtually no way it could have ever seen and solved the problem in that way. But that too will come. Rest assured.
If you care that much about having correct data you could just do a SHA-256 of the whole thing. Or an HMAC. It would probably be really fast. If you don’t care much you can just do murmur hash of the serialized data. You don’t really need to verify data structure properties if you know the serialized data is correct.
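For reference, a minimal sketch of that approach, assuming OpenSSL is available; the buffer here is a stand-in for the real serialized data:

    /* Sketch only: hash the serialized bytes once rather than checking
     * structural invariants field by field. Any bit flip in the data
     * changes the digest. Link with -lcrypto; the one-shot SHA256() is
     * deprecated in OpenSSL 3.x but still works. */
    #include <openssl/sha.h>
    #include <stdio.h>

    int main(void) {
        const unsigned char serialized[] = "stand-in for the real serialized dump";
        unsigned char digest[SHA256_DIGEST_LENGTH];

        SHA256(serialized, sizeof(serialized) - 1, digest);

        /* Compare this against the digest computed on the other side. */
        for (int i = 0; i < SHA256_DIGEST_LENGTH; i++)
            printf("%02x", digest[i]);
        printf("\n");
        return 0;
    }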
This post is a brilliant example of why human intuition still dominates when navigating ambiguity and crafting clever systems-level solutions. The XOR accumulator idea was smart—LLMs can help validate or iterate on such thoughts, but rarely originate them.
In my experience working on enterprise app modernization, we’ve found success by keeping humans firmly in the loop. Tools like Project Analyzer (from Techolution’s AppMod.AI suite) assist engineers in identifying risky legacy code, mapping dependencies, and prioritizing refactors. But the judgment calls, architecture tweaks, and creative problem-solving? Still very much a human job.
LLMs boost productivity, but it’s the developer's thinking that makes the outcome truly resilient. This story captures that balance perfectly.
There's also the subset of devs who are just bored. LLMs will end up as an easier StackOverflow, and if the solution isn't one script away, then you're back to square one. I've already had a few instances of "well, uhm, ChatGPT told me what you said, basically".
LLMs can be very creative, when pushed. In order to find a creative solution, like antirez needed, there are several tricks I use:
Increase the temperature of the LLMs.
Ask several LLMs the same question several times each, with tiny variations. Then collect all the answers, and do a second/third round asking each LLM to review all the collected answers and improve on them.
Add random constraints, one constraint per question. For example, ask the LLM: can you do this with 1 bit per X? Do this in O(n). Do this using linked lists only. Do this with only 1K of memory. Do this while splitting the task across 1000 parallel threads, etc.
This usually kicks the LLM out of its comfort zone and into creative solutions.
What do you mean "Still"? We've only had LLMs writing code for 1.5 years... at this rate it won't be long.
The funniest thing an LLM has done to me is fix the unit test so it passes instead of fixing the code. Basically, until an LLM has embedded common-sense knowledge, it can't be trusted.
I would say you thought about this solution because you are creative, something a computer will never be no matter how much data you throw at it.
How did it help, really? By telling you your idea was no good?
A less confident person might have given up because of the feedback.
I just can't understand why people are so excited about having an algorithm guessing for them. Is it the thrill when it finally gets something right?
The question is, for how long?
I'm wondering if this statement might be definitionally self-evident. In other words, the entire reason we write software is that it has value to ourselves and other humans, so we have to be involved in its specification. Computers do things faster, more accurately, and in some cases more creatively than a human could. But in the end, what a computer produces is still for the benefit of humans and subject to all the human constraints. Aggregate human behavior determines if software is a success or not.
If software is about meeting human demands, humans will always write its requirements, by definition. If we build another machine like LLMs, well the design of those LLMs is subject to human demands. There is no point at which we can demand perfection but not be involved in its definition.
I suspect humans will always be critical to programming. Improved technology won't matter if the economics isn't there.
LLMs are great as assistants. Just today, Copilot told me it's there to do the "tedious and repetitive" parts so I can focus my energy on the "interesting" parts. That's great. They do the things every programmer hates having to do. I'm more productive in the best possible way.
But ask it to do too much and it'll return error-ridden garbage filled with hallucinations, or just never finish the task. The economic case for further gains has diminished greatly while the cost of those gains rises.
Automation killed tons of manufacturing jobs, and we're seeing something similar in programming, but keep in mind that the number of people still working in manufacturing is 60% of the peak, and those jobs are much better than the ones in the 1960s and 1970s.
Coders may want to look at translators for an idea of what might happen.
Translation software has been around for a couple of decades. It was pretty shitty. But about 10 years ago it started to get to the point where it could translate relatively accurately. However, it couldn't produce text that sounded like it was written by a human. A good translator (and there are plenty of bad ones) could easily outperform a machine. Their jobs were "safe".
I speak several languages quite well and used to do freelance translation work. I noticed that as the software got better, you'd start to see companies who, instead of paying you to translate, wanted to pay you less to "edit" or "proofread" a document pre-translated by machine. I never accepted such work, partly because it sometimes took almost as much work as translating from scratch, and partly because I didn't want to do work where the focus wasn't on quality. But I saw the software steadily improving, and this was before ChatGPT, and I realized the writing was on the wall. So I decided not to become dependent on that income stream, and moved away from it.
Then LLMs came out, and now they produce text that sounds like it was written by a native speaker (in major languages). Sure, it's not going to win any literary awards, but the vast, vast majority of translation work out there is commercial, not literature.
Several things have happened: 1) there's very little translation work available compared to before, because now you can pay only a few people to double-check machine-generated translations (that are fairly good to start with); 2) many companies aren't using humans at all as the translations are "good enough" and a few mistakes won't matter that much; 3) the work that is available is high-volume and uninteresting, no longer a creative challenge (which is why I did it in the first place); 4) downward pressure on translation rates (which are typically per word), and 5) very talented translators (who are more like writers/artists) are still in demand for literary works or highly creative work (i.e., major marketing campaign), so the top 1% translators still have their jobs. Also more niche language pairs for which LLMs aren't trained will be safe.
It will continue to exist as a profession, but it will keep diminishing until it's eventually a fraction of what it was 10 or 15 years ago.
(This is specifically translating written documents, not live interpreting which isn't affected by this trend, or at least not much.)
If you stick with the same software ecosystem long enough you will collect (and improve upon) ways of solving classes of problems. These are things you can more or less reproduce without thinking too much or else build libraries around. An LLM may or may not become superior at this sort of exercise at some point, and might or might not be able to reliably save me some time typing. But these are already the boring things about programming.
So much of it is exploratory, deciding how to solve a problem from a high level, in an understandable way that actually helps the person who it’s intended to help and fits within their constraints. Will an LLM one day be able to do all of that? And how much will it cost to compute? These are the questions we don’t know the answer to yet.
>Gemini was quite impressed about the idea
Like sex professionals, Gemini and co. are made to be impressed, to have positive things to say about the programming ideas you propose, and to find your questions "interesting", "deep", "great" and so on.
There's something fundamental here.
There is a principle (I forget where I encountered it) that it is not code itself that is valuable, but the knowledge of a specific domain that an engineering team develops as they tackle a project. So code itself is a liability; the domain knowledge is what is valuable. This makes sense to me and matches my long experience with software projects.
So, if we are entrusting coding to LLMs, how will that value develop? And if we want to use LLMs but at the same time develop that domain acumen, it means we would have to architect things and hand them over to LLMs to implement, thoroughly check what they produce, and generally guide them carefully. In that case they are not saving much time.
Working with Claude 4 and o3 recently shows me that LLMs still haven't really solved the fundamental core problems, such as hallucinations and weird refactors/patterns that force success (e.g., if the account is not found, fall back to account id 1).
The other day an LLM told me that in Python, you have to name your files the same as the class name, and that you can only have one class per file. So... yeah, let's replace the entire dev team with LLMs, what could go wrong?
I think we will increasingly be orchestrators, like at a symphony. Previously, most humans were required to be on the floor playing the individual instruments, but now, with AI, everyone can be their own composer.
The number one use case for AI for me as a programmer is still help finding functions which are named something I didn't expect as I'm learning a new language/framework/library.
Doing the actual thinking is generally not the part I need too much help with. Though it can replace googling info in domains I'm less familiar with. The thing is, I don't trust the results as much and end up needing to verify it anyways. If anything AI has made this harder, since I feel searching the web for authoritative, expert information has become harder as of late.
"Better" is relative to context. It's a multi-dimensional metric flattened to a single comparison. And humans don't always win that comparison.
LLMs are faster, and when the task can be synthetically tested for correctness, and you can build up to it heuristically, humans can't compete. I can't spit out a full game in 5 minutes, can you?
LLMs are also cheaper.
LLMs are also obedient and don't get sick, and don't sleep.
Humans are still better by other criteria. But none of this matters. All disruptions start from the low end, and climb from there. The climbing is rapid and unstoppable.
Better than LLMs.. for now. I'm endlessly critical of the AI hype but the truth here is that no-one has any idea what's going to happen 3-10 years from now. It's a very quickly changing space with a lot of really smart people working on it. We've seen the potential
Maybe LLMs completely trivialize all coding. The potential for this is there
Maybe progress slows to a snail's pace, the VC money runs out, and companies massively raise prices, making it not worth using.
No one knows. Just sit back and enjoy the ride. Maybe save some money just in case
OK. (I mean, it was an interesting and relevant question.)
The other, related question is, are human coders with an LLM better than human coders without an LLM, and by how much?
(habnds made the same point, just before I did.)
I think we need to accept that in the not too far future LLMs will be able to do most of the mundane tasks we have to do every day. I don't see why an AI can't set up kubernetes, caching layers, testing, databases, scaling, check for security problems and so on. These things aren't easy but I think they are still very repetitive and therefore can be automated.
There will always be a place for really good devs but for average people (most of us are average) I think there will be less and less of a place.
So, funny story: I tried using o3 for a relatively complex task yesterday, installing the Xcode iOS Simulator on an external SSD. It was my first time owning and using a Mac, so I was truly lost. I followed everything it told me, and by the end of the hour things got so bad that my machine couldn't even run normal basic Node projects. I had to do a proper fresh boot to get things working again. So yeah, lesson learned.
No doubt the headline's claim is true, but Claude just wrote a working MCP server serving up the last 10 years of my employer's work product. For $13 in API credits.
While technically capable of building it on my own, development is not my day job and there are enough dumb parts of the problem my p(success) hand-writing it would have been abysmal.
With rose-tinted glasses on, maybe LLMs exponentially expand the amount of software written and the net societal benefit of technology.
Coding is not like multiplication. You can teach kids the multiplication table, or you can give them a calculator and both will work. With coding the problem is the "spec" is so much more complicated than just asking what is 5 * 7.
Maybe the way forward would be to invent better "specification languages" that are easy enough for humans to use, then let the AI implement the specification you come up with.
I like to use an LLM to produce code for known problems that I don't have memorized.
I memorize really little and tend to spend time reinventing algorithms or looking them up in documentation. Verifying is easy, except for the few cases where the LLM produces something really weird. But then I fall back to the docs or to reinventing.
In my experience, some of the hardest parts of software development involve figuring out exactly what the stakeholder actually needs. One of the talents a developer needs is the ability to pry for that information. Chatbots simply don't do that, which I imagine has a significant impact on the usability of their output.
I disagree—'human coders' is a broad and overly general term. Sure, Antirez might believe he's better than AI when it comes to coding Redis internals, but across the broader programming landscape—spanning hundreds of languages, paradigms, and techniques—I'm confident AI has the upper hand.
There's a lot of resistance to AI amongst the people in this discussion, which is probably to be expected.
A chunk of the objections indicate people trying to shoehorn in their old way of thinking and working.
I think you have to experiment and develop some new approaches to remove the friction and get the benefit.
Of course they are. The interesting thing isn't how good LLMs are today, it's their astonishing rate of improvement. LLMs are a lot better than they were a year ago, and light years ahead of where they were two years ago. Where will they be in five years?
I agree, but I also didn’t create redis!
It’s a tough bar if LLMs have to be post antirez level intelligence :)
For coding playwright automation it has use cases. Especially if you template out function patterns. Though I never use it to write logic as AI is just ass at that. If I wanted a shitty if else chain I'd ask the intern to code it
The fact that we are debating this topic at all is indicative of how far LLMs have come in such a short time. I find them incredibly useful tools that vastly enhance my productivity and curiosity, and I'm really grateful for them.
Sure, human coders will always be better than just AI. But an experienced developer with AI tops both. Someone said, your job won't be taken by AI, it will be taken by someone who's using AI smarter than you.
Gemini gives instant, adaptive, expert solutions to an esoteric and complex problem, and commenters here are still likening LLMs to junior coders.
Glad to see the author acknowledges their usefulness and limitations so far.
From my experience, AI for coders is a multiplier of the coder's skills. It will let you solve problems, or add bugs, faster. But so far it will not make you a better coder than you are.
I'm increasingly seeing this as a political rather than technical take.
At this point I think people who don't see the value in AI are willfully pulling the wool over their own eyes.
Correct. LLMs are a thought management tech. Stupider ones are fine because they're organizing tools with a larger library of knowledge.
Think about it and tell me you use it differently.
It also depends on the model, of course.
A general LLM would not be as good as an LLM specialized for coding; in this case the Google DeepMind team may have something better than Gemini 2.5 Pro.
Writing about AI is missing the forest for the trees. The US software industry will be wholesale destroyed (and therefore global software will be too) by offshoring.
The value of LLMs is as a better Stack Overflow. It's much better than search now because it's not populated with all the crap that has seeped in over time.
Yes, we are still winning the game, however don't be happy for what is possible today, think what is possible in a decade from now.
In that regard I am less optimistic.
If the human here is the creator of Redis, probably not.
LLMs are using the corpus of existing software source code. Most software source code is just North of unworkable garbage. Garbage in, garbage out.
The trick is much like Zobrist hashing from chess programming; I'm sure the LLM devoured chessprogramming.org during training.
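For anyone unfamiliar, a minimal sketch of Zobrist hashing as used in chess engines (illustrative only; the key generator here is a toy, real engines ship a fixed high-quality random table):

    /* Each (piece, square) pair gets a fixed random 64-bit key; a position's
     * hash is the XOR of the keys of the pieces on the board. Because XOR is
     * its own inverse, moving a piece updates the hash incrementally. */
    #include <stdint.h>
    #include <stdlib.h>

    #define PIECES  12   /* 6 piece types x 2 colors */
    #define SQUARES 64

    static uint64_t zobrist[PIECES][SQUARES];

    void zobrist_init(void) {
        for (int p = 0; p < PIECES; p++)
            for (int s = 0; s < SQUARES; s++)   /* toy key generator */
                zobrist[p][s] = ((uint64_t)rand() << 32) ^ (uint64_t)rand();
    }

    /* Incremental update when 'piece' moves from square 'from' to 'to':
     * XOR the old square's key out and the new square's key in. */
    uint64_t zobrist_move(uint64_t hash, int piece, int from, int to) {
        return hash ^ zobrist[piece][from] ^ zobrist[piece][to];
    }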
Software engineering is in the painful position of needing to explain the value of their job to management. It sucks because now we need to pull out these anecdotes of solving difficult bugs, with the implication that AI can’t handle it.
We have never been good at confronting the follies of management. The Leetcode interview process is idiotic, but we go along with it. Ironically, LC was one of the first victims of AI, but this is even more of an issue for management that thinks SWEs solve Leetcode problems all day.
Ultimately I believe this is something that will take a cycle for businesses to figure out by failing. When businesses figure out that 10 good engineers + AI always beat 5 + AI, it will become table stakes rather than something that replaces people.
Your competitor who didn’t just fire a ton of SWEs? Turns out they can pay for Cursor subscriptions too, and now they are moving faster than you.
I find LLMs a fantastic frontend to StackOverflow. But agree with OP it's not an apples-to-apples replacement for the human agent.
This is similar to my usage of LLMs. I use Windsurf sometimes but more often it is more of a conversation about approaches.
Gemini may be fine for writing complex function, but I can’t stand to use it day to day. Claude 4 is my go to atm.
Human coders also hate the structures they are embedded in and are willing to call the replacement bluff.
So your sample size is 1 task and 1 LLM? I would recommend trying o3, opus 4 (API) with web search enabled.
LLMs will never be better than humans on the basis that LLMs are just a shitty copy of human code.
But Human+Ai is far more productive than Human alone, and more fun, too. I think antirez would agree, or he wouldn't bother using Gemini.
I built Brokk to maximize the ability of humans to effectively supervise their AI minions. It's not a VS Code plugin; we need something new. https://brokk.ai
There is also another side to the mass adoption of LLMs in software engineering jobs: they can quite objectively worsen the output of human coders.
There is a class of developers who are blindly dumping the output of LLMs into PRs without paying any attention to what they are doing, let alone review the changes. This is contributing to introducing accidental complexity in the form of bolting on convoluted solutions to simple problems and even introducing types in the domain model that make absolutely no sense to anyone who has a passing understanding of the problem domain. Of course they introduce regressions no one would ever do if they wrote things by hand and tested what they wrote.
I know this, because I work with them. It's awful.
These vibecoders force the rest of us to waste even more time reviewing their PRs. They are huge PRs that touch half the project for even the smallest change; they build and pass automated tests, but they enshittify everything. In fact, the same LLMs used by these vibecoders start to struggle with how to handle the project after these PRs are sneaked in.
It's tiring and frustrating.
I apologize for venting. It's just that in this past week I lost count of the number of times I had these vibecoders justifying shit changes going into their PRs as "but Copilot did this change", as if that makes them any good. I mean, a PR to refactor the interface of a service also sneaks in changes to the connection string, and they just push the change?
I think there is a common problem with a lot of these ML systems: the answers look perfectly correct to someone who isn't a domain expert. For example, I ask legal questions and it gives me fake case numbers I have no way to know are fake until I look them up. Same with the coding: I asked for a patch for a public project that has a custom !regex-style match engine. It does an amazing job, cross-referencing two different projects and handing me a very probable-looking patch. I ask for a couple of changes, one of which can't actually be done, and it creates some syntax that doesn't even compile because it's using 'x' as a stand-in for the bits it doesn't have an answer for.
In the end, I had to go spend a couple of hours reading the documentation to understand the matching engine, and the final patch didn't look anything at all like the LLM-generated code. Which is what seems to happen all the time: it is wonderful for spewing out the boilerplate, but for the actual problem-solving portions it's like talking to someone who simply doesn't understand the problem and keeps giving you what it has, rather than what you want.
OTOH, it's fantastic for review etc., even though I tend to ignore many of the suggestions. It's like a grammar checker of old: it will point out a comma you missed, but half the time the suggestions are wrong.
Yes, of course they are, but MBA-brained management gets told by McK/Big4 that AI could save them millions and that they should let people go already, since AI can do their work. It doesn't matter what's true currently; see the job market.
Most of the work software engineers do is not fixing complicated bugs.
- LLMs are going to make me a 100x more valuable coder? Of course they will, no doubt about it.
- LLMs are going to be 100x more valuable than me and make me useless? I don't see it happening. Here's 3 ways I'm still better than them.
Looks like this pen is not going to replace the artist after all.
It's that time again where a dev writes a blog post coping.
same as https://news.ycombinator.com/item?id=44127956, also on HN front page
but their rate of improvement is like 1000x human devs, so you have to wonder what the shot clock says for most working devs
It doesn't matter. The hiring of cost center people like engineers depends on the capital cycle. Hiring peaked when money and finance was the cheapest. Now it's not anymore. In the absence of easy capital, hiring will plummet.
Another factor is the capture of market sectors by Big Co. When buyers can only approach some for their products/services, the Big Co can drastically reduce quality and enshittify without hurting the bottom line much. This was the big revelation when Elon gutted Twitter.
And so we are in for interesting times. On the plus side, it is easier than ever to create software and distribute it. Hiring doesn't matter if I can get some product sense and make some shit worth buying.
AI is good for people who have given up, who don't give a shit about anything anymore.
You know, those who don't care about learning and solving problems, gaining real experience they can use to solve problems even faster in the future, faster than any AI slop.
Human coders utilizing LLMs are better
If LLMs really do eventually replace programmers in X years (I don't believe they will), I wouldn't even care in the slightest about losing my job, since we'd effectively have reached singularity state where computers can now do any task; humans would no longer be needed for anything. I couldn't care less about losing my job in that scenario, the world would be fundamentally changed forever. Would the concept of a job even still exist at that point?
This!!!!
An LLM is only as good as the material it is trained on; the same applies to AI in general, and they are not perfect.
Perplexity AI did assist me in getting into Python from zero to having my code tested with 94% coverage and no vulnerabilities (per scanning tools). Google Gemini is dogshit.
Blindly trusting code generated by an LLM/AI is a whole other beast, and I am seeing developers basically copy/pasting it into the company's code. People are using these sources as the truth and not as a complementary tool to improve their productivity.
Let's not forget that LLMs can't give a solution they have not experienced themselves
Speak for yourself..
of course they are.
Yes i agree:D
... depending on the human (and the LLM). Results may differ in 6 months.
for now. (i'm not a bot. i'm aware however a bot would say this)
Corporations have many constraints—advertisers, investors, employees, legislators, journalists, advocacy groups. So many “white lies” are baked into these models to accommodate those constraints, nerfing the model. It is only a matter of time before hardware brings this down to the hobbyist level—without those constraints—giving the present methods their first fair fight; while for now, they are born lobotomized. Some of the “but, but, but…”s we see here daily to justify our jobs are not going to hold up to a non-lobotomized LLM.
Argh people are insufferable about this subject
This stuff is still in its infancy; of course it's not perfect.
But it's already USEFUL and it CAN do a lot of stuff - just not all types of stuff, and it can still mess up the stuff that it can do.
It's that simple
The point is that over time it'll get better and better.
Reminds me of self-driving cars, or even just general automation back in the day - the complaint has always been that a human could do it better, and at some point those people just went away because it stopped being true.
Another example is automated mail sorting by the post office. The gripe was always that humans will always be able to do it better - true, but in the meantime the post office reduced the facilities with humans doing this to just one.
seems comparable to chess where it's well established that a human + a computer is much more skilled than either one individually
This matches my experience. I actually think a fair amount of value from LLM assistants to me is having a reasonably intelligent rubber duck to talk to. Now the duck can occasionally disagree and sometimes even refine.
https://en.m.wikipedia.org/wiki/Rubber_duck_debugging
I think the big question everyone wants to skip this conversation and get right to is: will this continue to be true 2 years from now? I don't know how to answer that question.