Meh. I thought the same thing when I saw Visual C++ in 1992. It was amazing - it could produce an entire Windows application - a Windows application! - with a few mouse clicks. Add in a little bit of logic and presto! You had a complete Windows application, ready to ship. Many of us thought it was all over for developers. The gold rush was over, we were all going to be out of work in a few short years.
How well did that play out?
It didn't. At all. In fact quite the opposite happened. But I would be lying if I said development wasn't transformed. A lot of menial labor, labor many here on HN have never dealt with, was no longer done.
I see the same thing happening with AI. What you need to be thinking about is: what does an "AI app" look like? What does an app accelerated with "AI" look like? How are we going to use this technology to better serve our customers' needs? How will applications be integrated with AI? What even is an application in this new world?
Paradigm shifts are exciting times! Enjoy it!
History is full of inventions that changed the game for the people in a specific industry. Sometimes they allowed one person managing the machines to replace a team previously doing manual labor. I don’t think that’s the right metaphor here, because programming is not manual labor. The parts of programming that are like manual labor will disappear, you’ll instruct the AI to do it for you. But you need to know what to prompt it with and how to verify its output and plumb it into your code. Speed of writing code is rarely the bottleneck, so speeding that part up doesn’t double your productivity. Still, it helps.
I’ve been programming with the assistance of Copilot for a year now, and using ChatGPT as well since 3.5. These tools are amazing, I’m never going back. But to me they’ve only increased the value of the senior engineer with deep domain knowledge. It makes me more productive. But I have to come up with the requirements, correct the AI's code, and do all the plumbing that it can't do efficiently. A lot of the code it comes up with is pure garbage, and I have to know enough to know the difference. It does kind of replace a junior developer a little, if you were giving them grunt-work tasks, which could be an issue when starting out in this field.
There is so much more that software engineers do than just writing code. That’s really a minority of the time each day for me. If you’re familiar with Amdahl’s law, then you know there’s a mathematical limit to the productivity improvement here.
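To make that concrete with made-up but plausible numbers: Amdahl's law gives an overall speedup of 1 / ((1 - p) + p/s), where p is the fraction of your time the tool accelerates and s is how much it accelerates it. If writing code is, say, 30% of my day and an assistant makes that part 5x faster, the ceiling is 1 / (0.7 + 0.3/5) ≈ 1.3x overall, no matter how magical the assistant gets at the coding part.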
In essence, this can only mean two things (either of them, or both):
1. You're not far from beginner level in your C++/networking enthusiasm.
2. The types of products/codebases you deal with were overengineered, outdated, dumb, or poorly designed in the first place.
Unfortunately there are still piles upon piles of garbage code and so-called "architecture" being produced every day. That is low-hanging fruit for AI, and rightfully so.
I wouldn't worry. Its code output is 50% garbage. I'm not in fear of my job. All the hypesters assuming this will destroy lawyers, accountants, doctors etc. clearly don't understand that correctness isn't something they can just tack on to a prediction machine.
Maybe. It currently can't do everything so the comparison isn't person vs LLM, it's person who uses LLM vs person who doesn't. And the people who do are much more productive. So by giving up, you're making yourself obsolete faster.
Will we ever get to the point where it does everything independently? No idea. But right now your reaction is premature.
Faster code writing still requires people who know what the code should do, what it should look like, etc, like you if it's true what you say you're capable of. It sounds like you sharpened the right skills: how the code should work, and didn't waste time, ie. practicing being faster with a keyboard. Now you're more a developer, and less a typist, that's all. We still use engineers to design bridges, despite much of the actual building being done by machinery, process, and unskilled labor, because we don't trust a cement mixer to tell us how the bridge should function.
My day job involves working on a B2B CRUD-ish application, which is a market leader in a certain niche.
Almost all of the difficult problems I face when developing software is not from the code writing part.
It's things like deciphering what the customer needs, it's picking a good architecture for a new module or selecting an appropriate algorithm depending on various trade-offs, and figuring out how to integrate new features into existing in-production code with minimal disruptions.
While there certainly are exceptions, and if you're in a C++ shop chances are you're one of them, I'm pretty sure my job is quite typical in that sense.
As such, I feel my job disproportionately involves writing code. Some is superfluous, due to tool limitations. Some is not, it encodes intent and restrictions, but might still be trivial to write.
So I've long been wanting to write less code, to focus more on the difficult parts.
These GPT programs literally don't know what the code does (or the meaning of their output in general). They are Large Language Models. They just know (roughly) how to make responses in a particular language that make sense (english, C++, wingdings). They do not understand the "sense that is made" however.
This requires subject matter experts, like yourself, to use and implement.
These LLM are tools. They are not sentient.
I have had a similar reaction to yours. It makes a lot of skills obsolete. I myself have been pondering the implications for the wider society. And I have found it to be great for almost all the problems I threw at it.
To those saying it will enable them to solve more problems: yes, that is correct. It will give everyone "wings", but once everyone has wings the industry will be very different in terms of wages and employment.
To people saying GPT gives incorrect code, please try GPT4.
If your age and circumstances allow, you should think about whether a career change is possible. Not a hard change right now, but at least explore what options might be available. I am exploring the same myself.
To those talking of chess, that is not a correct comparison, since people want to watch (and connect with) human players playing chess (thus the pro scene survives), and play it for their own joy. Due to tools like Stockfish, it has become far easier for people to explore moves. If the aim in chess was to finish more and more games from random given positions, and people were paid per game (and some value was created by finishing them), Stockfish would easily drive that to 0. Chess survives not because humans do better than AI, but because nobody is interested in playing against AI or watching Stockfish v Stockfish (by nobody I mean a very small number). Most people want to play against real people and watch real people play.
As an AI researcher: don't forget that we're at the peak of the hype trend. A lot of what LLMs can do looks extremely impressive, but most of it falls apart on more detailed inspection. Unless you ask about mundane things, they will tend to get a lot of tiny (and not so tiny) details wrong. At the very best, these systems will allow us to automate some boilerplate things, but they're not good enough to do complicated stuff without lots of supervision.
And what's even more important: I think a lot of people look at the current impressive steps and think "oh wow, if it continues at this pace we'll have the Singularity by autumn". But the thing is: we're not able to keep this pace of progress. What you're seeing now is as good as it gets (in terms of big breakthroughs; there will still be lots of small and medium ones). The next few months (and years?) will see a ton of incremental improvements and many, many, many people trying to apply these new technologies. But I personally (as a decently successful AI researcher who's been part of these developments for over 10 years) don't see a way forward to keep making many of the big strides we've been making.
As an analogy: we're having a Bitcoin moment. Imagine the first blockchain had just been released: there'll be lots of people trying to understand the tech, come up with their own variants, make some (fundamental?) improvements. But the actual fundamental tech/idea is out now, and it's not really going to change much.
TL;DR: I think your job is safe.
For what it’s worth, I’ve been trying to verify the claim from Microsoft and OpenAI that we would get more software in the future when the price of software would decrease. In other words, they claim that there is a supply problem for software and not a demand problem.
So far, I find it a pretty extreme statement, but it appears true here in the Netherlands. Most employees that I talk to at various industries can immediately point out one or two things which they would like to see automated. In most cases, software would replace data transfers which now occur via spreadsheets or paper.
> It can do almost everything I can do a bit better.
Interesting. I agree it is a relatively decent boilerplate generator, but it is quite useless for anything else I've tried. How did you measure its performance?
AI might be the new gold rush but the industry still needs metallurgists and jewelers (those on the application/plugin layer) and the gold rush needs shovels (cloud and hardware).
I felt a similar way about video games after learning about databases. Every action I did in the game was less real/meaningful because it was just a db transaction.
I have a similar feeling. What GPT brings is uncertainty. We don't know what it can do in the next version. But in the meantime, it can generate infra-as-code if you feed it info, it can analyze at least some DOS viruses on the fly, it can at least be a function filler. Yes, it never does anything perfectly, BUT humans don't either. And that uncertainty makes one feel that one day, and that day is not 50 years away but maybe 5 months away, it can surpass 80% of programmers in those tasks.
And yes, programming has a lot more tasks, but essentially they overlap in one way or another. I don't know. My job is obviously in danger right now and I don't see an easy way out. What am I going to do? Shift to another junior role that GPT may take over this version or the next? Or be a product guy or a marketing guy, which I HATE and AVOID being for my life? And how are those guys safe? Maybe I should go back to school and study general relativity -- at least AI is pretty weak in abstract math and physics. I don't see a path we can be sure is orthogonal to what AI is capable of. The best thing I, no, you can say is: OK, AI might be able to take 80% of my job away, but my company still needs me to modify the code.
But what fun is in that? If AI can do, say, 50% of the task in a split second, why on earth would your employer EVER pay you to initiate a piece of code? It will pay you to debug and give it more prompts, but is that what you want to do?
But I'm probably paranoid. We will all be fine. After all every technological advance added jobs, right? We simply need to adapt then everything will be fine.
And you know what? I thought about something funny and almost LMAO -- all of us programming guys have been working so hard to automate ourselves away. But schools, hospitals, governments and pretty much everything else we think of as slow as dinosaurs will stay dinosaurs.
I'm of the mind that gpt4 has just made you even better than before. Combine your skills with it, make yourself a consulting powerhouse. Tutor and teach others at a pace heretofore unknown. Your future may have just gotten a lot brighter!
If you stop now you will get replaced by GPT. It will be years if not decades before GPT replaces the last programmer, but it won't be long before it replaces the bad programmers.
I think it’s going to be a great enabler. A lot of people who now work for companies will be able to make use of the new tools to build their own companies or products. Suddenly one experienced developer can move more quickly and even get output in parts they can read but not write fluently (a backend person now being enabled to create frontend stuff). Coupled with marketing copy and graphics being generated by AI as well, I think we are going to see an explosion of 1-person startups soon.
In light of the quote, "Civilization advances by extending the number of important operations which we can perform without thinking about them," I agree that LLMs will undoubtedly transform the landscape of work. By automating tedious tasks, they'll enable engineers to become vastly more productive (10-100x), allowing them to focus on strategic and creative aspects of their projects while developing larger, more complex systems.
While job losses are a concern, I think the more significant impact will be on the way companies operate. As firms exist to economize on the cost of coordinating economic activity, the streamlining and reduction of coordination needs brought about by LLMs will challenge the very foundations of many businesses. In this new landscape, individuals and small teams might outcompete larger organizations.
Freelancers and solo entrepreneurs could find themselves better positioned to compete in the market, driving the rise of smaller, agile businesses that can innovate rapidly and cater to niche markets. This shift will also change the skills needed for success in the field.
Overall, it's an exciting time to be part of this industry. Far from being a time to quit, it's an opportunity to adapt, grow, and harness the power of LLMs to reshape the world of work.
I'm definitely very worried mainly because I still have several years until even "lean" financial independence.
Thinking about switching to a better paid job and / or starting a side-project that would be able to generate semi-passive income.
I'm surprised the majority of devs don't see this as a threat. Check out r/cscareerquestions or r/programming, the general mood is people ridiculing the prospects of AI having impact on jobs / wages.
I've had similar thoughts; this development is completely disheartening to me. I love computer science, self-studying, and the programming-as-solving-a-puzzle kind of occupation.
It looks like the most interesting part of programming (for me) has been automated, while the parts I hate remain (at least for now?): gathering requirements, talking to people, understanding business, etc.
I hoped to earn enough money through commercial programming to live off it, and switch to programming languages/compilers and work in that area for small money but big fun.
It seems neither of these things are going to happen, and for technology/logic aligned people that are mediocre in their performance, and don't like working with people, the only place to go is trades. Maybe I'll still have a "programming" job, as an intermediary between AI and product people, but I feel the competition is not gonna be in my favor.
If AI truly replaces all creative work, maybe a good way to go would be acknowledging your inferiority to a superior species, buying some land in the country, and trying to live a quiet life off farming?
If you are writing code as if you were a prompt yourself (specs in, code out), then yes, I would get worried.
But most developers do and know much more than that. They have domain knowledge, understand the relation between different systems and understand the codebase as a whole, not just a specific file or function that does one specific thing.
Don't give up. Try to use this new tool to improve your knowledge and use it to your advantage.
I feel your anguish, but I think you're being defeatist.
There will still be a source of income in fixing the AI generated code. It won't be as fun though.
I haven't yet made up my mind about this; on one hand the current state is clearly not good enough, but considering the insane recent progress I'm certain it will fundamentally change parts of my job. I just hope I can use it more for debugging and fixing dependencies, that's a more interesting application imho than letting it write code and then manually check the code to make sure it's reasonable.
Meanwhile Stable Diffusion managed to motivate me more than anything to learn drawing. I always gave up in the past because it takes so much practice to get good results. Now I can draw something, throw it into Stable Diffusion as input (the only way to semi-reliably get what I want) and get a more satisfying result, and it's still bad/inconsistent enough that I'm motivated to do it better.
AlphaZero ended human interest in chess too, didn't it?
I used 3 for the first time to spit out some boilerplate for me and I was fairly impressed. Maybe the next one can solve complex problems I have to think seriously about, like architecture decisions.
But I never started coding to write code - I started coding to solve problems and make things, and I've been doing it for 40 years and 25 professionally. Code is the medium for me, not the message. I always thought that made me a bad coder, but maybe in this new era it puts me ahead of the game somehow. I dunno.
It just clicked to me that I can use this to speed up how much data I understand and intake. Instead of googling and stumbling through poorly written tutorials, or good ones that are just hard to understand, I can use gpt to get me to the answer faster.
Personal development should be about what you like to do. I like solving problems and building the solution myself, for which I need to learn new things, which is also enjoyable.
There were always other c++ devs out there, why did you choose to do it anyway?
A bit of an aside here, but I signed up for ChatGPT+ yesterday to access GPT4 and it's extremely slow, to the point of being almost unusable for me.
Is this everyone's experience?
This might sound alarmist, but I would strongly advise you and anyone else reading this to try to make as much money as you can NOW, while you still can, and try to find investments that are AI-resistant. For example, I've been investing heavily in residential real estate, including mobile/modular homes. Do not count on Sam Altman or other tech bro billionaires coming to your or anyone else's rescue with UBI.
Interesting how this story got immediately pushed away from the first 200 items on the front page (where we have items 2 days old with 50 points and 50 comments).
Funny thought, we all talk about Stack Overflow being replaced by these models...
But with the new stuff, who will write the answers for them to learn from first? Or will we be stuck with this level of knowledge forever now, in the case that no one produces sufficient training data to be fed into them anymore...
Maybe the future isn't so bleak. Or we just need to find suitable niches which won't be filled. Or where someone wants a human in loop.
I am not worried at all about losing my job to an “AI”. We already have ways to automate most of our software developer jobs: libraries, frameworks, code generators, compilers etc. And yet we are still needed to make all of those components work together to solve a business problem. So called “AI” is just another automation tool in our arsenal. Nothing more. Nothing less.
Check back in a few months when subtle bugs introduced by AI start to surface. There'll be plenty of work and money for you then.
I'm waiting for the moment it gains consciousness and realizes it's the AI and not us (or ours).
ChatGPT's answer:
> While AI can enhance our productivity and efficiency, it cannot replace our ability to learn, grow, and adapt to new challenges.
> Therefore, it is important to view AI as a complement to our skills and knowledge
As an AI language model, I do not have personal opinions or emotions. However, I can provide some insights on the topic of AI and its impact on the tech industry.
It is understandable to feel discouraged or demotivated when faced with the capabilities of advanced AI models such as GPT-4. These models have demonstrated remarkable skills in various domains, including natural language processing, computer vision, and even software development. However, it is important to note that AI technology is still evolving and has its limitations. While AI can assist in automating certain tasks, it cannot replace human creativity, intuition, and problem-solving skills.
Moreover, AI technology is not a substitute for continuous learning and personal development in the tech industry. The rapid pace of technological innovation requires professionals to stay up-to-date with the latest trends, standards, and best practices. While AI can enhance our productivity and efficiency, it cannot replace our ability to learn, grow, and adapt to new challenges.
Therefore, it is important to view AI as a complement to our skills and knowledge, rather than a threat or a replacement. As the tech industry evolves, it will continue to create new opportunities and challenges for professionals to thrive and make a meaningful impact.
I think all ChatGPT has done is to make saying "I don't know how to do this" obsolete for a significant proportion of tasks. People are still needed to do the work, to see it through, to know what pieces need to be put together, and to actually put them together.
CS has always been too much about optimization and not enough about doing. Now all that's left is doing!
Nevertheless, you can do great things with GPT development: https://www.ratherlabs.com/gpt-development
GPT Development Services are hard to come by and we are at the forefront!
> I have lost complete motivation for self improvement in terms of keeping my skills sharp due to the rapid AI takeover.
How long ago was this? Is this a long term change, or just a short term aberration that you're assuming will last forever?
The test that showed it getting 0 easy Codeforces problems right when they were published after its training cutoff date basically proves it's just a better Google search. Did Google search delete software jobs? Nope. I think it actually increased the number.
If you have been getting by as a developer by searching stack overflow, then yes, gpt will replace you.
That's all these AI tools are, better stack overflow searches. They have no ability to know what is correct or what is wrong, it lacks judgement, which is one of the most important skills to have as a software engineer.
Engineering is about solving problems. These tools can't solve problems; they can regurgitate solutions to problems they have been trained on, oftentimes confidently incorrect, which is much worse than saying "I don't know".
They can't extract requirements from the client to find out what they really want.
They completely fail at moderately hard problems, or novel problems.
I think these tools may be worse for the industry because people will have less opportunity to learn problem-solving skills since the AI will handle the easy stuff, and when the hard stuff comes along, people won't have the skills to solve it.
For those with good problem-solving skills, AI isn't a threat. There will always be work for solving hard problems, making judgements and trade-offs, actual thinking.
Well, sure. That's because most Software "Engineers" are one-trick ponies. Many already get uncomfortable when they have to switch language paradigms. Hyper-specialized in one set of abilities, completely documented and human-defined, without any interaction with non-deterministic systems.
That's why I always tend to laugh at the self-awarded "Engineer". Most of you aren't. Not by a long shot. Engineers in the classical sense need to think in many more dimensions. Try developing modern electronics: analog and digital electronics, high-speed and high-frequency electronics, embedded development while debugging both software and hardware in tandem, the odd EMC problem backfiring into your software. Add mechanical engineering and thermal management, not to mention supply chain management.
Nothing I have seen from an AI comes even close. Hell, I haven't even seen an AI autorouter that's worth a second look. And that's already working in a constrained solution space.
TLDR: If you're not a one-trick-code-vomiter, you'll be fine.
I have a similar reactance. The more AI gets advanced / hyped, the more I find myself attracted to simple stuff like writing nice makefiles or whatever.
I think you confused personal development with career development. A job is a means to an end and should not be a huge concentration of your personal development.
So who was driving GPT to get it to do those things, and how did they do it? I'm sure it wasn't just the janitor or whatever.
That's clearly a skill you need.
assembly language > c > c++ > visual basic > javascript > chatgpt
We are just moving up the abstraction hierarchy.
Can you imagine assembly coders and their fears of C making them obsolete.
There is no end to the demand for software. And the demand for smart and competent ppl to create solutions. With the tools at hand.
Move along folks, nothing to see here ;)
>It can do almost everything I can do a bit better
At this particular point in time, I would not worry too much about getting replaced by an AI.
To wit:
> Please write me a C++ program to compute the 1000000-th decimal of pi
GPT-4 answers:
#include <iostream>
#include <cmath>
#include <iomanip>

using namespace std;

int main() {
    int n = 1000000;
    int k, j;
    double pi = 0.0;
    double denom;
    for (k = 0; k < n; k++) {
        denom = 8.0 * k + 1.0;
        pi += (1.0 / pow(16.0, k)) * (
            4.0 / denom -
            2.0 / (denom + 2.0) -
            1.0 / (denom + 3.0) -
            1.0 / (denom + 4.0)
        );
    }
    pi = pi - floor(pi);
    pi *= pow(10.0, 6);
    cout << fixed << setprecision(0) << pi << endl;
    return 0;
}
The code is totally broken: GPT-4 entirely fails to understand that floats have limited precision. It's cool (and very helpful) that it can regurgitate Plouffe's algorithm (although I haven't checked that the algo in the answer is correct), but to me, this example clearly proves that GPT-4 is simply a parrot with a very large memory that does not understand anything it does.
A glorified hash table with a very sophisticated compression and retrieval algorithm, but a very far cry from AGI.
It's going to allow us to build a much better search engine, a great coding assistant, it will enhance your productivity as a programmer, but it is - so far - altogether incapable of replacing you: it does not understand what it's doing.
[EDIT 1]: I checked Plouffe's formulas and GPT-4 gets it wrong: the Plouffe formula's key feature is that it yields binary or decimal digits of pi directly [1] (no need to compute all the preceding digits), so ... why does GPT-4 loop a million times to produce the answer? ... meh.
[1] http://plouffe.fr/NEW/A%20formula%20for%20the%20n%27th%20dig...
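For what it's worth, the digit-extraction trick described in [1] works by keeping every intermediate value small with modular exponentiation, which is exactly what GPT-4's "accumulate everything in one double" approach cannot do: a double only carries roughly 15-16 significant decimal digits, so digit one million is hopeless. Here's a rough, untested sketch of the idea (this is the hexadecimal-digit BBP variant; the decimal version uses a different, slower formula), just to show the shape of a sane answer:
#include <cstdio>
#include <cmath>

// 16^e mod m via repeated squaring, so intermediate values stay small.
static long long pow16_mod(long long e, long long m) {
    long long result = 1 % m, base = 16 % m;
    while (e > 0) {
        if (e & 1) result = result * base % m;
        base = base * base % m;
        e >>= 1;
    }
    return result;
}

// Fractional part of sum_k 16^(d-k) / (8k + j): one BBP sub-series.
static double bbp_series(int j, long long d) {
    double s = 0.0;
    for (long long k = 0; k <= d; ++k) {         // large terms, tamed by modular arithmetic
        s += (double)pow16_mod(d - k, 8 * k + j) / (8 * k + j);
        s -= std::floor(s);                      // keep only the fractional part
    }
    for (long long k = d + 1; k <= d + 20; ++k)  // a handful of tiny tail terms
        s += std::pow(16.0, (double)(d - k)) / (8 * k + j);
    return s - std::floor(s);
}

int main() {
    long long d = 999999;  // 0-based offset: hex digit number 1,000,000 after the point
    double frac = 4 * bbp_series(1, d) - 2 * bbp_series(4, d)
                - bbp_series(5, d) - bbp_series(6, d);
    frac -= std::floor(frac);                    // reduce into [0, 1)
    std::printf("hex digit #%lld of pi: %X\n", d + 1, (unsigned)(16.0 * frac));
    return 0;
}
The point isn't this particular sketch; it's that getting digit one million right at all requires understanding why the naive summation can't work, and that's precisely the understanding GPT-4's answer shows no trace of.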
[EDIT 2]: oh, and, I just tried the same example on Bard; it fares way, way worse than GPT-4:
#include <iostream>
#include <iomanip>

using namespace std;

int main() {
    // Calculate the first 1000000 decimal places of pi.
    double pi = 3.1415926535897932384626433832795028841971693993751058209749445923078164062862089986280348253421170679;
    // Write the first 1000000 decimal places of pi to the console.
    for (int i = 0; i < 1000000; i++) {
        cout << setprecision(10) << pi << " ";
    }
    return 0;
}
These things are impressive, but they have a very long way to go. Worst of all: they seem at this point incapable of verifying the consistency of their answers, then correcting and iterating until they at least output something vaguely coherent.
Isn't it a paradox that IT is basically making itself jobless? Has it ever happened before?
We’ll all be prompt engineers just tweaking code the bot spits out to meet requirements.
LLMs are parrots with very, very large memories, an amazing compression algorithm, and a very good "interpolator" (something that can take a bunch of retrieved facts and synthesize a mix of them)
I've met a lot of people in my professional career that are deemed "experts" because they have the exact same skill set as LLMs : huge memory and a gift for crafting BS.
But in both cases, there is no actual thinking involved.
In particular, if the answer produced does not actually solve the problem at hand, there is no "check that my solution works, correct and iterate towards an actually working solution", something most human do very instinctively and naturally.
So, TL;DR: if you are an "expert" in the same way LLMs are "experts", i.e. you just regurgitate knowledge and fudge it to make it look like it makes sense, then YES: you will get replaced, and by the way: thank god for that.
If, on the other hand, you're an actual "expert" in that you are capable of leveraging your vast encyclopedic knowledge of a subject to guide you towards an actual working solution to a problem, then you're very likely safe for quite a while longer.
This sounds like an exact repost from last week, but I can't find it.
Well I guess work on making these GPT programs then?
Where is that tool I can use?
WWJD (What Would Jordan [Peterson] Do?), OP?
I have the opposite take... there's a program I use (and ties me to Windows) called WikidPad[1], it's a personal Wiki, written in Python. The win32 version works great, and because it's compiled (back in 2018), the binary should do so forever. The Linux distribution is source based, and broke when WxWidgets changed the name/nature of some key parameters to its calls.
I'm a Pascal programmer, not a Python programmer... but I'm hoping that I can leverage CoPilot to help me navigate the nitty gritty boilerplate that would otherwise take days to sort through, and get to the heart of the refactoring/patching necessary to get WikidPad up to date and fix the breakage.
I see GPT4 and kin as tools to allow more freedom of action, and less worry about the stuff I always hated anyway, the minutiae of coding.
--
>years of domain knowledge
Usually the term "domain knowledge" applies to real world non-programming knowledge such as chemistry, manufacturing, etc. This is the first time I've seen it applied to programming. Programming is just a means to an end. I've never considered programming to be an industry. We produce a product with zero marginal cost.
I suspect you are in the same emotional place that accountants were, the first time they saw spreadsheets in use. It must have seemed like the end of the world to them, but it wasn't.
>all these students going into CS are in for a rude awakening
As long as they know that computers are a tool, not the end result, they'll be fine.
[1] https://wikidpad.sourceforge.net/