> Are people not finding these tools useful, even with the significant investment and hype in the space?
That sounds like there's a flawed assumption buried in there. Hype has very little correlation with usefulness. Investment has perhaps slightly more, but only slightly.
Investment tells you that people invested. Hype tells you that people are trying to sell it. That's all. They tell you nothing about usefulness.
Disgust at all the hype. Worry over being made obsolete. Lazy negativity ("merely token predictors") in an attempt to sound knowledgeable. Worry over not understanding the tech. Distress over dehumanising AI use in hiring etc. Herd psychology.
I see two factors that contribute:
1. Failed expectations - hackers tend to dream big, and they felt like we were *this* close to AGI. Then they faced the reality of a "dumb" (yet very advanced) auto-complete. It's very good, but not as good as they wanted it to be.
2. Too many posts all over the internet from people who have zero idea how LLMs work or what their actual pros/cons and limitations are. Those posts create a natural compensating force.
I don't see fear of losing one's job as a serious tendency (only among junior developers and wannabes).
It's the opposite - senior devs secretly waited for something that would offload a big part of the stress and dumb work from their shoulders, but it happened only occasionally and in a limited form (see point 1 above).
I'm not really anti-AI. I use AI every day and am a ChatGPT Pro user.
My concerns are:
1) Regardless of whether AI can actually do this, corporate leaders are pushing to replace humans with AI. I don't care whether AI can do it or not, but multiple mega-corporations are talking about this openly. This does not bode well for us ordinary programmers;
2) Now, if AI actually could do that -- maybe not now, or in a couple of years, but 5-10 years from now -- and even if it could ONLY replace junior developers, it's going to be hell for everyone. Just think about the impact on the industry. 10 years is actually fine for me, as I'm 40+, but hey, you guys are probably younger than me.
--> Anyone who is pushing AI openly && (is not in leadership || is not financially free || is an ordinary, non-John-Carmack-level programmer), if I may say so, is not thinking straight. You SHOULD use it, but you should NOT advocate for it, especially to replace your team.
A lot of the hype is very short-term and unrealistic, such as AGI. On the other hand, it's easy to underestimate the impact on a million mundane things.
> Are people not finding these tools useful, even with the significant investment and hype in the space?
How exactly would someone find hype useful?
Hell, even the investment part is questionable in an industry known for "fake it till you make it" and "thanks for the journey" messages when a product is inevitably bought by someone else and either changes dramatically or is shut down.
I am a pretty dumb dude, so take this with a grain of salt.
Most AI today can create/simulate a "moment" but not the whole "process". For example, you can create a short Hollywood movie clip but not the whole Hollywood movie. I am pretty sure my reasoning is incorrect, so I am commenting here to get valid feedback.
Nobody likes their livelihood becoming a commodity. Especially not one of the most arrogant groups of people on the planet.
It's like Ozempic in Hollywood, everyone is using it secretly.
AI sucks the fun out of everything.
It's even worse when you've made that fun your livelihood. Now it has sucked the fun out of everything and put you out of a job.
I was going to make a post about this: any pro-AI comment I make gets downvoted, and sometimes flagged. I think HN has people who:
1. Have not kept up with and actively experimented with the tooling, and so don't know how good it is.
2. Have some unconscious concern about the commoditization of their skill sets
3. Are not actively working in AI and so want to just stick their head in the sand
Here's why (for me, at least):
Change is uncomfortable and scary, and AI represents a pretty seismic shift. It touches everything from jobs and creativity to ethics and control. There's also fatigue from the hype cycle, especially when some tools overpromise and underdeliver.
I think the hype is the reason. The performance of the tools is nowhere near the level implied by the hype.
Also, HN loves to hate things - remember the welcome Dropbox got in 2007?
There are a lot of people on HN who will be replaced by AI tools and that's hard to cope with.
People are scared of the unknown. They are scared that their livelihoods might be impacted.
With my flavour of autism, I have a weakness in communication, and AI produces better writing than I do. Personally, I love that it helps me. Autism is a disability, and AI helps me through it.
Imagine, however, that you're an expert in communication; then this is a new competitor that's undefeatable.
Well, the internet in general has a very strong anti-AI sentiment to be honest. If you even say anything positive about it on most social media sites (Twitter, Reddit, BlueSky, Mastodon, Threads, Instagram, etc) a large percentage of the audience will all but call for you to be burnt at the stake. In a sense, Hacker News is barely any different from the rest of the internet there.
The reactions basically seem to range from "AI is useless because it's inaccurate/can't do this" to "AI is evil because of how it takes jobs from humans, and should never have been invented".
Still, the former is probably the bigger reason here in particular. LLMs can be useful if you're working within very, very general domains with a ton of source material (like say, React programming), but they're usually not as good as a standard solution to the issue would be, especially when said issue isn't as set in stone as programming might be. So most of these solutions just come across as a worse way to solve an already solved problem, except with AI added as a buzzword.
With AI, humans aim to automate some forms of intelligent work. People who do this kind of work don't necessarily like that, for obvious reasons, and many HN participants are part of that cohort.
Something that I haven't seen in the other comments: whoever controls the AI has a lot of power. Now that people seem to move from Google to LLMs and blindly believe whatever they read, it feels scary to know that those who own the LLMs are often crazy and dangerous billionaires.
AI has (some limited) benefits, and many huge and proven drawbacks (used in Israel's genocide, used to disrupt elections in the US and Europe, used to spy on people).
So yes, there's healthy criticism of blindly allowing a few multi-billionaires to own a tech that can rip apart the fabric of our societies.
My perception of AI is that it serves two goals:
1) A keyword to game investment capital out of investors
2) A crutch for developers who should probably be replaced by AI
I do believe there is some utility and value behind AI, but it's still so primitive that it amounts to a smarter auto-complete.
Any result produced by current AI is suspect until proven otherwise.
Any result comes at very high relative cost in terms of computing time and energy consumed.
AI is the polar opposite of traditional logic based computing --- instead of highly accurate and reliable facts at low cost, you get unreliable opinions at high cost.
There are valid use cases for current AI, but it is not a universal replacement for the logic-based programming we all know and love --- not even close. Suggesting otherwise smacks of snake oil and hype.
Legal liability for AI pronouncements is another ongoing concern that remains to be fully addressed in the courts. One example: an AI chatbot accused a pro basketball player of vandalism due to references it found to him "throwing bricks" during play.
Just give vibe coding a go on a moderately complex system and you'll realize that this is only hype, nothing concrete.
It's a shame that this "thing" has now monopolized tech discussions.
1) VC-driven hype. Stop claiming to have invented God, and people will stop making fun of you for saying so.
2) Energy/environment. This stuff is nearly as bad as crypto in terms of energy input and emissions per generated value.
3) A LOT of creatives are really angry at what they perceive as theft and "screwing over the little guy". Regardless of whether you agree with them, you can't just ignore them and expect their arguments to go away.
My main objection to AI is that sooner or later, one of the AI labs is going to create an entity much "better at reality" (capable) than people are, which maybe would turn out OK or not-too-bad if the lab would retain control over the entity, but no one has a plan that would enable a person or a group to retain control of such an entity. IMHO current AI models are controllable only because they're less cognitively capable than the people exercising control over them.
I don't claim to be able to predict when such an AI that is much more capable than people will be created beyond saying that if the AI labs are not stopped (i.e., banned by the major governments) it will probably happen some time in the next 45 years.
Practical AI vs. hype AI is where I see the biggest distinction.
I haven't seen people negatively comment on simple AI tooling, or cases where AI creates real output.
I do see a lot of hate on hype-trains and, for what it's worth, I wouldn't say it's undeserved. LLMs are currently oversold as this be-all end-all AI, while there's still a lot of "all" to conquer.