On Learning:
My wife, a high school teacher, remarked to me the other day “you know, it’s sad that my new students aren’t going to be able to do any of the fun online exercises that I used to run.”
She’s all but entirely removed computers from her daily class workflow. Almost to a student, “research” has become “type it into Google and write down whatever the AI spits out at the top of the page” - no matter how much she admonishes them not to do it. We don’t even need to address what genAI does to their writing assignments. She says this is prevalent across the board, both in middle and high school. If educators don’t adapt rapidly, this is going to hit us hard and fast.
I notice a couple of things in the pro-AI [1] posts: All are written in a lengthy style like Steve Yegge at his peak. All are written by ex-programmers who are now on the management/founder side. All of them cite programmer friends who claim that AI is useful.
It is very strange that no real open source project uses "AI" in any way. Perhaps these friends work on closed source and say what their manager wants them to say? Or they no longer care? Or they work in "AI" companies?
[1] He does mention return on investment doubts and waste of energy, but claims that the agent nonsense works (without public evidence).
Angst is the best way to put it.
I use AI every day, I feel like it makes me more productive, and I'm generally supportive of it.
But the angst is something else. When nearly every tech related startup seems to be about making FTEs redundant via AI it leaves me with a bad feeling for the future. Same with the impact on students and learning.
Not sure where we go from here. But this feels spot on:
>I think that the best we can hope for is the eventual financial meltdown leaving a few useful islands of things that are actually useful at prices that make sense.
> I really don’t think there’s a coherent pro-genAI case to be made in the education context
My own personal experience is that Gen AI is an amazing tool to support learning, when used properly.
Seems likely there will be changes in higher education to work with gen AI instead of against it, and it could be a positive change for both teachers and students.
> I really don’t think there’s a coherent pro-genAI case to be made in the education context.
I think it’s simple: the reign of the essay is over. Educators must find a new way to judge a student’s understanding.
Presentations, artwork, in-class writing, media, discussions and debates, skits, even good old-fashioned quizzes all still work fine for getting students to demonstrate understanding.
As the son of two teachers I remember my parents spending hours in the evenings grading essays. While writing is a critical skill, and essays contain a good bit of information, I’m not sure education wasn’t overindexing on them already. They’re easy to assign and grade, but there’s so much toil on both ends unrelated to the core subject matter.
Wholeheartedly agree. I can't help but think that proponents of LLMs are not seriously considering the impact they will have on our ability to communicate with each other, or to reason of our own accord without the assistance of an LLM.
It confounds me how these people would trust the same companies who fueled the decay of social discourse via the internet with the creation of AI models which aim to encroach on every aspect of our lives.
It’s not angst to see the people who run the companies we work for “encourage” us to use Claude to write our code knowing full well it’s their attempt to see if they really can fire us without a hit in “productivity”.
It’s not angst to see students throughout the entire spectrum end up using ChatGPT to write their papers, summarize 3 paragraphs, and use it to bypass any learning.
It’s not angst to see people ask a question of an LLM and take what it says as gospel.
It’s not angst to understand the environmental impact of all this stupid fucking shit.
It’s not angst to see the danger in generative AI not only just creating slop, but further blurring the lines of real and fake.
It’s not angst to see the vast amount of non-consensual porn being generated of people without their knowledge.
Feel like I’m going fucking crazy here, just day after day of people bowing down at the altar and legit not giving a single fuck about what happens after rofl
> Just to be clear, I note an absence of concern for cost and carbon in these conversations. Which is unacceptable. But let’s move on.
hold on, it's very simple. here's a one-liner even degrowthers would love: extra humans cost a lot more in money and carbon than it costs to have an LLM spin up and down to do work that would otherwise not get done.
One aspect is missing: content creation.
AI is completely destroying the economics of putting out free information. LLMs still rely on human beings to experience and document the real world, but they strip those humans of the reward. Creators lose the income, credit, and community that come with having an audience. In the long term, I fear that a lot of the quality information will disappear because it's no longer worth creating.
I wrote a bit about this earlier in a very relevant thread: https://news.ycombinator.com/item?id=44099570
The real value in vibe coding does not come to developers who are already out at the bleeding edge of technology. Vibe coding's true value is for people who know very little about programming, who know just enough to be able to debug a type issue, or who have the time to read and research the issues outside of the general structure provided by LLMs. I've never created an Android app before. But I can do that in 24 hours now.
These tools are two years old. They're vastly superior to their versions from two years ago. As people continue to use them and provide feedback, these tools will keep improving, becoming better and better at giving customers (non-programmers) access to features, tools, and technologies that they would otherwise have to rely on a team of developers for.
Personally, I cannot afford the thousands of dollars per hour required to retain a team of top-shelf developers for some crazy harebrained Bluetooth automation for my house lighting scheme. I can, however, spend a weekend playing around with Claude (and ChatGPT and...). And I can get close enough. I don't need a production tool. I just need the software to do the little thing, the two seconds of work, that I don't want to do every single day.
Who's created a RAG pipeline? Not me! But I can walk through the BS necessary to get PostgreSQL, FastAPI, and Llama 3 set up so that I can start automating email management.
And that's the beauty: I don't have to know everything anymore! Nor do I have to spend months trying to parse all the specialized language surrounding the tools I'll need to implement. I just need to ask the questions I don't have answers for, making sure that I ask enough that the answers tie back into what I do know.
And LLMs and vibe coding do that just fine.
> I really don’t think there’s a coherent pro-genAI case to be made in the education context
I use ChatGPT as an RNG of math problems to work through with my kid sometimes.
I disagree with genAI not having an education use case.
I think a useful LLM for education would be one with heavy guardrails, which is “forced” to provide step-by-step back and forth tutoring instead of just giving out answers.
Right now hallucinations would be problematic, but assuming it's in a domain like math (and maybe combined with something like Wolfram to verify outputs), I could see this theoretical tool being very helpful for learning mathematics, or even other sciences (rough sketch below).
For more open-ended subjects like English, history, etc., it may be less useful.
Perhaps only as a demonstration: maybe an LLM is prompted to pretend to be a peasant from medieval Europe, and with text-to-speech we could have students interact with it as a group and ask it questions. In this case, maybe the LLM is trained only on historical texts from specific time periods, with settings tuned to be more deterministic and reduce hallucinations.
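Coming back to the math-tutor idea: a minimal sketch of what those guardrails might look like, assuming the OpenAI Node SDK (the prompt wording, model name, and the verifier comment are my own placeholders, not anything from the article):

```typescript
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

// The guardrail lives in the system prompt: tutor step by step, never answer outright.
const TUTOR_PROMPT = `You are a math tutor. Never state the final answer.
Ask one guiding question at a time and wait for the student's reply.
If a step is wrong, point at the step, not at the fix.`;

type Turn = { role: "user" | "assistant"; content: string };

async function tutorReply(history: Turn[]): Promise<string> {
  const res = await client.chat.completions.create({
    model: "gpt-4o-mini", // placeholder model name
    temperature: 0.2,     // keep responses relatively deterministic
    messages: [{ role: "system", content: TUTOR_PROMPT }, ...history],
  });
  // A verifier pass (e.g. a Wolfram|Alpha query) could check any arithmetic
  // in the reply before it reaches the student.
  return res.choices[0].message.content ?? "";
}
```

The point being that the "never give the answer" constraint lives server-side, out of the student's reach, rather than in whatever the student types.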
I finally tried Claude Code for most of last week on a toy Typescript project of moderate complexity. It's supposedly the pinnacle of agentic coding assistants, and I tend to agree, finding it far ahead of Copilot et al. Seeing it working was like a bit of magic, and it was very addictive. It successfully distracted me from my main projects that I code mostly by hand.
That said, and it's kind of hard to express this well, not only is the actual productivity still far from what the hype suggests, but I regard agentic coding as being like a bad addictive drug right now. The promised magic always seems just around the corner: just one more prompt to finally fix the rough edges of what it has spat out, just one more helpful hint to put it on the right path/approach, just one more reminder for it to actually apply everything in CLAUDE.md each time...
Believe it or not, I spent several days with it, crafting very clear and specific prompts, prodding it with all kinds of hints, even supplying it with legacy code that mostly works (albeit written in C#), and at the end it had written a lot of code that almost works, except that a lot of simple things just wouldn't work, no matter how much time I spent with it.
In the end, after a couple of hours of writing the code myself, I had a high-quality type design, the basic logic, and a clear path to implementing all the basic features.
So, I don't know, for now even Claude seems mostly useful only as a sporadic helper within small contexts (drafting specific functions, code review of moderate amounts of code, relatively simple refactoring, etc). I believe knowing when AI would help vs slow you down is becoming key.
For this tech to improve, maybe a genetic/evolutionary approach would be needed. Given a task, the agent should launch several models to work on the problem, with each model also launching several randomized approaches to working on the problem. Then the agent should evaluate all the responses and pick the "best" one to return.
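This idea exists in the literature as best-of-N sampling. A rough sketch of the selection loop, where `generate` and `score` are hypothetical placeholders (in practice, scoring might be "does it compile, do the tests pass"):

```typescript
type Candidate = { model: string; output: string };

// `generate` is one model call with its own sampling randomness;
// `score` is some fitness function over the output.
async function bestOfN(
  task: string,
  models: string[],
  samplesPerModel: number,
  generate: (model: string, task: string) => Promise<string>,
  score: (output: string) => Promise<number>,
): Promise<Candidate> {
  const candidates: Candidate[] = [];
  for (const model of models) {
    for (let i = 0; i < samplesPerModel; i++) {
      candidates.push({ model, output: await generate(model, task) });
    }
  }
  // Evaluate every response and return the fittest one.
  const scored = await Promise.all(
    candidates.map(async (c) => ({ c, s: await score(c.output) })),
  );
  scored.sort((a, b) => b.s - a.s);
  return scored[0].c;
}
```

Of course, this multiplies the compute (and carbon) per task by N, which cuts against the cost concerns elsewhere in the thread.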
> Go programming language is especially well-suited to LLM-driven automation. It’s small, has a large standard library, and a culture that has strong shared idioms for doing almost anything
+1 to this. thank you `go fmt` for uniform code (even a culture of uniform test style!). thank you culture of minimal dependencies. and of course the Go standard library and static/runtime tooling. thank you simple code that is easy to write for humans... and, as it turns out, for AIs too.
> at the moment I’m mostly in tune with Thomas Ptacek’s My AI Skeptic Friends Are All Nuts. It’s long and (fortunately) well-written and I (mostly) find it hard to disagree with.
Ptacek has spent the past week getting dunked on in public for that article. I don't think aligning with it lends you much credibility.
> If you’re interested in that thinking, here’s a sample; a slide deck by a Keith Riegert for the book-publishing business which, granted, is a bit stagnant and a whole lot overconcentrated these days. I suspect scrolling through it will produce a strong emotional reaction for quite a few readers here. It’s also useful in that it talks specifically about costs.
You're not wrong here. I read the deck and the word that comes to mind is "disgusting". Then again, the morally bankrupt have always done horrible things to make a quick buck — AI is no different.
> horrifying survey of genAI’s impact on secondary and tertiary education.
I agree with this. It's probably terrible for structured education for our children.
The one and only caveat: self-driven language learning.
The one and only actual use (outside of generating funny memes) I've had from any LLM so far is language learning. That I would pay for. Not $30/month, mind you... but something. I ask the model to break down a target-language sentence for me, explaining each and every grammar point, and it does so very well, sometimes even going on to explain the cultural relevance of certain phrases. This is great.
I've not found any other use for it yet, though. As a game-engine programmer (C++), the code I write nowadays is quite deliberate and relatively little compared to a web developer's (I used to be one; I'm not pooping on web devs). So if we're talking about the time/cost of having me as a developer work on the game engine, I'm not saving any time or money by first asking Claude to type what I was going to type anyway. And it's not advanced enough yet to hold the context of our entire codebase spanning multiple components.
Edit: Migaku [https://migaku.com/] is a great language-learning application that uses this.
As OP, I'm not sure it's worth all that CO2 we're pumping into our atmosphere.
> I think about the carbon that’s poisoning the planet my children have to live on.
Tbh I think we’re going to need a big breakthrough to fix that anyway. Like fusion etc.
A bit less proompting isn't going to save the day.
That’s not to say one shouldn’t be mindful. Just think it’s no longer enough
Poor HN.
Is there a glimpse of the next hype train we can prepare to board once AI gets dulled down? This has basically made the site unusable.
This is probably the best opinion piece I've read so far on GenAI
> My input stream is full of it: Fear and loathing and cheerleading and prognosticating on what generative AI means and whether it’s Good or Bad and what we should be doing. All the channels: Blogs and peer-reviewed papers and social-media posts and business-news stories. So there’s lots of AI angst out there, but this is mine. I think the following is a bit unique because it focuses on cost, working backward from there. As for the genAI tech itself, I guess I’m a moderate; there is a there there, it’s not all slop.
Let’s see.
> But, while I have a lot of sympathy for the contras and am sickened by some of the promoters, at the moment I’m mostly in tune with Thomas Ptacek’s My AI Skeptic Friends Are All Nuts. It’s long and (fortunately) well-written and I (mostly) find it hard to disagree with.
So the Moderate is a Believer. But it’s offset by being concerned about The Climate and The Education and The Investments.
You can try to write a self-aware/moment-aware intro. It’s the same fodder for the front page.
I think the concerns about climate and CO2 emissions are valid but not a show stopper. The big picture here is that we are living through two amazing revolutions at the same time:
1) The emergence of LLMs and AIs that have turned the Turing test from science fiction into basically an irrelevance. AI is improving at an absolutely mind-boggling rate.
2) The transition from a fossil-fuel-powered world to one that will be net zero in a few decades. The pace over the last five years has been amazing. China is rolling out amounts of solar and batteries that were unthinkable in even the most optimistic predictions a few years ago. The rest of the world is struggling to keep up, and that's causing some issues, with some countries running backward (mainly the US).
It's true that a lot of AI is powered by a mix of old coal plants, cheap Texan gas, and a few other things that aren't sustainable (or cheap, if you consider the cleanup cost). However, I live in the EU; we just got cut off from cheap Russian gas, are now running on expensive imported gas (e.g. from Texas), and have concerns about data sovereignty that are forcing companies like OpenAI, Meta, and Google to use local data centers to serve their European users. Which means that stuff is powered with locally supplied electricity, a mix of old dirty legacy infrastructure and new, more or less clean infrastructure. That mix is shifting rapidly towards renewables.
The thing is, that old dirty infrastructure has been on a downward trajectory for years. Not a lot of new gas plants are being built (LNG is not cheap), and coal plants are going extinct in a hurry because they are dirty and expensive to operate. The few gas plants that are still being built sit in standby mode much of the time and lose money, because renewables are cheaper. Power here is expensive but relatively clean. The way to get prices down is not to import more LNG and burn it, but to do the opposite.
What I like about things that increase demand for electricity is that they generate investment in clean energy and actually accelerate the transition. The big picture here is that getting to net zero is going to vastly increase demands on power grids. If you add up everything needed for industry, transport, domestic and industrial heating, aviation, etc., it's a lot. But the payoffs are also huge. People think of this as a cost; that's short-term thinking. The big picture is long term, and the payoff is net zero and cheap power, making energy-intensive things both affordable and sustainable. We're not there yet, but we're on a path towards it.
For AI that means, yes, we need terawatts of power, and some of the uses of AI seem frivolous and not that useful. But the big picture is that this is changing a lot of things as well. I see power needs as a challenge rather than a problem or a reason to sit on our hands. It would be nice if that power were cheap; it so happens that the cheapest way to generate power right now is renewables. I don't think dirty power is long-term smart, profitable, or necessary, and we could definitely do more to speed up its demise. But at the same time, this increased pressure on our grids is driving the very changes we need to make that happen.
> On the money side? I don’t see how the math and the capex work. And all the time, I think about the carbon that’s poisoning the planet my children have to live on.
The "math and capex" are inextricably intertwined with "the carbon". If these tools have some value, then we can finally invest in forms of energy (i.e. nuclear) that will solve the underlying problem, and we'll all be better off. If the tools have no net value at a market-clearing price for energy (as purported), then it won't be a problem.
I mean, maybe the productive way to say this is that we should more formally link the environmental cost of energy production to the market cost of energy. But as phrased (and I suspect, implied), it sounds like "people who use LLMs are just profligate consumers who don't care about the environment the way that I do," and that any societal advancement that consumes energy (as most do) is subject to this kind of generalized luddite criticism.
Minor off-topic quibble about streams: I’ve been learning about network programming for realtime multiplayer games, specifically about input and output streams. I just want to voice that the names are a bit confusing due to the perspective I adopt when I think about them.
Input stream = output from the perspective of the consumer. Things come out of this stream that I can programmatically react to. Output stream = input from the perspective of the producer. This is a stream you put stuff into.
…so when this article starts “My input stream is full of it…” the author is saying they’re seeing output of fear and angst in their feeds.
Am I alone in thinking this is a bit unintuitive?
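For what it's worth, the convention makes more sense once the perspective is pinned to your own process. A quick Node sketch (host and port are just placeholder values; net.Socket is a duplex stream, so it plays both roles):

```typescript
import * as net from "net";

const socket = net.connect(8080, "example.com"); // placeholder host/port

// The "input" stream: bytes flowing *into* my process. I read from it,
// even though, from the producer's perspective, this is their output.
socket.on("data", (chunk) => console.log("got:", chunk.toString()));

// The "output" stream: bytes flowing *out of* my process. I write to it,
// and the peer on the other end reads it as their input.
socket.write("hello\n");
```

So the author's "input stream is full of it" just means "my feeds are full of incoming angst," named from the reader's side.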
I guess we're all trying to figure out where we sit along the continuum from anti-AI Luddite to all-in.
My main issue with vibe coding etc is I simply don't enjoy it. Having a conversation with a computer to generate code that I don't entirely understand and then have to try to review is just not fun. It doesn't give me any of the same kind of intellectual satisfaction that I get out of actually writing code.
I'm happy to use Copilot to auto-complete, and ask a few questions of ChatGPT to solve a pointy TypeScript issue or debug something, but stepping back and letting Claude or something write whole modules for me just feels sloppy and unpleasant.