> No one remembers how to fix it.
This thought exercise also gets interesting with software development. There are few prospects available for new grads and junior devs now. Relentless offshoring, headcount reductions, and AI promises from CEOs have hollowed out the tech landscape. There are very few opportunities for young people to professionally develop the knowledge to keep the systems running.
And now most startups are focused on eliminating staff via AI: the very people who would keep the systems running. I'm not sure where all of this leads in a few years.
Just go back to books or other sources of knowledge and restart from scratch. It shouldn't take long to get your thinking back.
W.R.T. any large industry crash, no one's going to care if you can do your Angular (or specific tech skill) stuff in 1 day vs. 5 days, so why the emphasis on speed in this scenario? Either System 1 or System 2 thinking is fine here.
We've enjoyed some pretty great technical advances for the past 40 years, even with a 10 year "dark age" we're still net ahead. We can rebuild and relearn a lot of stuff in 10 years.
A lot of the comments on HN lately are rightfully focused on this formative brain exercise that leads to intuition and conceptual understanding, and that is chiselled away by the shortcuts GenAI provides. I wonder where the productivity gains from GenAI and the drop-off in the quality of our own thinking intersect.
> In E.M. Forster’s short story The Machine Stops, he paints a future where a vast machine handles every aspect of human life. People live isolated lives, fully dependent on the machine. They don’t know how it works. They only know how to ask it for things. When the machine breaks down, society collapses. No one remembers how to fix it.
I used to think about this in math class. I could figure out what to do with my calculator most of the time, but I didn't have any intuitive sense of how things worked. The sine, cosine, and tangent functions are still just black boxes to me; I have no idea what they actually do or how I would calculate their values. I often daydreamed about finding myself on a desert island, needing to make use of trigonometry to rebuild civilization, but not being able to find the angles that I needed.
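For what it's worth, the desert-island version is within reach: sine can be computed by hand from its Taylor series, sin(x) = x - x³/3! + x⁵/5! - ..., with each term derived cheaply from the previous one. A minimal sketch in Python (the function name and term count are my own choices, not anything from the article):

```python
import math

def sine(x, terms=10):
    """Approximate sin(x) via its Taylor series.

    sin(x) = x - x^3/3! + x^5/5! - ...
    Each term is the previous one times -x^2 / ((2n)(2n+1)),
    so no factorials need to be computed directly.
    """
    # Reduce x into [-2*pi, 2*pi] so the series converges quickly.
    x = math.fmod(x, 2 * math.pi)
    term = x
    total = x
    for n in range(1, terms):
        term *= -x * x / ((2 * n) * (2 * n + 1))
        total += term
    return total
```

Ten terms is already accurate to well beyond what a hand calculation would need; cosine and tangent follow the same pattern.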
Lots of other skills are lost this way. I don’t know how to join wood or sew a stitch, but I do know how to operate a nail gun and work a sewing machine. I couldn’t fix either device, but if I couldn’t find anyone to fix them and couldn’t obtain new ones, I would likely have bigger problems to worry about. Most people will only ever need to view these devices as black boxes; the benefits of specialization generally offset the costs introduced by abstraction, absent major market disruptions (e.g. supply chain breakdowns, changing regulatory frameworks, etc.). Most people in human history have spent their lives as generalists on a farm. This hedges the individual against a lot of risk (the generalist can likely always find some work to do), but the real strides in risk management are made by specialists living in urban settlements.
If there were a massive EMP that fried all the chips around me, I would be somewhat useless for a short while.
Most of what I've done is in software. I could not build a computer from electronic parts, not even a full adder from memory. Maybe I could read some schematics, but most measurement devices use chips as well, so it would be very difficult.
I think preparation should focus on skills useful in any environment.
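(For the curious: a full adder is small enough to reconstruct from its truth table. A hedged sketch in Python, with gate logic written out and names of my own invention; this is the textbook construction, not anything specific to the comment above.)

```python
def full_adder(a, b, carry_in):
    """One-bit full adder from basic gates:
    sum bit  = a XOR b XOR carry_in
    carry out = majority(a, b, carry_in)
    """
    s = a ^ b ^ carry_in
    carry_out = (a & b) | (a & carry_in) | (b & carry_in)
    return s, carry_out

def ripple_add(x_bits, y_bits):
    """Chain full adders into a ripple-carry adder.

    Bit lists are little-endian (least significant bit first).
    """
    carry = 0
    out = []
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    out.append(carry)  # final carry becomes the top bit
    return out
```

For example, `ripple_add([1, 0, 1], [1, 1, 0])` adds 5 and 3 to give `[0, 0, 0, 1]`, i.e. 8.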
Having AI write the kind of software humans write is not even scratching the surface of what will probably happen. Just as generative CAD tools produce mechanical designs that would be prohibitively complex and time consuming for humans to design and verify, we're eventually going to see, and by eventually, I mean pretty soon, software no human could have written.
It’s a fine enough journal entry, and I agree with the underlying sentiment of the piece, but can I crowdsource an explanation of the conclusive pith?
> Learning to learn is a noble idea. But more important is learning to unlearn, and knowing when to resist the comfort of automation.
I feel strongly about “teaching people to teach themselves” which seems a direct analog to “learning to learn”, but I am at a complete loss for what “learning to unlearn” means, especially as it relates to resisting automation.
Is the idea that you need to “unlearn” wanting to “learn” automation so you can keep “learning” more deeply about things you have already “learned”?
> When the machine breaks down, society collapses. No one remembers how to fix it.
The example is unrealistic because that's a very easily anticipated single point of failure.
We've been building systems without single points of failure for millennia.
There will never be only one machine. Already in AI we have dozens of models. They come from multiple diverse - unaffiliated! - cultures, countries, and ideologies.
I'm not convinced that I should worry about people that would let AI run their lives.
They're gonna pay the price for their foolishness, and paying the price is the universe's mechanism for keeping itself in balance.
Take what's going on in the US, for example...
If we had to colonize another planet I think we'd need 10,000 of the absolute best in just semiconductor fabrication/design if you wanted to create computers.
If farmers had blogs during industrialization, I suspect there would’ve been a lot of this.
Most people don’t know how to grow a potato. So?
Maybe in our AI future we will stop referencing pop science (i.e., not replicated, but salable).
> it’s easy to get by without deeply understanding the code you’re deploying.
I feel that's been an issue for a long time, with the heavy reliance on dependencies. AI tools are really just a furtherance of the model already in use.
I have the luxury of developing software as a craft, not as a vocation. I deliberately do stuff "the hard way" because I feel more fulfilled doing so.
Give us this day our daily AI fear-mongering story.
On the other hand, maybe one day "thinking without AI" will become as absurd a notion as "thinking without a brain".
You can read the Forster short story referenced at https://standardebooks.org/ebooks/e-m-forster/short-fiction/... . It’s impressive speculative fiction by any measure, let alone for 1909.