amazing meta-level joke spanning months of effort... all articles on the author's site are themselves AI generated, and the article purporting to bemoan the impact of AI is itself AI-generated content
If you need authority over someone to persuade them then your argument isn't compelling enough, either because your reasoning is flawed or you're not communicating it well enough.
In the case described in the article, the author believed they were the expert, and believed their wife should accept their argument on that basis alone. That isn't authority; that's ego. They were wrong, so either they weren't drawing on their expertise or they weren't as much of an expert as they thought, which often happens when you're talking about a topic that's only adjacent to your actual expertise. This is the "appeal to authority" logical fallacy. It's easy to believe you're the authority in question.
...we've allowed AI to become the authority word, leaving the rest of us either nodding along or spending our days explaining why the confident answer may not survive contact with reality.
The AI aspect is irrelevant. Anyone could have pointed out the flaw in the author's original argument, and if it was reasoned well enough he'd have changed his mind. That's commendable. We should all be like that instead of dogmatically holding on to an idea in the face of a strong argument against it. The fact that argument came from some silicon and a fancy random word generator just shows how cool AI is these days. You still have to question what it's saying though. The point is that sometimes it'll be right. And sometimes it won't. Deciding which it is lies entirely with us humans.
Is anyone else finding that the real job now is pushing back against AI-backed "suggestions" from clients, managers, and even so-called "AI experts"? They sound confident, but too often collapse in design and practice.
How are you handling this shift? Do you find yourself spending more time explaining "why not" than actually building?
This isn't an issue for experienced software engineers, who understand the limitations of these LLMs and will scrutinise the chatbot's answer if they see a bug it generated. Non-engineers and vibe-coders won't know any better and click "Accept all".
Everyone wants to be a "programmer", but in reality no one wants to maintain the software; they assume an "AI" can do all of it, i.e. vibe coding.
What they're really signing up for is the increased risk that someone more experienced will break their software left and right, costing them $$$ until they end up paying someone to fix it.
A great time to break vibe coded apps for a bounty.
Why on earth hadn't the wife bought the domain name before it even got to this stage? Neither the author nor the AI will be able to argue with a successful project. 'Til the thing is out there, this is just chin-stroking.
The domain name incident absolutely isn't a strong enough case to justify pivoting a career.
The clients suggesting features and changes might be a reason to pivot a career, but towards programming and away from product/system development. I mean, let the client make the proposal, accept the commission at a lower rate that doesn't include what you'd have charged to design it, and then build it. AI ought to help get through these things faster, anyway, and you've saved time on design by outsourcing to the client. In theory, you should have spare time for a break, a hobby, or to repeat this process with the next client that's done the design work for you.
I agree with all the points about agency, confidence, experience (the author used "authority"). We must not let LLMs rob us of our agency and critical thinking.
Would you accept the view of a total stranger? No, you would ask someone else to review that opinion. Same with AI: don't just fire up ChatGPT and call it done. Cross-reference the answer with other LLMs.
Sunday morning insight. Relying on AI is like copying someone else's homework solution.
A "C" student.
Just cross-check and recheck everything it tells you. Like how people are discovering that writing extensive unit tests, integration tests, etc. for AI-generated code is great for software engineering. It works for building your world-view too.
I think a lot of people are not in the habit of doing this (just look at politics) so they get easily fooled by a firm handshake and a bit of glazing.
So, what was the domain? I'm dying to know! I want to pass my own judgement on it.
I loved this article. It put in words a subliminal brewing angst I've been feeling as I happily use LLMs in multiple parts of my life.
Just yesterday, a debate between a colleague and me devolved into both of us quipping "well, the tool is saying…", as we both tried to out-authoritate each other.
>Mind The Nerd is an independent publication built for thinkers, creators...
Every article on this website looks to be almost wholly AI generated. Pure slop.
Trust me, put in the work and you'll thank yourself for it; you'll learn to enjoy the process, and your content will be more interesting.
I saw no mention of Brandolini's law [1], or the Bullshit Asymmetry principle. In fact, the piece hints at a corollary to it, which is: bullshit from "an authoritative sounding source" takes 100x as much effort to refute. There is a bias in us to prefer form over function. Persuasion and signaling are things. I know this because I have to battle code-review tools which needlessly put out sequence diagrams and nicely formatted README.md for.every.single.PR. Just reading those is tiresome.
Didn't expect this one to get flagged; thank you everyone for the great feedback. Learning as I go.
> And these AI-fueled proposals aren't necessarily bad. That's what makes them so tricky. [...] Every idea has costs, trade-offs, resources, and explanations attached. And guess who must explain them? Me.
Don't explain. Don't argue. Simply confirm that the person fully understands what they're asking for despite using AI to generate it. 99% of the time the person doesn't. 50% of the time the person leaves the conversation better off. The other 50% the lazy bastards get upset and they can totally fuck off anyway and you've dodged a bullet.
Office jobs that pay enough to achieve "middle-class" lifestyles are decreasing and mostly closed to new generations. Software engineering and similar STEM fields that were once one of the few that promised the illusion of meritocratic security and class mobility are fading away. And like Upton Sinclair's quote[0], when I sounded the alarm (prematurely) ~2018-2021 in industry to other software engineers that big salaries and high demand were on the decline, I was met with resistance and disbelief.
What the future holds for the 99.999% of humanity who aren't owners or somehow locating a lucrative niche specialty is more or less a global flattening into similar states of declining real wages for almost everyone. Meanwhile, megacorp capital owners and their enabling corrupt government regimes increasingly resemble racketeering and organized-crime-syndicate aristocracies, with extreme wealth-distribution disparities that generally aren't getting any better.
The situation of greater desperation for income invariably drives people to non-ideal choices:
a. Find a new field of work that makes less money
b. Sacrifice ethics to work at companies that cause greater harm in exchange for more money
c. Assume the on-going risks of launching a business or private consulting practice
d. Stay and agree to greater demands for productivity, inconvenience, bureaucracy, and micromanagement for less pay
e. Give up looking for work, semi-retire, and move somewhere, like another state or country, where the cost of living is cheaper
---
0. It's difficult to get a man to understand something when his salary depends on his not understanding it.
If you use AI to communicate with me, you won't get a reply from me. I have no further interest in communication with you.
Nurturing is such an important step in the process that it appears three times in the diagram.
It sounds less like you "give in" to AI and more like you have some weird opposition to your wife's ideas and always believe her to be wrong.
I stopped reading after the first paragraph or two.
[dead]
> And these AI-fueled proposals aren't necessarily bad. That's what makes them so tricky. They're often plausible, sometimes even smart, but some come with strings. Every idea has costs, trade-offs, resources, and explanations attached. And guess who must explain them? Me. The guy who's now debating not just people, but people plus the persuasive ghostwriter in their pocket.
Don't spend your time analyzing or justifying your position on an AI-written proposal (which by definition someone else did not spend time creating in the first place). Take the proposal, give it to YOUR AI, and ask it to refute it. Maybe nudge it in your desired direction based on a quick skim of the original proposal. I guarantee you the original submitter probably did something similar in the first place.