I coined the term "agenticist" to describe this mindset shift. It's not about technical skills (I'm no ML expert), but about treating AI as a thinking partner rather than a productivity tool.
But I'm curious what HN thinks. Are we just rebranding existing practices? Or is there something fundamentally different about approaching problems with AI as a collaborator from the start?
Four things I've noticed agenticists do differently:
- They ask "what's the agentic approach?" before defaulting to traditional solutions
- They focus on emergence over control (letting conversations evolve rather than scripting everything)
- They treat AI failures as thinking opportunities, not bugs
- They bring AI into strategy discussions, not just execution
The controversial part: I think this mindset matters more than knowing how to fine-tune models or write perfect prompts.
What do you think – is "agenticist" a useful distinction, or just consultant BS?
And if you work this way, what would you call it?