For the past six months I've been working on a bunch of different projects, trying out new stuff all the time.
Every time I do, I add another layer of AI automation/enhancement to my personal dev setup, with the goal of seeing how far I can extend my ability to produce while still delivering high-quality projects.
I definitely wouldn't say I'm at 10x what I could do before across the board, but a solid 2-3x on average.
In some respects, like testing, it's perhaps 10x, because having proper test coverage is essential to letting agentic AI run by itself in a git worktree without fearing that it will fuck everything up.
I do dream of a scenario where I could have a company that's equivalent to 100 or 1,000 people with just a small team of close friends and trusted coworkers who are all using this kind of tooling.
I think the feel of a small company is just better and more intimate, and it suits me more than expanding and growing by hiring.
I think we're going to see more stories of shareholders wetting themselves over layoffs than we are going to see higher-quality software produced. Everyone is claiming huge productivity gains, but software quality and the rate of new products being created seem at best unchanged. Where is all this new amazing software? It's time to stop all the talk and show something. I don't care that your SQL query was handled for you; that's not the bigger picture, that's just talk.
One area of business where I'm struggling is how boring it is talking to an LLM. I enjoy standing at a whiteboard thinking through ideas, but more and more I see the push for "talk to the LLM, ask the LLM, the LLM will know." The LLM will know, but I'd rather talk to a human about it. Also, in pure business terms, it takes me too long to unlock nuances that an experienced human just knows; I have to do a lot of "yeah, but" work, way more than I would with an experienced human. I like LLMs and I push for their use, but I'm starting to notice something here that I can't put my finger on. I guess they're not wide enough to capture deep nuances? As a result, they seem pretty bad at understanding how a human will react to their ideas in practice.
I'm not entirely convinced this trend is because AI is letting people "manage fleets of agents".
I do think the trend of the tiny team is growing, though, and I think the real driver was the layoffs and downsizings of 2023. People were skeptical that Twitter would survive Elon's massive staff cuts, and technically the site has survived.
I think the 2016-2020 era of empire building is coming to an end. Valuing a manager by their number of reports is now out of fashion, and there's no longer any reason to inflate team sizes.
When I worked at a startup that tried to maximize revenue per employee, it was an absolute disaster for the customer. There was zero investment in quality: no dedicated QA, and everyone was way too busy to worry about it until something became a crisis. Code reviews were actively discouraged because they took people off their assigned work to review other people's work. Automated testing and tooling were minimal. If you go to the company's subreddit, you'll see daily posts about major problems and people threatening class-action lawsuits. There were major privacy and security issues that were simply ignored.
AI helps you churn out code faster, but you still need a good understanding of that code. Just because the writing part is done quicker doesn't mean a developer can now shoulder more responsibility. That will only lead to burnout, because the human mind can only handle so much responsibility.
I read a couple of books the other day: The Million-Dollar, One-Person Business and Company of One. They both discuss how, with advances in the code to build a product with, the infrastructure to host it (AWS, so you don't need to build data centers), and the network of people to sell to (the internet in general, and social media more specifically, both organic and ads-based), the likelihood of running a large multi-million-dollar company all by yourself increases in a way it never has before in human history.
They were written before the advent of ChatGPT and LLMs in general, especially the coding-related ones, so the ceiling must be even higher now. This is doubly true for technical founders: LLMs aren't perfect, and if your vibed code eventually breaks, you'll need to know how to fix it. But yes, in a future with agents doing work on your behalf, maybe your own work shrinks more and more too.
This may date me, but it feels like 1999 again, when a small startup could disrupt an industry. Not just because of what LLMs can do in terms of delivered product, but because a small team can often turn on a problem so much faster than a big one can. I really hope there are hundreds, if not thousands, of three-to-five-person companies forming in basements right now, ready to challenge the big players again.
At least for C++, I try to use Copilot only for generating tests and writing ancillary scripts. tbh, it's only through hard-won lessons and epic misunderstandings and screw-ups that I've built a mental model I can use to check and verify what it's attempting to do.
As much as I am definitely more productive on some dumb "JSON plumbing" feature of just adding a field to some protobuf, shuffling around some data, etc., I still can't quite trust it not to make a very subtle mistake, or to generate code in the same style as the current codebase (even when using the system prompt to tell it as much). I've had it make obvious mistakes that it doubles down on (either pushing back or not realizing in the first place) until I practically scream at it in the chat and it says "oopsie haha my bad", e.g.
```c++
class Foo
{
    int x_{};

public:
    bool operator==(Foo const& other) const noexcept
    {
        return x_ == x_; // <- what about other.x_?
    }
};
```
I just don't know at this point how to get these models (Gemini, Claude, or any of the GPTs) to stop dropping the same subtle mistakes, which are very easy to miss in the prolific amount of code they tend to write.
That said, saying "cover this new feature with a comprehensive test suite" saves me from having to go through the verbose gtest setup, which I'm thoroughly grateful for.
I think this is the beginning of the end of early-stage venture capital in B2B SaaS. Growth capital will still be there, but increasingly there will be no reason to raise. It will empower individuals with actual skill sets rather than those with fancy schools on their resumes.
" Do what you do best, and let AI do the rest".
Exactly the approach I'm taking with Tunnelmole, which as of right now is still a one person company with no investors.
I focused on coding, which I'm good at. I'm also reasonably good at content writing; I had some articles on Hackernoon before the age of AI.
So far, AI has helped with:
- Marketing ideas and strategies
- General advice on setting up a company
- Tax stuff, i.e. what my options are for paying myself
- The logo. I used Stable Diffusion with an anime art model from CivitAI, had multiple candidates generated, chose one, then did some minor touch-ups in GIMP
I'm increasingly using it for more and more coding tasks as it gets better. I'll generally use it for anything repetitive, and for big refactors.
One of the biggest things, coding-wise, about working alone is code review. I don't have human colleagues at Tunnelmole who can review code for me, so I've gotten into the routine of having AI review all my changes. More than once, this has stopped bugs from being deployed to prod.
> "Ushering in a new era."
It's ushering in a new era of valley bullshit. If only journalists tried to falsify their premise before blindly publishing it.
> [asked] Jack Clark whether AI's coding ability meant “the age of the nerds” was over.
When was the "age of the nerds" exactly? What does that even mean? My interpretation is that it means "is the age of having to pay skilled programmers for quality work over?" Which explains Bloomberg's interest.
> “I think it’s actually going to be the era of the manager nerds now,” Clark replied. “I think being able to manage fleets of AI agents and orchestrate them is going to make people incredibly powerful.”
And they're all going to be people on a subscription model and locked into one particular LLM. It's not going to make anyone powerful other than the owner class. This is the worst type of lie. They don't believe any of this. They just really really hate having to pay your salary increases every year.
> AI is sometimes described as providing the capability of “infinite interns.”
More like infinite autistic toddlers. Sure. It can somehow play a perfect copy of Chopin after hearing it once. Is that really where business value comes from? Quickly ripping other people off so you can profit first?
The Bloomberg class I'm sure is so thrilled they don't even have the sense to question any of this self serving propaganda.
Are there any projects working on models for business management? I feel that for skilled technical people, the benefit would come from off-loading a lot of the management side, letting them focus on the hard problems.
It seems like an increasingly recurring shareholder wet dream that companies could one day just be AI employees for digital things, robotic employees for physical things, and maybe a human CEO "orchestrating" everything. No more icky employees siphoning off what should rightfully be profit for the owners. It's as if this were some kind of moral imperative that business is always low-key working towards. Are you rich and want to own something like a soup company? Just lease a fully automated factory and a bunch of AI workers, and you're instantly shipping and making money! Is this capitalism's final end state?
AI gets top billing, but the assault on engineering employment via the tax code (the Section 174 changes that forced companies to amortize R&D salaries) is likely a bigger factor.
Will be curious to see which of these solo-AI ventures survive contact with the market...
Some excellent ideas presented in the article. It doesn't matter if they all pan out, just that they expand our thinking into the realm of AI and its role in the future of business startups and operations.
Revenue per employee, to me, is an aside that distracts from the ideas presented.
I think this is great for the world. So much less waste - imagine all the BMWs that aren’t going to be bought by middle managers or VCs, while people who know what they’re doing can build useful products.
AWS, GCP, and the other cloud providers play just as large a role in allowing for tiny teams. You used to need an ops team of 10+ people to do on premises all the stuff AWS can do.
The beatings will continue as profit improves
Funny how this goes hand in hand with the rise in fractional and contract roles.
If they are 1099 they aren’t part of the team, right?
Those who rely on AI for coding: do you worry you will lose the ability to code without AI assistance?
Like Johnny Depp in that movie..
The subhead makes a specific misstatement:
> Startups used to brag about valuations and venture capital. Now AI is making revenue per employee the new holy grail.
The corrected form is:
> Startups used to brag about valuations and venture capital. Now AI is making rate of revenue growth per employee the new holy grail.
Specifically, as with all growth capitalism, it is long-term irrelevant how much revenue each employee generates. What is being measured is how much each employee increases the rate of growth of revenue. If a business is growing revenue at +5% YoY, then a worker who can increase that rate by 20% (to +6% YoY) is worth keeping; a worker who can only increase revenue by 5% contributes +0% YoY after the initial boost and will be replaced by automation, AI, etc. (This is also why tech won't invest in paying down technical debt: it may lower expenses, but those one-time efficiencies are typically irrelevant when increasing the rate of growth of income yields far more income than the debt costs.)
It's true, especially with the "vibe" movement happening in real time on X: "you can just do things." I am building AI app layers, B2C and B2B, and while I do have an ML technical co-founder, I am largely scaling this with AI, from strategy and visuals to coding. For example, with Claude I created a framework for my company to scale, then built an AI-powered dashboard around it in Cursor as the command center. At scale we won't need a team of more than ~5 to reach seven-figure MRR.
Greg Isenberg has some of the best takes on this on X; he articulates the paradigm shift extremely well: @gregisenberg. One example: https://x.com/gregisenberg/status/1936083456611561932?s=46
https://archive.ph/YHr9s