I find that if you check the LLM's training data cutoff and then pick a framework/language version released before that date, the results are pretty good.
In my case I use Tailwind CSS a lot and found that sticking to v3.4.3 produced the best output from LLMs.
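One way to do that in an npm-based setup (a sketch, not from the original comment) is to install the exact version and pin it in package.json without a caret range, so the toolchain stays on the release the model actually knows:

```shell
# Install the exact Tailwind release predating the model's cutoff
npm install -D tailwindcss@3.4.3
```

```json
{
  "devDependencies": {
    "tailwindcss": "3.4.3"
  }
}
```

With `"3.4.3"` instead of `"^3.4.3"`, a later `npm install` won't silently upgrade you to a version the LLM has never seen.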
Not really, there are many ways to skin a cat. I've more or less settled on Claude Sonnet 3.5 as my go-to for the moment. In general it does as well as or better than 4o, but where it really shines is in actually following my system prompts. I have standing instructions to be minimalistic, and Claude is much less 'forgetful' and arguably more on point with its answers. Both need new conversations started regularly as the context grows too large. I don't find o1-preview any better at code for my purposes, although it is way better at math. There's no reason much smaller, even local, models can't answer many questions, but then you'd be model switching a lot. All of them are still tools that need practice and a developing intuition to get the best out of them.
JavaScript/TypeScript probably has the most written about it online. Can’t go wrong!
They do better on web stuff and dynamic languages.
Theoretically, LLMs should perform better with languages/frameworks that are older and change less frequently; Django and Rails are good framework examples. I'd suspect they do much worse with recent JS frameworks.