Ask HN: How to Keep Up with LLMs? [Linux, Self-Hosting, Info]

  • My first stop would be llama.cpp and compatible quantized models on your own machine. You should be able to run quantized 7B and 13B models; try them out and see which work for you.

    Though for a "personal workflow", unless you want to play with the internals of the models or are worried about privacy, I'd just use ChatGPT. (In fact I do: despite having llama.cpp set up to run various models, I always use ChatGPT for personal stuff and programming questions.)
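
    To gauge whether your machine can handle a given model, a back-of-the-envelope RAM estimate helps. The sketch below assumes ~4 bits per weight for a q4-style quantization and a ~20% overhead factor for the KV cache and runtime buffers; both numbers are ballpark assumptions, not llama.cpp internals.

    ```python
    # Rough RAM estimate for running a quantized LLM locally.
    # Assumptions (ballpark, not exact llama.cpp figures):
    #   - 4 bits per weight for q4-style quantization
    #   - ~20% overhead for KV cache and runtime buffers

    def est_ram_gb(n_params_billion, bits_per_weight=4, overhead=1.2):
        """Estimated RAM in GB to load and run the model."""
        weight_bytes = n_params_billion * 1e9 * bits_per_weight / 8
        return weight_bytes * overhead / 1e9

    for n in (7, 13):
        print(f"{n}B @ 4-bit: ~{est_ram_gb(n):.1f} GB RAM")
    ```

    By this estimate a 4-bit 7B model needs roughly 4 GB and a 13B model roughly 8 GB, which is why those sizes are the usual starting point on consumer hardware.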

  • 1) what do you want to use it for?

    2) /r/LocalLLaMA is good, and then also the Open LLM Leaderboard and the LMSYS leaderboard.