Ask HN: People with new Macs / computers with GPUs, do you run LLMs locally?

  • I do this. Ollama makes it very easy - just pull the model you want. The great thing is being able to test different models on the same tasks; there's a huge difference between comparable models (a minimal sketch of this is at the end of this comment).

    You can also set it up in your editor of choice - I use Zed, and Ollama is listed as one of the providers you can choose.

    In terms of performance, it works decently well.
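
    To make "testing on the same tasks" concrete, here is a minimal sketch against Ollama's local HTTP API, which listens on localhost:11434 by default. The model tags are placeholders for whatever you've pulled with ollama pull:

      # Minimal sketch: compare two locally pulled models on the same prompt
      # via Ollama's local HTTP API (default: http://localhost:11434).
      # The model tags below are assumptions; substitute whatever you've pulled.
      import json
      import urllib.request

      def generate(model: str, prompt: str) -> str:
          """Send a single non-streaming generation request to Ollama."""
          req = urllib.request.Request(
              "http://localhost:11434/api/generate",
              data=json.dumps(
                  {"model": model, "prompt": prompt, "stream": False}
              ).encode(),
              headers={"Content-Type": "application/json"},
          )
          with urllib.request.urlopen(req) as resp:
              # Non-streaming responses carry the full output in "response"
              return json.loads(resp.read())["response"]

      prompt = "Explain the difference between a mutex and a semaphore in two sentences."
      for model in ["llama3.2", "mistral"]:  # assumed tags; run ollama pull <tag> first
          print(f"--- {model} ---")
          print(generate(model, prompt))

    Swap in whichever models you want to compare; every request runs fully locally, so the side-by-side costs nothing but time.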

  • Cool idea; however, when I have to create an account for a service just to test it out, I naturally decline. Maybe you could upload a sample document for people to play with?