Ask HN: How to benchmark different LLM models in parallel?

  • Running models locally on your development machine will be slow; you need beefy GPUs to get good tokens/sec throughput.

    Run the models in the cloud, each on its own machine, and invoke them remotely in parallel. Alternatively, skip the setup time and hardware cost entirely and call third-party APIs directly; a sketch of that approach follows.
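
    Here's a minimal sketch of the API approach, assuming each provider exposes an OpenAI-compatible /v1/chat/completions endpoint. The URLs, model names, and keys below are placeholders you'd swap for real ones:

      import time
      from concurrent.futures import ThreadPoolExecutor

      import requests

      # Placeholder endpoints/models -- swap in the providers you actually use.
      # Each entry assumes an OpenAI-compatible /v1/chat/completions API.
      TARGETS = [
          {"name": "provider-a", "url": "https://api.provider-a.example/v1/chat/completions",
           "model": "model-a", "key": "KEY_A"},
          {"name": "provider-b", "url": "https://api.provider-b.example/v1/chat/completions",
           "model": "model-b", "key": "KEY_B"},
      ]

      PROMPT = "Explain TCP slow start in two sentences."

      def bench(target):
          """Send one prompt and report wall-clock time and tokens/sec."""
          start = time.perf_counter()
          resp = requests.post(
              target["url"],
              headers={"Authorization": f"Bearer {target['key']}"},
              json={"model": target["model"],
                    "messages": [{"role": "user", "content": PROMPT}]},
              timeout=120,
          )
          resp.raise_for_status()
          elapsed = time.perf_counter() - start
          completion_tokens = resp.json()["usage"]["completion_tokens"]
          return target["name"], elapsed, completion_tokens / elapsed

      # Fan out so every model runs at the same time instead of back to back.
      with ThreadPoolExecutor(max_workers=len(TARGETS)) as pool:
          for name, elapsed, tps in pool.map(bench, TARGETS):
              print(f"{name}: {elapsed:.2f}s, {tps:.1f} tok/s")

    A thread pool is enough here because the work is network-bound. Every model gets the same prompt at the same moment, so the timings are directly comparable; for a real benchmark you'd repeat each call and average.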