Intent-tuned LLM router that selects the best LLM for a user query
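
  A minimal sketch of what such an intent-based router might look like (hypothetical; the keyword classifier, the model names, and the `route` helper are illustrative stand-ins, not this project's actual implementation):

      # Sketch of an intent-based LLM router (illustrative only).
      # A lightweight classifier maps the query to an intent, and the
      # intent maps to a model; the real project presumably uses a
      # fine-tuned classifier rather than keyword matching.

      from dataclasses import dataclass

      # Hypothetical intent -> model registry; model names are placeholders.
      INTENT_TO_MODEL = {
          "code": "CodeQwen1.5-7B-Chat",   # code-specialised model
          "general": "gpt-4o-mini",        # cheap general-purpose fallback
      }

      @dataclass
      class RouteDecision:
          intent: str
          model: str

      def classify_intent(query: str) -> str:
          """Toy stand-in for a fine-tuned intent classifier."""
          q = query.lower()
          if any(k in q for k in ("python", "function", "bug", "regex", "compile")):
              return "code"
          return "general"

      def route(query: str) -> RouteDecision:
          """Pick the model registered for the predicted intent."""
          intent = classify_intent(query)
          return RouteDecision(intent=intent, model=INTENT_TO_MODEL[intent])

      if __name__ == "__main__":
          print(route("Write a Python function that parses a CSV file"))
          # RouteDecision(intent='code', model='CodeQwen1.5-7B-Chat')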

  • An example with different fine-tuned models (especially smaller/cheaper ones) would probably be more interesting than running against a bunch of similar foundation models. For instance, throwing in some code-generation models and demonstrating that the router picks them for coding problems.

  • I was shocked at how much better CodeQwen1.5 was at Python compared to ChatGPT 4.

  • Very cool... nice work.