Launching this today. We're working on making it insanely easy to run open-source models and get all the benefits you'd expect from a hosted hyperscaler solution like OpenAI: RAG, API integrations, and fine-tuning, but all on open-source models, running either locally on a single 3090 or scaled in production on Kubernetes in your own private VPC.