LocalAI: Local models on CPU with OpenAI-compatible API

  • LocalAI is an OpenAI-compatible API that lets you run AI models locally on your own CPU. Data never leaves your machine! No need for expensive cloud services or GPUs: LocalAI uses llama.cpp and ggml to power your AI projects!

    LocalAI supports multiple model backends (such as Alpaca, Cerebras, GPT4All-J, and StableLM) and works seamlessly with the OpenAI API; a short usage sketch follows at the end of this comment. Join the LocalAI community today and unleash your creativity!

    GitHub: https://github.com/go-skynet/LocalAI

    We are also on Discord! Feel free to join our growing community!

    https://discord.gg/uJAeKSAGDy
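
    Since the API speaks the OpenAI wire format, existing OpenAI clients can simply be pointed at the local endpoint. Here is a minimal sketch, assuming LocalAI is listening on its default localhost:8080 and that a model named "ggml-gpt4all-j" has been placed in the models directory (both the port and the model name are illustrative assumptions; adjust them to your setup):

      # Sketch: query a local LocalAI instance through its
      # OpenAI-compatible chat completions endpoint.
      # Assumptions: LocalAI is running on localhost:8080 and a model
      # named "ggml-gpt4all-j" is available in its models directory.
      import requests

      response = requests.post(
          "http://localhost:8080/v1/chat/completions",
          json={
              "model": "ggml-gpt4all-j",
              "messages": [
                  {"role": "user", "content": "Say hello from a local model."}
              ],
              "temperature": 0.7,
          },
          timeout=120,  # CPU inference can be slow; allow a generous timeout
      )
      response.raise_for_status()
      print(response.json()["choices"][0]["message"]["content"])

    The same idea works with official OpenAI SDKs by overriding the API base URL, so most existing integrations need no code changes beyond pointing them at the local server.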

  • The privacy angle is really important, but just as important is avoiding the vulnerabilities that OpenAI seems to have.

    Great to see the speed at which this is progressing, and the collab with k8sgpt / Prometheus / Spectro Cloud / etc. Community effort!

  • Here's a little example we put together on how to deploy it on Edge Kubernetes using Kairos:

    https://kairos.io/docs/examples/localai/

  • Interesting. What are the CPU/memory/storage requirements for running LocalAI?