Show HN: Litellm – Simple library to standardize OpenAI, Cohere, Azure LLM I/O

  • Great start! Are you planning to add the following:

    Retries with exponential backoff, caching, streaming output, function-calling support
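    The first item on that wish list can be sketched generically. This is a minimal, self-contained example of retries with exponential backoff plus jitter; `retry_with_backoff` is a hypothetical helper, not part of litellm's API:

    ```python
    import random
    import time

    def retry_with_backoff(fn, max_retries=5, base_delay=1.0):
        """Call fn, retrying on any exception with exponentially growing
        delays plus a little jitter. Hypothetical sketch, not litellm code."""
        for attempt in range(max_retries):
            try:
                return fn()
            except Exception:
                if attempt == max_retries - 1:
                    raise  # out of retries: surface the last error
                # delay doubles each attempt: base, 2*base, 4*base, ...
                delay = base_delay * (2 ** attempt) + random.uniform(0, 0.1)
                time.sleep(delay)

    # Usage: wrap any flaky call (e.g. a rate-limited completion request).
    calls = {"n": 0}
    def flaky():
        calls["n"] += 1
        if calls["n"] < 3:
            raise RuntimeError("rate limited")
        return "ok"

    print(retry_with_backoff(flaky, base_delay=0.01))  # "ok" after two failures
    ```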

  • Take a look at llm-client, a similar library that also supports chat, async, and more LLM providers: https://github.com/uripeled2/llm-client-sdk

  • This is amazing. Really needed something like this to standardize all my different AI APIs!

    On a side note - I love how quickly your team is shipping! Do keep it going!

  • This is much needed. For someone looking to quickly implement, having a simple interface goes a long way.

  • Very cool Ishaan!

  • >completion(..., azure=True)

    Why like this?
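    To make the quoted call concrete, here is a minimal, self-contained sketch of flag-based provider dispatch. The provider functions are stand-ins for illustration only, not litellm's actual implementation; only the `azure=True` parameter is taken from the comment above:

    ```python
    def _openai_completion(model, messages):
        # Stand-in for a call to the OpenAI API.
        return {"provider": "openai", "model": model}

    def _azure_completion(model, messages):
        # Stand-in for a call to an Azure OpenAI deployment.
        return {"provider": "azure", "model": model}

    def completion(model, messages, azure=False):
        # A boolean keyword selects the backend, as in the quoted call;
        # the rest of the call shape stays identical across providers.
        if azure:
            return _azure_completion(model, messages)
        return _openai_completion(model, messages)

    print(completion("gpt-3.5-turbo", [], azure=True)["provider"])  # azure
    ```

    The question presumably points at the scaling concern: one boolean per provider grows awkwardly, compared with encoding the provider in the model string.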