Stream OpenAI responses from functions using Server-Sent Events

  • Hi, I'm the founder of OpenFaaS. Here's a bit about this new blog post and the capability it introduces:

    OpenAI models can take some time to fully respond, so we'll show you how to stream responses from functions using Server-Sent Events (SSE).

    With the latest versions of the OpenFaaS helm charts, watchdog, and python-flask template, you can now stream responses over SSE directly from your functions. Previously, if a chat completion took 10 seconds to emit several paragraphs of text, the user had to wait that long before seeing the first word.
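The SSE wire format itself is simple enough to sketch in a few lines of Python. The generator below is an illustrative assumption, not the template's actual code: it takes text chunks (such as tokens arriving from a streaming chat completion) and frames each one as an SSE event, so a client can render output as it arrives rather than waiting for the full response.

```python
def sse_format(chunks):
    """Yield each text chunk framed as a Server-Sent Events 'data:' event."""
    for chunk in chunks:
        # Each SSE event is one or more "data:" lines terminated by a blank line.
        yield f"data: {chunk}\n\n"
    # A sentinel event (a common convention, assumed here) signals completion.
    yield "data: [DONE]\n\n"

# Frame three chunks as they would be written to the response body.
events = list(sse_format(["Hello", "world", "!"]))
```

In a Flask-style handler, returning such a generator as the response body lets each event be flushed to the client as soon as it is produced, which is the behaviour the updated watchdog and template enable end to end.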