Real-Time Latent Consistency Model

  • Hi, I've built the demo. Unfortunately, it's running on a single GPU and can only serve a few concurrent users. For a better real-time experience, you would need a dedicated machine.

    You can find the source and instructions for running it here: https://github.com/radames/Real-Time-Latent-Consistency-Mode... See a video here: https://twitter.com/radamar/status/1718783886413709542 or in the GitHub readme.

    FYI, this is made possible by a new technique, Latent Consistency Models (https://latent-consistency-models.github.io), which works by fine-tuning existing models. The author will soon publish the training script, so we'll see all the cool image models running at this speed. I'm excited about this; I can see many interesting experiments and projects emerging from it.
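
    For reference, here's roughly what running one of these models looks like with diffusers. This is a minimal sketch, not the demo's actual code: it assumes diffusers >= 0.22 (which loads the paper's SimianLuo/LCM_Dreamshaper_v7 checkpoint as an LCM pipeline) and a CUDA GPU; the prompt and output filename are just placeholders.

        # Text-to-image with a Latent Consistency Model: ~4 denoising steps
        # instead of the usual 25-50, which is what makes near-real-time
        # generation feasible on a single GPU.
        import torch
        from diffusers import DiffusionPipeline

        pipe = DiffusionPipeline.from_pretrained(
            "SimianLuo/LCM_Dreamshaper_v7", torch_dtype=torch.float16
        )
        pipe.to("cuda")

        image = pipe(
            "a photo of an astronaut riding a horse",  # placeholder prompt
            num_inference_steps=4,
            guidance_scale=8.0,
        ).images[0]
        image.save("lcm_sample.png")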

  • Well. Everything in AI seems to be moving faster than I thought, even after accounting for my assumption that things will move faster than I expect.

    This will be great for creating truly interesting avatars during video calls, instead of the simplistic fitting of facial landmarks to a 3D model (if the temporal consistency can be fixed).

  • Is there a non-live demo somewhere? I don't think Hugging Face's max of 4 concurrent users is going to handle the HN crowd. Still curious what it looks like in action, though.

  • Impressive progress, although it's still not quite real-time, and the frames change too much between generations to feel smooth.