The most interesting aspect of this model is how training-efficient it is: https://pixart-alpha.github.io/
It also uses the same idea as DALL-E 3 of training the model on synthetic captions.
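Not claiming this is their exact pipeline, but the synthetic-captioning idea boils down to running a vision-language model over your training images and using its dense captions instead of scraped alt text. Rough sketch with an off-the-shelf captioner (the model choice here is mine, not theirs; IIRC the paper uses LLaVA):

```python
from PIL import Image
from transformers import pipeline

# Off-the-shelf image captioner; BLIP is just an illustrative stand-in for
# whatever VLM is actually used to pseudo-caption the training set.
captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-large")

def pseudo_caption(path: str) -> str:
    """Replace weak/noisy alt text with a model-generated dense caption."""
    image = Image.open(path).convert("RGB")
    result = captioner(image, generate_kwargs={"max_new_tokens": 64})
    return result[0]["generated_text"]

# These captions then become the text side of (image, caption) training pairs.
print(pseudo_caption("example.jpg"))
```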
Why name it PixArt when it covers a broader range of media than simply pixel art? Super confusing.
The source code is licensed under AGPL-3.0. Perfect for these kinds of models: https://github.com/PixArt-alpha/PixArt-alpha
From their GitHub:
>This integration allows running the pipeline with a batch size of 4 under 11 GBs of GPU VRAM. GPU VRAM consumption under 10 GB will soon be supported, too. Stay tuned.
Seems to have pretty good understanding and performance.
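If anyone wants to try that low-VRAM path, the diffusers route looks roughly like this; treat it as a sketch (the model id and memory tricks are my assumptions, check their README for the supported setup):

```python
import torch
from diffusers import PixArtAlphaPipeline

# fp16 weights plus CPU offload is the usual trick for fitting a big
# DiT + T5 text encoder on a consumer GPU; actual VRAM use will vary.
pipe = PixArtAlphaPipeline.from_pretrained(
    "PixArt-alpha/PixArt-XL-2-1024-MS",  # assumed model id, check the repo
    torch_dtype=torch.float16,
)
pipe.enable_model_cpu_offload()  # keeps only the active submodule on the GPU

prompts = [
    "a corgi astronaut on the moon",
    "a watercolor lighthouse at dusk",
    "a steampunk locomotive",
    "a bowl of ramen, studio lighting",
]
images = pipe(prompt=prompts, num_inference_steps=20).images  # batch of 4
for i, img in enumerate(images):
    img.save(f"pixart_{i}.png")
```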
This appears to be work sponsored by Huawei.
Thought this was going to be a new optical sensor series :(
I think it's maybe a bit disingenuous to claim such improvements in training efficiency when they rely on:
- Existing models for data pseudo-labelling
- ImageNet pretraining
- A frozen text encoder (see the sketch after this list)
- A frozen image encoder
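To make the "frozen" point concrete: the text encoder is used purely as a feature extractor, with no gradients flowing through it, so its (large) pretraining cost never shows up in the headline training numbers. Illustrative sketch, not their actual code (they use a much larger T5 variant, IIRC):

```python
import torch
from transformers import T5EncoderModel, T5Tokenizer

# Pretrained T5 encoder, loaded and immediately frozen. Only the diffusion
# transformer gets trained; this component stays a fixed feature extractor.
tokenizer = T5Tokenizer.from_pretrained("t5-large")  # smaller stand-in model
text_encoder = T5EncoderModel.from_pretrained("t5-large").eval()
for p in text_encoder.parameters():
    p.requires_grad_(False)

@torch.no_grad()
def encode_prompt(prompts):
    """Conditioning features for the diffusion model; no gradients reach T5."""
    tokens = tokenizer(prompts, padding=True, truncation=True, return_tensors="pt")
    return text_encoder(**tokens).last_hidden_state

features = encode_prompt(["a red bicycle leaning against a brick wall"])
print(features.shape)  # (batch, seq_len, hidden_dim)
```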
It has anatomy problems you don't usually see with current systems: it's produced human characters with one thick leg and one thin leg, three legs of different sizes, three arms.
It can do humans in passive poses, but ask for an action shot and it botches it badly. It needs more training data on how bodies move. Maybe load it up with stills from dance, martial arts, and sports.