> Running ~1000 iterations for a 483px-wide Mona Lisa on the Google Colab GPU runtime only takes around 40 seconds! Compared to the CPU version, which takes several hours to do the same for a smaller image.
Isn't this excruciatingly slow? I remember my old 486 ran Life full-screen at full resolution in real time (probably 25 fps at 800x600?). How did it come to this after 20 years? How can a 20-year-old CPU outperform a modern GPU by an order of magnitude? Is it all the fault of lots of useless intermediary language layers?
The parenthetical "(Consider a loaf of bread, with each slice being dithered Mona Lisa)." is one of my favorites of any technical article
As a next step: how simple can you make the initial state? Can you have something that occupies much less space but then grows into something that resembles the target image?
This comment may be too meta but this post just made me appreciate Hacker News so much! Classic creative hacking guided by nothing but pure curiosity. Love it!
A similar post can be found here (2020, implemented with backsearch):
Gotta be used for some capture-the-flag or similar, where running it reveals a clue to the next step.
I also wonder how effective this would be as a compression algorithm (just for the fun of it). Are the black-and-white spots too noisily distributed to be compressible, or do they compress better than the resulting image after running GoL?
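Out of curiosity, one quick way to check would be to pack both grids into bits and compare their zlib-compressed sizes. A throwaway sketch (nothing from the article; start_board and final_board are hypothetical names):

    import zlib
    import numpy as np

    def compressed_size(board):
        """board: a 2D numpy array of 0s and 1s."""
        return len(zlib.compress(np.packbits(board.astype(np.uint8)).tobytes(), 9))

    # e.g. compare compressed_size(start_board) vs. compressed_size(final_board)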
Also interesting that so little data is needed for my brain to understand it's a picture of the David statue.
Kaggle recently hosted a very relevant competition - "Conway's Reverse Game of Life" [0]. A write-up of the 1st place might be an interesting read [1].
[0] https://www.kaggle.com/c/conways-reverse-game-of-life-2020/o...
[1] https://www.kaggle.com/c/conways-reverse-game-of-life-2020/d...
Similar post from mid-2020, using PyTorch instead of JAX and a "continuous" (gradient-based) hill-climbing approach: http://hardmath123.github.io/conways-gradient.html
And the HN discussion from the time: https://news.ycombinator.com/item?id=23095190
So my understanding is that the Game of Life is an undecidable system, so you basically can't write a solvable system of equations that will tell you which initial state will produce a Mona Lisa. Even after reading the article I don't really understand what he's doing to make this happen, especially after he states that most states are Garden of Eden states! Can anyone ELI5?
My understanding is that Conway's Game of Life can be computed by simply convolving a 3x3 array of all ones (except a central 0) over every pixel, then looking at the result of that convolution to see whether each pixel is dead or alive (roughly the sketch below), e.g. https://stackoverflow.com/questions/46196346/why-does-my-gam...
If that's the case, would a nice solution be to model it as a recurrent neural network, e.g. in PyTorch or JAX, with a single convolutional layer with a single kernel that you don't backprop through, and then try to minimise the loss with respect to the target image by backpropagating to the input image?
It then becomes highly analogous to Google's DeepDream, but instead of optimising the image for an output class, you optimise it to match an output image.
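For what it's worth, a minimal sketch of that convolution trick (my own illustration based on the linked answer, not the article's code):

    import numpy as np
    from scipy.signal import convolve2d

    # 3x3 kernel of ones with a central 0: convolving it over the board
    # counts each cell's live neighbours.
    KERNEL = np.array([[1, 1, 1],
                       [1, 0, 1],
                       [1, 1, 1]])

    def life_step(board):
        """One Game of Life generation; board is a 2D array of 0s and 1s."""
        neighbours = convolve2d(board, KERNEL, mode="same")
        # Alive next generation: exactly 3 neighbours (birth or survival),
        # or currently alive with exactly 2 (survival).
        return ((neighbours == 3) | ((board == 1) & (neighbours == 2))).astype(board.dtype)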
For a similar challenge: https://en.wikipedia.org/wiki/Proof_game
It never fails to amaze me what an infinite number of monkeys with keyboards are able to achieve if you give them enough time :)
Did you consider something like the following to speed up the calculation? This method can go forward _really_ fast. Not sure if it can also go backward...
https://pzemtsov.github.io/2015/04/24/game-of-life-hash-tabl...
Some years ago I did a similar thing using a genetic algorithm. I was researching its use in generative art.
Here's a video of it in action: https://youtu.be/xgAigVfpIYc
I suspect slightly better (or faster) results would be achieved if, instead of comparing against the specific dithering pattern nominated by the author, they compared downscaled candidate patterns against the greyscale target image.
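If I were to try that, the loss might look something like this sketch (my own assumptions about shapes; downscale_loss is a hypothetical name):

    import numpy as np

    def downscale_loss(candidate, grey_target, factor):
        """Average-pool the binary candidate down to the greyscale target's
        resolution and compare directly, instead of matching a fixed dither.
        Assumes candidate.shape == (h * factor, w * factor)."""
        h, w = grey_target.shape
        pooled = candidate.reshape(h, factor, w, factor).mean(axis=(1, 3))
        return np.mean((pooled - grey_target) ** 2)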
This could be useful for steganography: use least-significant-bit embedding to hide the start state, with maybe a QR code as the end state, or something else able to withstand the noisy output.
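The embedding half is simple enough; a rough sketch of the LSB part (hypothetical helper names):

    import numpy as np

    def embed_lsb(cover, board):
        """Hide a 0/1 board in the least significant bits of a uint8
        cover image of the same shape."""
        return (cover & np.uint8(0xFE)) | board.astype(np.uint8)

    def extract_lsb(stego):
        """Recover the hidden board."""
        return stego & np.uint8(1)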
Previously: https://news.ycombinator.com/item?id=26374009
A very interesting form of steganography.
Has anyone run GOL on "random" noise images to see if they result in any decipherable output? :)
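Easy enough to try. A quick sketch (same convolution trick mentioned elsewhere in the thread):

    import numpy as np
    from scipy.signal import convolve2d

    KERNEL = np.array([[1, 1, 1], [1, 0, 1], [1, 1, 1]])

    def life_step(board):
        n = convolve2d(board, KERNEL, mode="same")
        return ((n == 3) | ((board == 1) & (n == 2))).astype(np.uint8)

    # Start from a 50% random soup and run 100 generations.
    rng = np.random.default_rng(0)
    board = (rng.random((256, 256)) < 0.5).astype(np.uint8)
    for _ in range(100):
        board = life_step(board)

In my experience random soup just settles into the usual "ash" of still lifes and blinkers, nothing decipherable.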
This is something I've often seen in GPU-accelerated code: people get really amazing speedups because they're comparing to something that started out dog slow. Every example given is 10 generations or fewer, so 1000 iterations is probably not more than 10,000 generations. That should not take several hours to run.
The code looks reasonable enough to me, so I assume this is just Python being Python.
I did the same thing, but with gradient descent. You can create a soft version of the Game of Life that is differentiable. Here is my messy Colab notebook: [1]
[1] https://colab.research.google.com/drive/12CO3Y0JgCd3DVnQeNSB...
Edit: only 1 step though, not 4, as in the OP. I couldn't get my differentiable version to converge more than 1 step into the past.
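For anyone curious what a "soft" Game of Life might look like, here's a rough reconstruction of the idea (my guess at the approach, not the notebook's actual code): replace the hard birth/survival rules with products of sigmoids over the neighbour count so one generation is differentiable, then run gradient descent on a real-valued grid whose single soft step should land on the target.

    import jax
    import jax.numpy as jnp
    from jax.scipy.signal import convolve2d

    KERNEL = jnp.array([[1., 1., 1.],
                        [1., 0., 1.],
                        [1., 1., 1.]])

    def soft_life_step(board, sharpness=4.0):
        """Differentiable approximation of one GoL generation.
        board holds values in [0, 1]; sharpness is an arbitrary knob."""
        n = convolve2d(board, KERNEL, mode="same")
        # Smooth indicator for "neighbour count is about 3" (birth)...
        near3 = jax.nn.sigmoid(sharpness * (n - 2.5)) * jax.nn.sigmoid(sharpness * (3.5 - n))
        # ...and for "about 2 or 3" (survival).
        near23 = jax.nn.sigmoid(sharpness * (n - 1.5)) * jax.nn.sigmoid(sharpness * (3.5 - n))
        return (1.0 - board) * near3 + board * near23

    def loss(logits, target):
        board = jax.nn.sigmoid(logits)  # keep candidate cells in [0, 1]
        return jnp.mean((soft_life_step(board) - target) ** 2)

    def search_previous_state(target, steps=500, lr=1.0, seed=0):
        """Plain gradient descent on the logits of a candidate predecessor."""
        logits = 0.1 * jax.random.normal(jax.random.PRNGKey(seed), target.shape)
        grad_fn = jax.jit(jax.grad(loss))
        for _ in range(steps):
            logits = logits - lr * grad_fn(logits, target)
        return (jax.nn.sigmoid(logits) > 0.5).astype(jnp.float32)

Chaining several soft steps presumably compounds the softening error, which might be why it stops converging beyond one step.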