I have an M1 MacBook Pro, and it is alright for running TensorFlow.
I have two conda environments set up: tf (CPU only) and tf_m1 (Apple silicon GPU and CPU).
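A quick way to confirm what each environment actually exposes is to list the devices from Python. This is just a minimal sketch using TensorFlow's standard device-listing calls (the environment names above are mine; nothing else is assumed):

```python
import tensorflow as tf

# Print the devices TensorFlow can see in the currently active conda env.
# In the CPU-only env this should show just a CPU device; in tf_m1 the
# Metal plugin should also expose a GPU device.
print("TF version:", tf.__version__)
print("Physical devices:", tf.config.list_physical_devices())
print("GPUs:", tf.config.list_physical_devices("GPU"))
```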
Some of my TF models, mainly multi-tower models, have to run on just the CPU, but most of my models use the GPU; one way to keep a model on the CPU is sketched below.
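For the CPU-only models, a sketch of two common approaches (not necessarily what I do in every script) is to hide the GPU from the process entirely, or to pin a block of work to the CPU device while leaving the GPU available for everything else:

```python
import tensorflow as tf

# Option 1: hide the GPU from this process so all ops fall back to the CPU.
# This has to run before TensorFlow initializes any devices.
# tf.config.set_visible_devices([], "GPU")

# Option 2: pin a specific block of work to the CPU, leaving the GPU
# available for the rest of the program.
with tf.device("/CPU:0"):
    x = tf.random.normal([1024, 1024])
    y = tf.matmul(x, x)  # executes on the CPU even in the GPU-enabled env
print(y.device)  # e.g. /job:localhost/replica:0/task:0/device:CPU:0
```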
From what I have read, the M1 is fast and efficient for training small models with TensorFlow, but NVIDIA GPUs win for big models.
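If you want to sanity-check that claim on your own machine, a rough comparison is just to time a few epochs of a small Keras model in each environment. The model and sizes below are arbitrary placeholders, purely illustrative:

```python
import time
import tensorflow as tf

# Tiny synthetic dataset and model, just to compare wall-clock time per epoch
# between the CPU-only and GPU-enabled conda environments.
x = tf.random.normal([10_000, 128])
y = tf.random.uniform([10_000], maxval=10, dtype=tf.int32)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(128,)),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(10),
])
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)

start = time.time()
model.fit(x, y, batch_size=256, epochs=3, verbose=0)
print(f"3 epochs took {time.time() - start:.2f}s")
```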
I have also been playing around with the M1 Pro for compute by writing some toys that use its hardware directly, mostly in arm64 assembly and C. I won't pretend I have every cycle optimized (I'm still working on the code), but it's great so far.