Fast and Easy Infinitely Wide Networks with Neural Tangents

  • Can someone explain the implications of this for performance, or what we can do now that we couldn’t do before?

  • It really looks more and more like JAX is the winner internally after the TF 2.0 fiasco.
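
Regarding the first question above, what the library enables is defining a network architecture once and getting both the finite model and its exact infinite-width NNGP/NTK kernels, so inference in the infinite-width limit becomes closed-form rather than requiring training. A minimal sketch, assuming the public `neural_tangents` / `stax` API and using placeholder random data:

```python
import jax.numpy as jnp
from jax import random
import neural_tangents as nt
from neural_tangents import stax

# Placeholder data: 20 training points and 5 test points in 8 dimensions.
key = random.PRNGKey(0)
k1, k2, k3 = random.split(key, 3)
x_train = random.normal(k1, (20, 8))
y_train = random.normal(k2, (20, 1))
x_test = random.normal(k3, (5, 8))

# Layer combinators return (init_fn, apply_fn, kernel_fn); kernel_fn is the
# analytic kernel of this architecture in the infinite-width limit.
init_fn, apply_fn, kernel_fn = stax.serial(
    stax.Dense(512), stax.Relu(),
    stax.Dense(512), stax.Relu(),
    stax.Dense(1),
)

# Closed-form posterior mean over an ensemble of infinitely wide networks
# trained to convergence with gradient descent on MSE loss -- no SGD loop.
predict_fn = nt.predict.gradient_descent_mse_ensemble(kernel_fn, x_train, y_train)
y_test_ntk = predict_fn(x_test=x_test, get='ntk')

print(y_test_ntk.shape)  # (5, 1)
```

This is only a sketch of the kind of workflow the library advertises, not a performance claim; the data shapes and layer widths here are arbitrary.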