Intel proposes XeGPU dialect for LLVM MLIR

  • Inference is going to be interesting in 2025.

    By that time we will have a good number of MI300 hosts. AMD Strix Halo (and the Intel equivalent?) will be out for high-memory jobs locally. Intel Falcon Shores (and who knows what else) will finally be coming out, and from the looks of it the software ecosystem will be at least a little more hardware-agnostic.

  • “XeGPU dialect provides an abstraction that closely models Xe instructions.”

    How is that an abstraction? It sounds more like a representation.

  • Could someone ELI5 what this means for engineers working on systems from an app-level / higher-level perspective?

    (have worked extensively with tf / pytorch)

  • Weird, given there's no codegen for it in LLVM. I guess the idea is to use MLIR with a toolchain built from Intel's GitHub.

  • https://discourse.llvm.org/t/rfc-add-xegpu-dialect-for-intel... :

    > XeGPU dialect models a subset of Xe GPU’s unique features focusing on GEMM performance. The operations include 2d load, dpas, atomic, scattered load, 1d load, named barrier, mfence, and compile-hint. These operations provide a minimum set to support high-performance MLIR GEMM implementation for a wide range of GEMM shapes. XeGPU dialect complements Arith, Math, Vector, and Memref dialects. This allows XeGPU based MLIR GEMM implementation fused with other operations lowered through existing MLIR dialects.
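
    To make the quoted op list concrete, here is a rough sketch of what one GEMM tile might look like in XeGPU. The op names (`create_nd_tdesc`, `load_nd`, `dpas`) follow the RFC's "2d load" and "dpas" operations, but the exact assembly syntax, tile shapes, and types here are illustrative guesses, not authoritative:

    ```mlir
    // Illustrative only: describe 2D tiles of A and B in memory,
    // load them as vectors, then feed them to the DPAS
    // (matrix-multiply-accumulate) unit.
    %a_desc = xegpu.create_nd_tdesc %A[%i, %k] : memref<1024x1024xf16>
                -> !xegpu.tensor_desc<8x16xf16>
    %b_desc = xegpu.create_nd_tdesc %B[%k, %j] : memref<1024x1024xf16>
                -> !xegpu.tensor_desc<16x16xf16>
    %a = xegpu.load_nd %a_desc : !xegpu.tensor_desc<8x16xf16> -> vector<8x16xf16>
    %b = xegpu.load_nd %b_desc : !xegpu.tensor_desc<16x16xf16> -> vector<16x16xf16>
    // dpas computes D = A * B + C, accumulating in f32 as %c : vector<8x16xf32>
    %d = xegpu.dpas %a, %b, %c
           : vector<8x16xf16>, vector<16x16xf16>, vector<8x16xf32>
           -> vector<8x16xf32>
    ```

    The point of staying at this level is the RFC's last sentence: because the loads and the accumulator are ordinary `vector` values, the surrounding loop nest and any fused elementwise ops can be expressed in the existing Arith/Math/Vector/Memref dialects.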

  • Not the way to do this.

    Accelerators already have a common middle layer.

    https://discourse.llvm.org/t/rfc-introducing-llvm-project-of...
