AMD XDNA – Versal AI Core / 2nd-gen AIE-ML tiles
Are these programmable by the end user? The "software programmability" section describes the frameworks supported via "Vitis AI". But can we write our own software on these?
Is this card FPGA-based?
EDIT: more info on the AI Engine tiles [1]: scalar cores + "adaptable hardware (FPGA?)" + {AI + DSP}.
[1] https://www.xilinx.com/products/technology/ai-engine.html
If this is based on FPGA tech (Xilinx), I don't think it will have a cost/benefit edge over ASICs. Why not do their own TPU like Google did? Nowadays even embedded MCUs come with AI accelerators (last I heard was 5 TOPS on a Banana Pi CM4 board - that is sufficient for object detection and perhaps even more).
No price is listed on their site, so I'm assuming it's going to be stupid expensive, but if anyone knows, would you mind posting?
> **High-Density Video Decoder**: 96 channels of 1920x1080p [...] @ 10 fps, H.264/H.265
Is 10 fps a standard measure for this kind of thing?
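It reads to me like an aggregate-throughput spec rather than a per-stream recommendation (my guess, not something the page states). Quick back-of-the-envelope in Python:

```python
# Back-of-the-envelope decode budget. Assumption (mine, not the spec sheet):
# the decoder is sized by aggregate pixel throughput, so "96 channels @ 10 fps"
# trades off against fewer channels at higher frame rates.
width, height = 1920, 1080
channels, fps = 96, 10

pixels_per_second = width * height * channels * fps
print(f"Aggregate throughput: {pixels_per_second / 1e9:.2f} Gpixel/s")

# The same pixel budget expressed as 30 fps streams:
print(f"Equivalent 1080p30 channels: {pixels_per_second / (width * height * 30):.0f}")
```

So the same pixel budget would cover roughly 32 channels at 30 fps.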
Every time I hear about an AI accelerator, I get really excited, then it turns out to be inference only.
Did I miss something? Did AMD buy Xilinx? Makes sense, I suppose, after Intel bought Altera.
Who owns lattice?
How much memory does it have?
What exactly does TOPS mean in "... TOPS*|(INT8) 404 ..."?
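As I understand it, it's tera (10^12) operations per second on INT8 data, with a multiply-accumulate usually counted as two operations - though that counting convention varies by vendor, so treat this sketch as an assumption:

```python
# TOPS = tera (1e12) operations per second. For INT8 figures a
# multiply-accumulate is usually counted as 2 ops; that convention
# varies by vendor, so this is an assumption.
tops = 404
ops_per_second = tops * 1e12

# Example: one 4096 x 4096 x 4096 INT8 matrix multiply
m = n = k = 4096
matmul_ops = 2 * m * n * k  # one multiply + one add per MAC
print(f"Ideal time at {tops} TOPS: {matmul_ops / ops_per_second * 1e6:.0f} us")
```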
Sad to see AMD's ROCm efforts essentially abandoned. They were close to universal interop for cuDNN and CUDA on AMD (and other!) architectures.
Hopefully Intel takes a stab at it with their Arc line out now.
Douglas Adams said we'd have robots to watch TV for us. That seems to be the designed use case for this.
16 GB RAM / 96 video channels... I haven't done any of that work, but it feels like they don't expect that "96" to be fully used in practice.
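Rough math on that, assuming (optimistically) the whole 16 GB were available for video buffers:

```python
# Per-channel memory budget, assuming (optimistically) all 16 GB could go to
# video buffers -- in reality model weights and activations need room too.
total_bytes = 16 * 1024**3
channels = 96
per_channel = total_bytes / channels

frame_bytes = 1920 * 1080 * 1.5  # one 1080p frame in NV12 (8-bit 4:2:0)
print(f"Per channel: {per_channel / 2**20:.0f} MiB "
      f"(~{per_channel / frame_bytes:.0f} NV12 frames of headroom)")
```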
I won't even take a look at the numbers unless they show a PyTorch model running on it. The problem is the big disconnect between HW and SW. Realistically, have you ever seen any off-the-shelf model running on something other than NVIDIA?
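For what it's worth, the usual non-NVIDIA path is PyTorch → ONNX → ONNX Runtime with a vendor execution provider. A minimal sketch, assuming AMD exposes a Vitis AI provider in ONNX Runtime (the provider name below is my guess - check what your onnxruntime build actually reports):

```python
# Sketch of the common non-NVIDIA inference path: PyTorch -> ONNX -> ONNX Runtime
# with a vendor execution provider. "VitisAIExecutionProvider" is my assumption
# for the AMD/Xilinx backend name; whether it is present depends on your
# onnxruntime build, so we fall back to CPU if it isn't.
import torch
import torchvision
import onnxruntime as ort

model = torchvision.models.resnet50(weights=None).eval()
dummy = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy, "resnet50.onnx", opset_version=13)

wanted = ["VitisAIExecutionProvider", "CPUExecutionProvider"]
providers = [p for p in wanted if p in ort.get_available_providers()]
session = ort.InferenceSession("resnet50.onnx", providers=providers)

inputs = {session.get_inputs()[0].name: dummy.numpy()}
print(session.run(None, inputs)[0].shape)  # (1, 1000)
```

Whether that path performs anywhere near the headline numbers is exactly the HW/SW disconnect you're describing.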
This is inference only. AMD should invest in the full AI stack, starting from training. For that they need a product comparable to the NVIDIA 4090, so that entry-level researchers could use their hardware. Honestly, I don't know why AMD isn't doing that already; they are the best positioned in the industry landscape to do it.