> Kubernetes and its family of cloud native projects revolutionized computing in 4 short years.
This strikes me as wild hyperbole. It's a new management layer for server-side computing -- nothing compared to the changes brought by microprocessors, or even by minicomputers like the DEC PDP line.
I guess the most relevant question within an open source context is, "okay, but when/how do we get to play with this fancy new hardware?"
OpenPOWER remains a massive niche - as in, it's big, but it's still a niche - because nobody (in the sense of "just anybody", i.e. the long tail) can really get their hands on it. It's nontrivially difficult to maintain access to IBM POWER8 kit, and the Talos is certainly way beyond what I could personally afford for directionless tinkering and learning.
If I understand this article correctly, cloud-native means there's not likely to be a Talos equivalent for sale, and it's all remote access only.
Also, OpenPOWER is, like, an entire CPU, whose design is extremely old and can be expected to stay around for a long time, and even with that sort of centralizable focus opportunity it's still a niche.
I get the impression this is suggesting the creation of custom components with somewhat shorter design lifecycles - years, certainly, but not multiples of decades, and maybe only months for individual hardware revisions.
If this really wants to attract developers from outside of the immediate focus of the relevant industries... how are the discoverability and accessibility equations going to be solved?
Of course, the more potential cooks you attract to the kitchen, the more overhead you have to deal with, but I wonder if that breadth is a necessary element to maintain interest in, and familiarity with, what would apparently prefer to be a fast-changing environment.
I am unclear about the article's target audience.
Your article implies we are reading about HW manufacturers that have prioritization and workload issues, but then you mention Apple, AWS, et al. Those HW designs are all work directed to the HW manufacturer. There are no concerns about prioritization or workload; they get paid handsomely for making the right choices.
This article didn't mention PIM (processing in memory), which is a way of keeping the established general-purpose CPU model while accelerating it by adding processors directly into RAM. Compute power scales with the size of your dataset, and you also benefit from greater memory bandwidth and lower power consumption.
Here is an existing implementation: https://www.upmem.com/
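To make the "compute scales with your dataset" point concrete, here is a minimal sketch of the PIM execution model in plain Python. The bank processors are simulated with threads (real PIM hardware like UPMEM's runs them inside the DRAM chips, programmed via a C SDK); the names `pim_sum` and `n_banks` are my own illustration, not any vendor API. The key shape is: each bank reduces only its local shard, and the host combines a handful of small partial results rather than streaming all the data through the CPU.

```python
from concurrent.futures import ThreadPoolExecutor

def pim_sum(data, n_banks=4):
    """Illustrative PIM-style reduction: each 'bank' owns a shard of the
    data and reduces it locally; the host only combines the tiny per-bank
    partials. Adding more banks (more memory) adds more compute."""
    # Partition the data across banks, as if each shard lived in one
    # memory bank next to its own processor.
    shard = (len(data) + n_banks - 1) // n_banks
    banks = [data[i * shard:(i + 1) * shard] for i in range(n_banks)]
    # "In-bank" compute: every bank reduces its shard independently.
    with ThreadPoolExecutor(max_workers=n_banks) as pool:
        partials = list(pool.map(sum, banks))
    # Host-side combine: only n_banks values cross the "memory bus".
    return sum(partials)

print(pim_sum(list(range(100))))  # same result as sum(range(100))
```

The payoff in real hardware is that the per-shard reductions never cross the memory bus at all, which is where the bandwidth and power savings come from.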