Discussions from last week: https://news.ycombinator.com/item?id=12309590
Not that there was anything constructive, since most people did not read the article & ksummit email.
I'm not a kernel engineer or an operating systems engineer, so this may be a naive question. This line was confusing to me:
> This is achieved using local clocks at each peer that are synchronized in a manner similar to that used for Lamport clocks.
From Linux's perspective, it's all running on one machine with a single subsystem/module that determines the machine's time, right? Why do two peers each need their own local logical clock?
As I understand it, clocks are a problem for distributed systems because the machines can't share a single clock without significant latency and loss of availability. Why is this a problem for Linux as well?
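For context, the Lamport-style rule the quoted sentence seems to allude to can be sketched in a few lines of C; the names below are illustrative and not taken from the bus1 sources. Each peer keeps only its own counter, bumps it on local events, and fast-forwards it past the timestamp of any message it receives, which is enough to get an ordering consistent with causality without consulting any shared clock.

```c
/*
 * Minimal sketch of a Lamport-style logical clock, assuming the usual
 * textbook rule; struct and function names are illustrative, not from
 * the bus1 sources.
 */
#include <stdint.h>
#include <stdio.h>

struct peer {
	uint64_t clock;	/* this peer's local logical clock */
};

/* A local event (e.g. queuing a message): just increment. */
static uint64_t peer_tick(struct peer *p)
{
	return ++p->clock;
}

/*
 * Receiving a message stamped with the sender's clock: jump past the
 * timestamp, then increment, so later events on this peer are ordered
 * after the send on the other peer.
 */
static uint64_t peer_receive(struct peer *p, uint64_t stamp)
{
	if (stamp > p->clock)
		p->clock = stamp;
	return ++p->clock;
}

int main(void)
{
	struct peer a = { 0 }, b = { 0 };

	uint64_t stamp = peer_tick(&a);           /* A sends, stamped 1  */
	uint64_t recv  = peer_receive(&b, stamp); /* B receives, clock 2 */

	printf("send=%llu recv=%llu\n",
	       (unsigned long long)stamp, (unsigned long long)recv);
	return 0;
}
```

Read this way, the per-peer clocks presumably aren't about wall-clock time at all; they're just counters used to impose a consistent order on messages exchanged between peers on the same machine, without serializing everything through one global lock.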
Part of the reason OS IPC is stagnant is that it's not that interesting or useful.
I feel like the folks making this and other proposals (kdbus) are still thinking in terms of a "box." Boxes are dying. Almost everything interesting these days is distributed across networks, with boxes being mere commodity hosts.
This is true in any modern data center or cloud infrastructure, where it's often done via things like Kubernetes plus a pub/sub bus and a clustered database.
It's also IMHO going to be true of any future next-generation "decentralized" personal computing platform. All the interesting work I see in that space also takes the form of distributed systems that try to abstract away the box and deliver a kind of formless global cloud where boundaries are cryptographic.
In both cases the box is dying because it is unreliable and doesn't scale.
The trouble with local IPC is that its APIs are not going to be the same as network IPC's. As a result, programmers have to think about local vs. remote to make use of it, so they won't.
That leaves only the local system services use case, which is already served.