> Promising long-term ABI stability would prevent us from fixing mistakes and providing best in class performance. So, we make no such promises.
Wait, NVIDIA actually gets it? Neat!
It really is a tiny subset of the C++ standard library, but I'm happy to see they're continuing to expand it: https://nvidia.github.io/libcudacxx/api.html
For everyone wondering where all the data structures and algorithms are: vector and several algorithms are implemented by Thrust. https://docs.nvidia.com/cuda/thrust/index.html
Seems the big addition of libcu++ over Thrust would be synchronization.
Here's a somewhat related talk from CppCon '19: "The One-Decade Task: Putting std::atomic in CUDA"
This is super-cool.
For those of us who can't adopt it right away, note that you can compile your CUDA code with `--expt-relaxed-constexpr` and call any constexpr function from device code. That includes all the constexpr functions in the standard library!
This gets you quite a bit, but not e.g. std::atomic, which is one of the big things in here.
Unfortunate name: "cu" is the most well-known slang for "anus" in Brazilian Portuguese (Brazil, population 200+ million). "Libcu++" is sure to cause snickering.
1. How do we know what parts of the library are usable on CUDA devices, and which are only usable in host-side code?
2. How compatible is this with libstdc++ and/or libc++, when used independently?
I'm somewhat suspicious of the presumption that we'd use NVIDIA's version of the standard library for our host-side work.
Finally, I'm not sure that, for device-side work, libc++ is a better base to start from than, say, EASTL (which I used for my tuple class: https://github.com/eyalroz/cuda-kat/blob/master/src/kat/tupl... ).
...
Partial self-answer to (1): https://nvidia.github.io/libcudacxx/api.html — apparently only a small part of the library is actually implemented.
Does this mean you can do operations on structs that live on the GPU hardware?
I really don't understand why a (very good) hardware provider is willing to create/direct custom software for its users.
Isn't this exactly what GPU firmware is expected to do? Why do they need to run software in the same memory space as my mail reader?
A pathetic attempt to lock developers into their hardware.
“Whenever a new major CUDA Compute Capability is released, the ABI is broken. A new NVIDIA C++ Standard Library ABI version is introduced and becomes the default and support for all older ABI versions is dropped.”
https://github.com/NVIDIA/libcudacxx/blob/main/docs/releases...