0. You're spot on: data centers almost exclusively run Linux.
1. CUDA runs well on Linux, but it's not really clear to me what you're asking in the second half of this question. CUDA is an SDK and driver stack that Nvidia produces so that developers can write and run software on its specialized hardware. You wouldn't really run anything on Nvidia's hardware without it, and you wouldn't really run anything independent of the OS either.
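To make the "you wouldn't run anything without it" point concrete, here's a small Python sketch that checks whether the CUDA driver stack is even visible on a machine, by looking for the `nvidia-smi` CLI that ships with Nvidia's driver (the `--query-gpu`/`--format` flags are real `nvidia-smi` options; the function name is just mine for illustration):

```python
import shutil
import subprocess

def cuda_gpus():
    """Return a list of GPU names reported by nvidia-smi, or None if
    the driver/CLI isn't installed (i.e. no CUDA stack present)."""
    if shutil.which("nvidia-smi") is None:
        return None
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=name", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    )
    return [line.strip() for line in out.stdout.splitlines() if line.strip()]

gpus = cuda_gpus()
print("no CUDA driver found" if gpus is None else gpus)
```

On a box without the Nvidia driver this prints the fallback message, which is exactly the situation where you couldn't run CUDA software on the hardware anyway.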
2. Yes, when deploying a new model in a datacenter setting you'd use a Linux terminal. Any custom tools used in a datacenter will almost always be command-line tools.
3. Data centers typically run CentOS, RHEL, Debian, or Ubuntu. CentOS if you're a large tech company; RHEL if you're not (e.g. you're a car company looking to offload some liability onto Red Hat when there are technical problems); Debian or Ubuntu if you're a tech startup. Big cloud providers typically offer their own distro, like Amazon Linux or Oracle Linux (both of which are based on RHEL), for ease of use and support, but you can run whatever distro you'd like. If you're running something in a container, you might use a more lightweight, security-focused distro like Alpine.
For an AI datacenter, I'd expect to see RHEL, Debian, or Ubuntu because Nvidia officially supports them. You'd need a very good reason to put resources into customizing a distro or running an oddball one.
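If you're ever unsure which of these distros a given box is actually running, every modern distro ships a standard `/etc/os-release` file you can parse; a minimal sketch (the function name is mine, but the file format and the `ID`/`PRETTY_NAME` keys are standardized):

```python
def os_release(path="/etc/os-release"):
    """Parse the standard os-release file into a dict of KEY -> value.
    Returns an empty dict where the file is absent (e.g. non-Linux)."""
    info = {}
    try:
        with open(path) as f:
            for line in f:
                line = line.strip()
                # Skip blanks/comments; entries look like ID="ubuntu"
                if line and not line.startswith("#") and "=" in line:
                    key, _, value = line.partition("=")
                    info[key] = value.strip('"')
    except FileNotFoundError:
        pass
    return info

rel = os_release()
print(rel.get("PRETTY_NAME", "unknown (not a Linux box?)"))
```

On Ubuntu this prints something like `Ubuntu 22.04.4 LTS`, and `ID` would be `ubuntu`, `rhel`, `debian`, `alpine`, etc.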
4. Working on anything in a datacenter is done through a terminal, but that doesn't mean the computer you're sitting at doesn't have a GUI. People sit at Linux, Windows, and Mac desktops with terminals connected to remote machines in the datacenter. Their interaction with the datacenter is CLI-only, but they still have a GUI because they have other things open, like a web browser, Slack, or an IDE.
Hope that helps!