Interesting factoid: a codec called TrueMotion, from the Duck Corporation, encoded cut scenes for a number of games in that period on PC, Sega Saturn, Dreamcast, and 3DO. The company later changed its name to On2 and developed the VP3/VP8 generation of codecs that were ancestors of AV1 (Google's open-source codec). (Disclaimer: I was founder/CTO.)
Off the top of my head, we did intro & cutscenes for these titles/platforms (this is a small subset):
Spycraft | Activision PC
Gex | Crystal Dynamics 3DO
The Horde | Crystal Dynamics 3DO
Street Fighter II | Sega
Final Fantasy 7 | Square (PC +/or PS2??)
There were at least 20 others
One key factor which many commenters seem to be oblivious to is NURBS vs. polygons. Maya in the 90s had excellent NURBS modeling tools, which made it particularly well suited for film work that required highly detailed organic models. 3D Studio Max, on the other hand, was all about polygons, which made it the weapon of choice for video game studios that had to squeeze every last bit of performance out of the game engines and PCs of that era. It wasn't until the mid-aughts, when Maya incorporated subdivision surfaces, that its use really took off in the games industry. Japan, as is often the case, was a bit of an outlier, as it was a pretty big market for Softimage/XSI. Valve being Valve was also a Softimage customer. They even partnered with Autodesk to release a custom XSI version specifically for modding their own games. https://developer.valvesoftware.com/wiki/Softimage_Mod_Tool
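To make that NURBS-vs-polygons point concrete, here's a rough sketch (plain Python, not any of the packages above; a cubic Bezier curve stands in for a NURBS curve, of which it is a special case): the same handful of control points can be tessellated as finely as a film shot needs at render time, while a 90s game pipeline baked a fixed low-poly approximation once and shipped that.

    # A Bezier curve stands in for a NURBS curve here (it is the special case
    # with uniform weights and a trivial knot vector). The key shared property:
    # the curve is parametric, so you can sample it at any density you like.
    def bezier_point(ctrl, t):
        """Evaluate a cubic Bezier curve at parameter t via de Casteljau."""
        pts = list(ctrl)
        while len(pts) > 1:
            pts = [((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
                   for (x0, y0), (x1, y1) in zip(pts, pts[1:])]
        return pts[0]

    # Four control points fully describe the shape -- that's the "model".
    control_points = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)]

    # Film pipeline: tessellate as densely as the shot requires, at render time.
    film_samples = [bezier_point(control_points, i / 199) for i in range(200)]

    # 90s game pipeline: bake a handful of vertices once and ship them;
    # the engine only ever sees this fixed low-poly approximation.
    game_polyline = [bezier_point(control_points, i / 7) for i in range(8)]

    print(len(film_samples), "render-time samples vs", len(game_polyline), "baked vertices")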
Something to consider generally about 90's game production is the interplay between the in-game assets, the cutscenes and the production models that were evolving at the time.
The software itself is a factor in this - and there was hardware, too: a lot of Silicon Graphics workstations were used in the mid-90s - but the device constraints at play dictated the idea of pre-rendering 3D assets for games like the Diablos and Resident Evils, which in turn made it easier to consider reusing them for FMV. That in turn produced the "parallel pipelines" mentioned in omershapira's comment whenever the engine was actually capable of 3D: often games were pitched to publishers by producing a cutscene trailer, and then the development team figured out what the engine tech could actually do as they went along. Because the in-game assets were still very basic and produced relatively cheaply given a design spec, this served a combination of development and marketing goals. Lara Croft got on all the magazine covers because of the high-poly CGI, not her in-game version.
(Why would publishers focus on assets? In this period, acquisition was extremely common as the industry got bigger and financing new projects got more risky, and so publishers gravitated towards a practice of shopping around for IP development and staffing at a low price. What they were banking on was not getting just one hit or a powerful engine, but a franchise that would sell well for years and experienced developers that they could put on their own projects. Likewise, studios were hungry to get publisher support and their heads often settled for an acquisition rather than shutting down. Focusing on asset production was a way of meeting in the middle, since the tech was so often an unknown; if you acquired a team that could make good assets, then plugged them in with an in-house tech team, a product could be made.)
According to a postmortem on Gamasutra: "Almost all of Diablo II's in-game and cinematic art was constructed and rendered in 3D Studio Max..." They then used RAD Game Tools' Bink to encode and optimize those renders. https://www.gamedeveloper.com/design/postmortem-blizzard-s-i...
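For a rough sense of that last step, here's a hedged sketch of the render-then-encode workflow. Bink's encoder is a proprietary RAD tool, so this stand-in shells out to ffmpeg instead (assumed to be installed), and the file names are made up; the point is just that the cinematic leaves the 3D package as a numbered frame sequence and ends up as a single video file the game streams at runtime.

    import subprocess

    def encode_frame_sequence(frame_pattern, audio_track, out_file, fps=24):
        """Encode a rendered image sequence plus an audio mix into one video file."""
        subprocess.run([
            "ffmpeg",
            "-framerate", str(fps),
            "-i", frame_pattern,   # e.g. "renders/cinematic_%04d.png"
            "-i", audio_track,     # e.g. "audio/cinematic_mix.wav"
            "-c:v", "libx264", "-crf", "18", "-pix_fmt", "yuv420p",
            "-c:a", "aac",
            out_file,              # e.g. "movies/act1_intro.mp4"
        ], check=True)

    # Hypothetical paths, purely for illustration.
    encode_frame_sequence("renders/cinematic_%04d.png",
                          "audio/cinematic_mix.wav",
                          "movies/act1_intro.mp4")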
On SGI it was TAV (The Advanced Visualizer) from Wavefront. Wavefront bought TDI, whose Explore package was used by a ton of studios. Wavefront then released Kinemation and Dynamation, which were both game changers for IK and particles. Softimage was also big, and was bought by Microsoft. Houdini was very powerful but expensive and hard to use. SGI bought Alias Research and Wavefront and merged them together; that's where Maya came from. Autodesk eventually got all of the Alias|Wavefront assets after SGI's first bankruptcy.
The same 3D-modelling and -animation software that's used today, and most of it has even survived in one way or another: 3DS Max, Maya (or its predecessors Wavefront and PowerAnimator), Houdini, Softimage, Cinema 4D, LightWave 3D, Real 3D, etc. In the early 90s the hardware would be either Silicon Graphics workstations or Amigas; by the end of the decade everything had moved to the PC.
Cutscene creation was usually outsourced to dedicated studios because it was completely disconnected from the actual game development process.
There's a great series from the Corridor team talking to the artists involved in just this topic
I remember Bink video was a fairly common tech for a number of years.
Total guess, but maybe 3DS Max?
FWIW, quite a few of the collaborators on Love Death + Robots come from the video game cinematic world, including Blur, the main producers of the series.
Blur, for one, heavily uses 3DS Max.
If the game studio made the cut scenes themselves, they usually used both Maya and 3DS Max.
If they were made externally by a 3rd-party production studio that also worked for TV and cinema, they probably used an SGI workstation running Maya, Softimage 3D, or LightWave.
Not 1990s but close: World of Warcraft cut scenes in 2002-2005 were rendered with 3DS Max using, I think, the Brazil R/S renderer, and for sure the Deadline render manager (as I sold them this). At some point around 2008 I think they switched to Maya and Arnold and Tractor?
Good question! In the 90s/00s it was quite visibly different from in-game material.
I remember 3D Studio (followed by 3ds Max) being talked about in the (then paper) press.
I wonder what they used for Witcher 1 (sorry, not really 90s, but an awesome intro).
I would love to read more about the Ninja Gaiden cutscenes.
3D Studio or LightWave 3D?
Blizzard was a Maya (and its predecessor PowerAnimator) shop in the 90s, and the RE3 cinematics were done with LightWave. LightWave pivoted from general CG to more of an architectural focus in the last decade.
[Disclaimer: I did not work in the video games industry in the 90s, but I did work in VFX studios hired to do cutscenes in the mid-00s]
Before engine-compatible assets were good enough to render in-engine, cinematics that involved character animation were authored via an entirely parallel path, using the same reference art, in a character animation program. Maya and 3D Studio Max were popular at the time, and most reference art and lookdev was done with them anyway, so they were a popular choice, and Softimage (RIP) was an artist favorite in offline animation too.
Sometimes making the cinematics wasn't a core competence of the studio working on the game, so VFX or animation studios would be contracted to do this. Often this meant the studio was already set up to work for TV or Film, so it was staffed such that a 3D render would reach a compositor relatively quickly, where many manual corrections would happen in 2D space. Compositing software has largely consolidated nowadays to the few survivors (Nuke, Flame mostly) but at the time there were many, like Combustion and Henry (Quantel), and even After Effects was used a bunch in places I worked.
To this day, not all engines love creating cinematics, because even if an external/offline renderer can do the full render (like in Unreal Engine), some engines don't support the animation systems required to do cinematics, or don't support them at a velocity artists like. In other words, the same software workflows are used today.