Amazon Titan Text Premier LLM with SOTA Common Sense Reasoning

  • UPDATE: As Onawa points out in his comment below, the OP does show benchmarks.

    ---

    I couldn't find any mention of model performance on standard benchmarks, nor any mention of model scale (number of parameters, MoE setup, etc.).

    How come? Does Amazon not want customers to know how much better/worse, or how much larger/smaller, this model is compared to other models, proprietary and open?